Sample records for point source method

  1. A NEW METHOD FOR FINDING POINT SOURCES IN HIGH-ENERGY NEUTRINO DATA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, Ke; Miller, M. Coleman

    The IceCube collaboration has reported the first detection of high-energy astrophysical neutrinos, including ∼50 high-energy starting events, but no individual sources have been identified. It is therefore important to develop the most sensitive and efficient possible algorithms to identify the point sources of these neutrinos. The most popular current method works by exploring a dense grid of possible directions to individual sources, and identifying the single direction with the maximum probability of having produced multiple detected neutrinos. This method has numerous strengths, but it is computationally intensive, and because it focuses on the single best location for a point source, additional point sources are not included in the evidence. We propose a new maximum likelihood method that uses the angular separations between all pairs of neutrinos in the data. Unlike existing autocorrelation methods for this type of analysis, which also use angular separations between neutrino pairs, our method incorporates information about the point-spread function and can identify individual point sources. We find that if the angular resolution is a few degrees or better, then this approach reduces both false positive and false negative errors compared to the current method, and is also more computationally efficient up to, potentially, hundreds of thousands of detected neutrinos.
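
    The following is a minimal numpy sketch of the core ingredient of this pair-based approach: computing all pairwise angular separations and scoring them under a two-component likelihood. The Gaussian/Rayleigh PSF model, the isotropic background density, and the signal fraction f_signal are illustrative assumptions, not the authors' implementation.

      import numpy as np

      def angular_separations(ra_deg, dec_deg):
          """All pairwise great-circle separations (radians) between events."""
          ra, dec = np.radians(ra_deg), np.radians(dec_deg)
          cos_sep = (np.sin(dec[:, None]) * np.sin(dec[None, :])
                     + np.cos(dec[:, None]) * np.cos(dec[None, :])
                     * np.cos(ra[:, None] - ra[None, :]))
          i, j = np.triu_indices(len(ra), k=1)
          return np.arccos(np.clip(cos_sep[i, j], -1.0, 1.0))

      def pair_log_likelihood(sep, sigma_psf, f_signal):
          # Signal pairs: two events from one source, each smeared by a Gaussian
          # PSF, so their separation is Rayleigh with scale sqrt(2)*sigma_psf.
          sig = sep / (2 * sigma_psf**2) * np.exp(-sep**2 / (4 * sigma_psf**2))
          # Background pairs: isotropic directions give p(sep) = sin(sep)/2.
          bkg = 0.5 * np.sin(sep)
          return np.sum(np.log(f_signal * sig + (1 - f_signal) * bkg))

      rng = np.random.default_rng(0)
      ra = rng.uniform(0.0, 360.0, 200)                      # mock events, degrees
      dec = np.degrees(np.arcsin(rng.uniform(-1.0, 1.0, 200)))
      sep = angular_separations(ra, dec)
      print(pair_log_likelihood(sep, sigma_psf=np.radians(1.0), f_signal=0.1))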

  2. Using CSLD Method to Calculate COD Pollution Load of Wei River Watershed above Huaxian Section, China

    NASA Astrophysics Data System (ADS)

    Zhu, Lei; Song, JinXi; Liu, WanQing

    2017-12-01

    Huaxian Section is the last hydrological and water quality monitoring section of the Weihe River Watershed. The Weihe River Watershed above Huaxian Section is taken as the research object in this paper, and COD is chosen as the water quality parameter. According to the discharge characteristics of point source and non-point source pollution, a new method for estimating pollution loads, the characteristic section load (CSLD) method, is suggested, and the point source and non-point source pollution loads of the Weihe River Watershed above Huaxian Section are calculated for the rainy, normal and dry seasons of 2007. The results show that the monthly point source pollution loads discharge stably, while the monthly non-point source pollution loads change greatly, and the non-point source proportion of the total COD pollution load decreases, in turn, over the normal, rainy and wet periods.
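
    The CSLD method itself is not detailed in this abstract; the numpy sketch below only illustrates the underlying premise that point sources discharge stably while non-point loads track runoff, assuming the dry-season months approximate the point-source baseline. All load values are invented.

      import numpy as np

      # Monthly COD loads (t/month) at a monitoring section; values are invented.
      monthly_load = np.array([820, 790, 850, 900, 1400, 2100,
                               2600, 2500, 1800, 1100, 880, 830])

      # Assumption: point sources discharge stably, so the dry-season months
      # (here Dec-Feb) approximate the point-source baseline.
      point_source = monthly_load[[0, 1, 11]].mean()
      non_point = np.maximum(monthly_load - point_source, 0.0)

      print(f"point-source load ~ {point_source:.0f} t/month")
      print(f"non-point share of annual load: {non_point.sum() / monthly_load.sum():.1%}")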

  3. Calculating NH3-N pollution load of Wei River watershed above Huaxian section using CSLD method

    NASA Astrophysics Data System (ADS)

    Zhu, Lei; Song, JinXi; Liu, WanQing

    2018-02-01

    Huaxian Section is the last hydrological and water quality monitoring section of the Weihe River Watershed, so it is taken as the research object in this paper, and NH3-N is chosen as the water quality parameter. According to the discharge characteristics of point source and non-point source pollution, a new method for estimating pollution loads, the characteristic section load (CSLD) method, is suggested, and the point source and non-point source pollution loads of the Weihe River Watershed above Huaxian Section are calculated for the rainy, normal and dry seasons of 2007. The results show that the monthly point source pollution loads discharge stably, while the monthly non-point source pollution loads change greatly. The non-point source proportion of the total NH3-N pollution load decreases, in turn, over the normal, rainy and wet periods.

  4. Determination of efficiency of an aged HPGe detector for gaseous sources by self absorption correction and point source methods

    NASA Astrophysics Data System (ADS)

    Sarangapani, R.; Jose, M. T.; Srinivasan, T. K.; Venkatraman, B.

    2017-07-01

    Methods for the determination of the efficiency of an aged high purity germanium (HPGe) detector for gaseous sources are presented in this paper. X-ray radiography of the detector has been performed to obtain the detector dimensions for computational purposes. The dead layer thickness of the HPGe detector has been ascertained from experiments and Monte Carlo computations. Experimental work with standard point and liquid sources in several cylindrical geometries has been undertaken to obtain the energy-dependent efficiency. Monte Carlo simulations have been performed to compute efficiencies for point, liquid and gaseous sources. Self-absorption correction factors have been obtained using mathematical equations for volume sources and MCNP simulations. Self-absorption correction and point source methods have been used to estimate the efficiency for gaseous sources. The efficiencies determined in the present work have been used to estimate the activity of a cover gas sample from a fast reactor.
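
    A hedged sketch of the point source method with self-absorption correction as described: the gaseous-source efficiency is approximated as the point-source efficiency scaled by a mean-transmission factor over the source volume. The slab-geometry factor and all numbers below are assumptions for illustration.

      import numpy as np

      def self_absorption_factor(mu_lin, thickness_cm):
          """Mean transmission through a uniform slab source with linear
          attenuation coefficient mu (1/cm): (1 - exp(-mu*t)) / (mu*t)."""
          x = mu_lin * thickness_cm
          return (1.0 - np.exp(-x)) / x

      def gas_efficiency(point_efficiency, mu_gas, thickness_cm):
          # Point-source efficiency rescaled by the volume-source
          # self-absorption correction.
          return point_efficiency * self_absorption_factor(mu_gas, thickness_cm)

      # Example (invented numbers): 662 keV line, nearly transparent gas column.
      print(gas_efficiency(point_efficiency=2.1e-3, mu_gas=1e-4, thickness_cm=10.0))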

  5. Vector image method for the derivation of elastostatic solutions for point sources in a plane layered medium. Part 1: Derivation and simple examples

    NASA Technical Reports Server (NTRS)

    Fares, Nabil; Li, Victor C.

    1986-01-01

    An image method algorithm is presented for the derivation of elastostatic solutions for point sources in bonded half-spaces, assuming the infinite-space point source solution is known. Specific cases are worked out and shown to coincide with well-known solutions in the literature.

  6. A Method for Identifying Pollution Sources of Heavy Metals and PAH for a Risk-Based Management of a Mediterranean Harbour

    PubMed Central

    Moranda, Arianna

    2017-01-01

    A procedure for assessing harbour pollution by heavy metals and PAH and the possible sources of contamination is proposed. The procedure is based on a ratio-matching method applied to the results of principal component analysis (PCA), and it allows discrimination between point and nonpoint sources. The approach can be adopted when many sources of pollution, both internal and external but close to the harbour, can contribute within a very narrow coastal ecosystem, and it was used to identify the possible point sources of contamination in a Mediterranean harbour (Port of Vado, Savona, Italy). 235 sediment samples were collected at 81 sampling points during four monitoring campaigns, and 28 chemicals were searched for within the collected samples. PCA of all samples allowed the assessment of 8 main possible point sources, while the refining ratio-matching identified 1 sampling point as a possible PAH source, 2 sampling points as Cd point sources, and 3 sampling points as C > 12 point sources. Through a map analysis it was possible to identify two internal sources of pollution directly related to terminal activities. The study is the continuation of a previous work aimed at assessing Savona-Vado Harbour pollution levels, and it suggests strategies to regulate the harbour activities. PMID:29270328

  7. A Method for Identifying Pollution Sources of Heavy Metals and PAH for a Risk-Based Management of a Mediterranean Harbour.

    PubMed

    Paladino, Ombretta; Moranda, Arianna; Seyedsalehi, Mahdi

    2017-01-01

    A procedure for assessing harbour pollution by heavy metals and PAH and the possible sources of contamination is proposed. The procedure is based on a ratio-matching method applied to the results of principal component analysis (PCA), and it allows discrimination between point and nonpoint sources. The approach can be adopted when many sources of pollution, both internal and external but close to the harbour, can contribute within a very narrow coastal ecosystem, and it was used to identify the possible point sources of contamination in a Mediterranean harbour (Port of Vado, Savona, Italy). 235 sediment samples were collected at 81 sampling points during four monitoring campaigns, and 28 chemicals were searched for within the collected samples. PCA of all samples allowed the assessment of 8 main possible point sources, while the refining ratio-matching identified 1 sampling point as a possible PAH source, 2 sampling points as Cd point sources, and 3 sampling points as C > 12 point sources. Through a map analysis it was possible to identify two internal sources of pollution directly related to terminal activities. The study is the continuation of a previous work aimed at assessing Savona-Vado Harbour pollution levels, and it suggests strategies to regulate the harbour activities.

  8. PSFGAN: a generative adversarial network system for separating quasar point sources and host galaxy light

    NASA Astrophysics Data System (ADS)

    Stark, Dominic; Launet, Barthelemy; Schawinski, Kevin; Zhang, Ce; Koss, Michael; Turp, M. Dennis; Sartori, Lia F.; Zhang, Hantian; Chen, Yiru; Weigel, Anna K.

    2018-06-01

    The study of unobscured active galactic nuclei (AGN) and quasars depends on the reliable decomposition of the light from the AGN point source and the extended host galaxy light. The problem is typically approached using parametric fitting routines using separate models for the host galaxy and the point spread function (PSF). We present a new approach using a Generative Adversarial Network (GAN) trained on galaxy images. We test the method using Sloan Digital Sky Survey r-band images with artificial AGN point sources added that are then removed using the GAN and with parametric methods using GALFIT. When the AGN point source is more than twice as bright as the host galaxy, we find that our method, PSFGAN, can recover point source and host galaxy magnitudes with smaller systematic error and a lower average scatter (49 per cent). PSFGAN is more tolerant to poor knowledge of the PSF than parametric methods. Our tests show that PSFGAN is robust against a broadening in the PSF width of ± 50 per cent if it is trained on multiple PSFs. We demonstrate that while a matched training set does improve performance, we can still subtract point sources using a PSFGAN trained on non-astronomical images. While initial training is computationally expensive, evaluating PSFGAN on data is more than 40 times faster than GALFIT fitting two components. Finally, PSFGAN is more robust and easy to use than parametric methods as it requires no input parameters.

  9. Distributed optimization system and method

    DOEpatents

    Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

    2003-06-10

    A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.

  10. Innovations in the Analysis of Chandra-ACIS Observations

    NASA Astrophysics Data System (ADS)

    Broos, Patrick S.; Townsley, Leisa K.; Feigelson, Eric D.; Getman, Konstantin V.; Bauer, Franz E.; Garmire, Gordon P.

    2010-05-01

    As members of the instrument team for the Advanced CCD Imaging Spectrometer (ACIS) on NASA's Chandra X-ray Observatory and as Chandra General Observers, we have developed a wide variety of data analysis methods that we believe are useful to the Chandra community, and have constructed a significant body of publicly available software (the ACIS Extract package) addressing important ACIS data and science analysis tasks. This paper seeks to describe these data analysis methods for two purposes: to document the data analysis work performed in our own science projects and to help other ACIS observers judge whether these methods may be useful in their own projects (regardless of what tools and procedures they choose to implement those methods). The ACIS data analysis recommendations we offer here address much of the workflow in a typical ACIS project, including data preparation, point source detection via both wavelet decomposition and image reconstruction, masking point sources, identification of diffuse structures, event extraction for both point and diffuse sources, merging extractions from multiple observations, nonparametric broadband photometry, analysis of low-count spectra, and automation of these tasks. Many of the innovations presented here arise from several, often interwoven, complications that are found in many Chandra projects: large numbers of point sources (hundreds to several thousand), faint point sources, misaligned multiple observations of an astronomical field, point source crowding, and scientifically relevant diffuse emission.

  11. A selective array activation method for the generation of a focused source considering listening position.

    PubMed

    Song, Min-Ho; Choi, Jung-Woo; Kim, Yang-Hann

    2012-02-01

    A focused source can provide an auditory illusion of a virtual source placed between the loudspeaker array and the listener. When a focused source is generated by a time-reversed acoustic focusing solution, its use as a virtual source is limited due to artifacts caused by convergent waves traveling towards the focusing point. This paper proposes an array activation method to reduce the artifacts for a selected listening point inside an array of arbitrary shape. Results show that the energy of the convergent waves can be reduced by up to 60 dB over a large region including the selected listening point.

  12. An efficient method to compute microlensed light curves for point sources

    NASA Technical Reports Server (NTRS)

    Witt, Hans J.

    1993-01-01

    We present a method to compute microlensed light curves for point sources. This method has the general advantage that all microimages contributing to the light curve are found. While a source moves along a straight line, all microimages are located either on the primary image track or on the secondary image tracks (loops). The primary image track extends from -infinity to +infinity and is made of many segments which are continuously connected. All the secondary image tracks (loops) begin and end on the lensing point masses. The method can be applied to any microlensing situation with point masses in the deflector plane, even for the overcritical case and surface densities close to the critical value. Furthermore, we present general rules to evaluate the light curve for a straight track arbitrarily placed in the caustic network of a sample of many point masses.

  13. A source-attractor approach to network detection of radiation sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Qishi; Barry, M. L.; Grieme, M.

    Radiation source detection using a network of detectors is an active field of research for homeland security and defense applications. We propose the Source-attractor Radiation Detection (SRD) method to aggregate measurements from a network of detectors for radiation source detection. The SRD method models a potential radiation source as a magnet-like attractor that pulls in pre-computed virtual points from the detector locations. A detection decision is made if a sufficient level of attraction, quantified by the increase in the clustering of the shifted virtual points, is observed. Compared with traditional methods, SRD has the following advantages: i) it does not require an accurate estimate of the source location from limited and noise-corrupted sensor readings, unlike the localization-based methods, and ii) its virtual point shifting and clustering calculation involve simple arithmetic operations based on the number of detectors, avoiding the high computational complexity of grid-based likelihood estimation methods. We evaluate its detection performance using canonical datasets from the Domestic Nuclear Detection Office's (DNDO) Intelligence Radiation Sensors Systems (IRSS) tests. SRD achieves both a lower false alarm rate and a lower false negative rate compared to three existing algorithms for network source detection.
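
    The published SRD details are not reproduced in this abstract, so the following numpy sketch is only a loose illustration of the stated idea: virtual points at the detector locations are pulled toward a candidate attractor in proportion to their excess counts, and detection is scored by the increase in clustering. The pull rule and the clustering measure here are assumptions.

      import numpy as np

      def srd_score(det_xy, counts, background):
          """Shift each detector's virtual point toward the count-weighted
          centroid, with stronger pull for stronger excess counts, and score
          the increase in clustering (drop in mean pairwise distance)."""
          excess = np.maximum(counts - background, 0.0)
          pull = excess / (excess + background)         # 0..1 attraction strength
          centroid = np.average(det_xy, axis=0, weights=counts)
          shifted = det_xy + pull[:, None] * (centroid - det_xy)

          def mean_pairwise(pts):
              d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
              i, j = np.triu_indices(len(pts), k=1)
              return d[i, j].mean()

          return 1.0 - mean_pairwise(shifted) / mean_pairwise(det_xy)

      det_xy = np.array([[0, 0], [10, 0], [0, 10], [10, 10]], float)
      counts = np.array([120, 95, 60, 55], float)       # invented readings
      print(srd_score(det_xy, counts, background=50.0)) # near 0 => no source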

  14. Nonstationary time series analysis of surface water microbial pathogen population dynamics using cointegration methods

    EPA Science Inventory

    Background/Question/Methods Bacterial pathogens in surface water present disease risks to aquatic communities and for human recreational activities. Sources of these pathogens include runoff from urban, suburban, and agricultural point and non-point sources, but hazardous micr...

  15. Lung motion estimation using dynamic point shifting: An innovative model based on a robust point matching algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yi, Jianbing, E-mail: yijianbing8@163.com; Yang, Xuan, E-mail: xyang0520@263.net; Li, Yan-Ran, E-mail: lyran@szu.edu.cn

    2015-10-15

    Purpose: Image-guided radiotherapy is an advanced 4D radiotherapy technique that has been developed in recent years. However, respiratory motion causes significant uncertainties in image-guided radiotherapy procedures. To address these issues, an innovative lung motion estimation model based on a robust point matching is proposed in this paper. Methods: An innovative robust point matching algorithm using dynamic point shifting is proposed to estimate patient-specific lung motion during free breathing from 4D computed tomography data. The correspondence of the landmark points is determined from the Euclidean distance between the landmark points and the similarity between the local images that are centered at points at the same time. To ensure that the points in the source image correspond to the points in the target image during other phases, the virtual target points are first created and shifted based on the similarity between the local image centered at the source point and the local image centered at the virtual target point. Second, the target points are shifted by the constrained inverse function mapping the target points to the virtual target points. The source point set and shifted target point set are used to estimate the transformation function between the source image and target image. Results: The performances of the authors’ method are evaluated on two publicly available DIR-lab and POPI-model lung datasets. For computing target registration errors on 750 landmark points in six phases of the DIR-lab dataset and 37 landmark points in ten phases of the POPI-model dataset, the mean and standard deviation by the authors’ method are 1.11 and 1.11 mm, but they are 2.33 and 2.32 mm without considering image intensity, and 1.17 and 1.19 mm with sliding conditions. For the two phases of maximum inhalation and maximum exhalation in the DIR-lab dataset with 300 landmark points of each case, the mean and standard deviation of target registration errors on the 3000 landmark points of ten cases by the authors’ method are 1.21 and 1.04 mm. In the EMPIRE10 lung registration challenge, the authors’ method ranks 24 of 39. According to the index of the maximum shear stretch, the authors’ method is also efficient to describe the discontinuous motion at the lung boundaries. Conclusions: By establishing the correspondence of the landmark points in the source phase and the other target phases combining shape matching and image intensity matching together, the mismatching issue in the robust point matching algorithm is adequately addressed. The target registration errors are statistically reduced by shifting the virtual target points and target points. The authors’ method with consideration of sliding conditions can effectively estimate the discontinuous motion, and the estimated motion is natural. The primary limitation of the proposed method is that the temporal constraints of the trajectories of voxels are not introduced into the motion model. However, the proposed method provides satisfactory motion information, which results in precise tumor coverage by the radiation dose during radiotherapy.

  16. Analysis of point source size on measurement accuracy of lateral point-spread function of confocal Raman microscopy

    NASA Astrophysics Data System (ADS)

    Fu, Shihang; Zhang, Li; Hu, Yao; Ding, Xiang

    2018-01-01

    Confocal Raman Microscopy (CRM) has matured to become one of the most powerful instruments in analytical science because of its molecular sensitivity and high spatial resolution. Compared with conventional Raman microscopy, CRM can perform three-dimensional mapping of tiny samples and achieves high spatial resolution thanks to its unique pinhole. With the wide application of the instrument, there is a growing requirement for evaluating the imaging performance of the system. The point-spread function (PSF) is an important approach to evaluating the imaging capability of an optical instrument. Among the various methods for measuring the PSF, the point source method has been widely used because it is easy to operate and the measurement results approximate the true PSF. In the point source method, the point source size has a significant impact on the final measurement accuracy. In this paper, the influence of point source size on the measurement accuracy of the PSF is analyzed and verified experimentally. A theoretical model of the lateral PSF for CRM is established and the effect of point source size on the full-width at half maximum (FWHM) of the lateral PSF is simulated. For long-term preservation and measurement convenience, a PSF measurement phantom using polydimethylsiloxane resin doped with different sizes of polystyrene microspheres is designed. The PSFs of the CRM with different sizes of microspheres are measured and the results are compared with the simulation results. The results provide a guide for measuring the PSF of a CRM.
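
    A small numpy sketch of the effect the paper analyzes: the measured lateral profile is the true PSF convolved with the finite source (microsphere) profile, so larger sources inflate the apparent FWHM. The Gaussian PSF width and microsphere diameters are invented for illustration.

      import numpy as np

      x = np.linspace(-3.0, 3.0, 2001)              # lateral position, um
      dx = x[1] - x[0]
      true_psf = np.exp(-x**2 / (2 * 0.25**2))      # assumed sigma = 0.25 um

      for d in (0.1, 0.5, 1.0):                     # microsphere diameters, um
          source = (np.abs(x) <= d / 2).astype(float)
          measured = np.convolve(true_psf, source, mode="same")
          # Coarse FWHM: count samples above half of the peak.
          fwhm = dx * np.count_nonzero(measured >= measured.max() / 2)
          print(f"source {d:.1f} um -> measured FWHM {fwhm:.3f} um")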

  17. Source splitting via the point source method

    NASA Astrophysics Data System (ADS)

    Potthast, Roland; Fazi, Filippo M.; Nelson, Philip A.

    2010-04-01

    We introduce a new algorithm for source identification and field splitting based on the point source method (Potthast 1998 A point-source method for inverse acoustic and electromagnetic obstacle scattering problems IMA J. Appl. Math. 61 119-40; Potthast R 1996 A fast new method to solve inverse scattering problems Inverse Problems 12 731-42). The task is to separate the sound fields $u_j$, $j = 1, \ldots, n$ of $n \in \mathbb{N}$ sound sources supported in different bounded domains $G_1, \ldots, G_n$ in $\mathbb{R}^3$ from measurements of the field on some microphone array, that is, from the knowledge of the sum of the fields $u = u_1 + \cdots + u_n$ on some open subset $\Lambda$ of a plane. The main idea of the scheme is to calculate filter functions $g_1, \ldots, g_n$ to construct $u_\ell$ for $\ell = 1, \ldots, n$ from $u|_\Lambda$ in the form $u_\ell(x) = \int_\Lambda g_{\ell,x}(y)\, u(y)\, \mathrm{d}s(y)$, $\ell = 1, \ldots, n$. We provide the complete mathematical theory for the field splitting via the point source method; in particular, we describe uniqueness, solvability of the problem, and convergence and stability of the algorithm. In the second part we describe the practical realization of the splitting for real data measurements carried out at the Institute of Sound and Vibration Research at Southampton, UK. A practical demonstration of the original recording and the splitting results for real data is available online.
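
    Once the filter functions $g_\ell$ are available, the splitting formula is just a quadrature over the microphone plane. A minimal numpy sketch follows, with a random placeholder standing in for the point-source-method filters (computing them is the substance of the paper and is not shown); the array sizes and weights are assumptions.

      import numpy as np

      # Quadrature version of the splitting formula:
      #   u_l(x) ~= sum_k w_k * g_l[x, y_k] * u(y_k)
      # g_filter is assumed precomputed by the point source method; here it is
      # a random placeholder just to exercise the arithmetic.
      n_mic, n_field = 64, 10          # microphones on Lambda, evaluation points
      rng = np.random.default_rng(1)
      u_measured = rng.standard_normal(n_mic) + 1j * rng.standard_normal(n_mic)
      g_filter = rng.standard_normal((n_field, n_mic)) * (1 + 0j)
      weights = np.full(n_mic, 0.02)   # quadrature weights (uniform grid, ds = 0.02)

      u_split = g_filter @ (weights * u_measured)   # field of one source, u_l
      print(u_split.shape)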

  18. Multiple window spatial registration error of a gamma camera: 133Ba point source as a replacement of the NEMA procedure.

    PubMed

    Bergmann, Helmar; Minear, Gregory; Raith, Maria; Schaffarich, Peter M

    2008-12-09

    The accuracy of multiple window spatial registration characterises the performance of a gamma camera for dual isotope imaging. In the present study we investigate an alternative method to the standard NEMA procedure for measuring this performance parameter. A long-lived 133Ba point source, with gamma energies close to those of 67Ga, and a single-bore lead collimator were used to measure the multiple window spatial registration error. Calculation of the positions of the point source in the images used the NEMA algorithm. The results were validated against the values obtained by the standard NEMA procedure, which uses a collimated liquid 67Ga source. Of the source-collimator configurations under investigation, an optimum collimator geometry, consisting of a 5 mm thick lead disk with a diameter of 46 mm and a 5 mm central bore, was selected. The multiple window spatial registration errors obtained by the 133Ba method showed excellent reproducibility (standard deviation < 0.07 mm). The values were compared with the results from the NEMA procedure obtained at the same locations and showed small differences, with a correlation coefficient of 0.51 (p < 0.05). In addition, the 133Ba point source method proved to be much easier to use. A Bland-Altman analysis showed that the 133Ba and 67Ga methods can be used interchangeably. The 133Ba point source method measures the multiple window spatial registration error with essentially the same accuracy as the NEMA-recommended procedure, but is easier and safer to use and has the potential to replace the current standard procedure.
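
    A numpy sketch of the measurement itself: count-weighted centroids of the point-source image are computed in each energy window (in the spirit of the NEMA algorithm) and the multiple window spatial registration error is their displacement. The blob images and pixel size below are mock values.

      import numpy as np

      def centroid(image):
          """Count-weighted centroid (x, y) of a point-source image."""
          yy, xx = np.indices(image.shape)
          total = image.sum()
          return np.array([(xx * image).sum(), (yy * image).sum()]) / total

      def mwsr_error(img_window_a, img_window_b, pixel_mm):
          """Displacement between point-source centroids in two energy
          windows, converted to mm."""
          return np.linalg.norm(centroid(img_window_a)
                                - centroid(img_window_b)) * pixel_mm

      # Mock 2D Gaussian blobs standing in for two 133Ba photopeak windows.
      yy, xx = np.indices((64, 64))
      blob = lambda cx, cy: np.exp(-((xx - cx)**2 + (yy - cy)**2) / 8.0)
      print(mwsr_error(blob(32.0, 32.0), blob(32.3, 31.9), pixel_mm=2.4))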

  19. Lung motion estimation using dynamic point shifting: An innovative model based on a robust point matching algorithm.

    PubMed

    Yi, Jianbing; Yang, Xuan; Chen, Guoliang; Li, Yan-Ran

    2015-10-01

    Image-guided radiotherapy is an advanced 4D radiotherapy technique that has been developed in recent years. However, respiratory motion causes significant uncertainties in image-guided radiotherapy procedures. To address these issues, an innovative lung motion estimation model based on a robust point matching is proposed in this paper. An innovative robust point matching algorithm using dynamic point shifting is proposed to estimate patient-specific lung motion during free breathing from 4D computed tomography data. The correspondence of the landmark points is determined from the Euclidean distance between the landmark points and the similarity between the local images that are centered at points at the same time. To ensure that the points in the source image correspond to the points in the target image during other phases, the virtual target points are first created and shifted based on the similarity between the local image centered at the source point and the local image centered at the virtual target point. Second, the target points are shifted by the constrained inverse function mapping the target points to the virtual target points. The source point set and shifted target point set are used to estimate the transformation function between the source image and target image. The performances of the authors' method are evaluated on two publicly available DIR-lab and POPI-model lung datasets. For computing target registration errors on 750 landmark points in six phases of the DIR-lab dataset and 37 landmark points in ten phases of the POPI-model dataset, the mean and standard deviation by the authors' method are 1.11 and 1.11 mm, but they are 2.33 and 2.32 mm without considering image intensity, and 1.17 and 1.19 mm with sliding conditions. For the two phases of maximum inhalation and maximum exhalation in the DIR-lab dataset with 300 landmark points of each case, the mean and standard deviation of target registration errors on the 3000 landmark points of ten cases by the authors' method are 1.21 and 1.04 mm. In the EMPIRE10 lung registration challenge, the authors' method ranks 24 of 39. According to the index of the maximum shear stretch, the authors' method is also efficient to describe the discontinuous motion at the lung boundaries. By establishing the correspondence of the landmark points in the source phase and the other target phases combining shape matching and image intensity matching together, the mismatching issue in the robust point matching algorithm is adequately addressed. The target registration errors are statistically reduced by shifting the virtual target points and target points. The authors' method with consideration of sliding conditions can effectively estimate the discontinuous motion, and the estimated motion is natural. The primary limitation of the proposed method is that the temporal constraints of the trajectories of voxels are not introduced into the motion model. However, the proposed method provides satisfactory motion information, which results in precise tumor coverage by the radiation dose during radiotherapy.

  20. DNA BASED MOLECULAR METHODS FOR BACTERIAL SOURCE TRACKING IN WATERSHEDS

    EPA Science Inventory

    Point and non-point pollution sources of fecal pollution on a watershed adversely impact the quality of drinking source waters and recreational waters. States are required to develop total maximum daily loads (TMDLs) and devise best management practices (BMPs) to reduce the po...

  1. Measuring Spatial Variability of Vapor Flux to Characterize Vadose-zone VOC Sources: Flow-cell Experiments

    DOE PAGES

    Mainhagu, Jon; Morrison, C.; Truex, Michael J.; ...

    2014-08-05

    A method termed vapor-phase tomography has recently been proposed to characterize the distribution of volatile organic contaminant mass in vadose-zone source areas, and to measure associated three-dimensional distributions of local contaminant mass discharge. The method is based on measuring the spatial variability of vapor flux, and thus inherent to its effectiveness is the premise that the magnitudes and temporal variability of vapor concentrations measured at different monitoring points within the interrogated area will be a function of the geospatial positions of the points relative to the source location. A series of flow-cell experiments was conducted to evaluate this premise. A well-defined source zone was created by injection and extraction of a non-reactive gas (SF6). Spatial and temporal concentration distributions obtained from the tests were compared to simulations produced with a mathematical model describing advective and diffusive transport. Tests were conducted to characterize both areal and vertical components of the application. Decreases in concentration over time were observed for monitoring points located on the opposite side of the source zone from the local extraction point, whereas increases were observed for monitoring points located between the local extraction point and the source zone. The results illustrate that comparison of temporal concentration profiles obtained at various monitoring points gives a general indication of the source location with respect to the extraction and monitoring points.

  2. TE/TM decomposition of electromagnetic sources

    NASA Technical Reports Server (NTRS)

    Lindell, Ismo V.

    1988-01-01

    Three methods are given by which bounded EM sources can be decomposed into two parts radiating transverse electric (TE) and transverse magnetic (TM) fields with respect to a given constant direction in space. The theory applies source equivalence and nonradiating source concepts, which lead to decomposition methods based on a recursive formula or two differential equations for the determination of the TE and TM components of the original source. Decompositions for a dipole in terms of point, line, and plane sources are studied in detail. The planar decomposition is seen to match an earlier result given by Clemmow (1963). As an application of the point decomposition method, it is demonstrated that the general exact image expression for the Sommerfeld half-space problem, previously derived through heuristic reasoning, can be obtained more straightforwardly through the present decomposition method.

  3. An improved DPSM technique for modelling ultrasonic fields in cracked solids

    NASA Astrophysics Data System (ADS)

    Banerjee, Sourav; Kundu, Tribikram; Placko, Dominique

    2007-04-01

    In recent years the Distributed Point Source Method (DPSM) has been used for modelling various ultrasonic, electrostatic and electromagnetic field problems. In conventional DPSM several point sources are placed near the transducer face, interfaces and anomaly boundaries. The ultrasonic or electromagnetic field at any point is computed by superimposing the contributions of the different layers of strategically placed point sources. The conventional DPSM modelling technique is modified in this paper so that the contributions of the point sources in the shadow region can be removed from the calculations. For this purpose the conventional point sources that radiate in all directions are replaced by Controlled Space Radiation (CSR) sources. CSR sources can mitigate the shadow region problem to some extent; complete removal of the problem is achieved by introducing artificial interfaces. Numerically synthesized fields obtained by the conventional DPSM technique, which gives no special consideration to the point sources in the shadow region, are compared with those from the proposed modified technique, which nullifies their contributions. One application of this research is the improved modelling of real-time ultrasonic non-destructive evaluation experiments.
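
    The DPSM synthesis step is a superposition of point-source (spherical-wave) contributions. A minimal numpy sketch under assumed values (uniform source strengths, a 2 MHz beam in water); the CSR modification and the shadow-region bookkeeping described in the paper are not shown.

      import numpy as np

      def dpsm_field(field_pts, src_pts, src_strengths, k):
          """Superpose spherical waves exp(ikr)/(4*pi*r) from distributed
          point sources -- the basic DPSM synthesis step."""
          r = np.linalg.norm(field_pts[:, None, :] - src_pts[None, :, :], axis=-1)
          green = np.exp(1j * k * r) / (4 * np.pi * r)
          return green @ src_strengths

      # Point sources just behind a 10 mm "transducer face" at z = -0.5 mm.
      src_x = np.linspace(-5e-3, 5e-3, 21)
      src_pts = np.stack([src_x, np.zeros_like(src_x),
                          np.full_like(src_x, -0.5e-3)], axis=1)
      strengths = np.ones(21, complex)                # uniform excitation (assumed)
      targets = np.array([[0.0, 0.0, z] for z in np.linspace(1e-3, 30e-3, 5)])
      k = 2 * np.pi * 2e6 / 1500.0                    # 2 MHz in water, c = 1500 m/s
      print(np.abs(dpsm_field(targets, src_pts, strengths, k)))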

  4. Processing Uav and LIDAR Point Clouds in Grass GIS

    NASA Astrophysics Data System (ADS)

    Petras, V.; Petrasova, A.; Jeziorska, J.; Mitasova, H.

    2016-06-01

    Today's methods of acquiring Earth surface data, namely lidar and unmanned aerial vehicle (UAV) imagery, non-selectively collect or generate large amounts of points. Point clouds from different sources vary in their properties such as number of returns, density, or quality. We present a set of tools with applications for different types of point clouds obtained by a lidar scanner, the structure from motion (SfM) technique, and a low-cost 3D scanner. To take advantage of the vertical structure of multiple-return lidar point clouds, we demonstrate tools to process them using 3D raster techniques which allow, for example, the development of custom vegetation classification methods. Dense point clouds obtained from UAV imagery, often containing redundant points, can be decimated using various techniques before further processing. We implemented and compared several decimation techniques with regard to their performance and the final digital surface model (DSM). Finally, we describe the processing of a point cloud from a low-cost 3D scanner, namely the Microsoft Kinect, and its application for interaction with physical models. All the presented tools are open source and integrated in GRASS GIS, a multi-purpose open source GIS with remote sensing capabilities. The tools integrate with other open source projects, specifically the Point Data Abstraction Library (PDAL), the Point Cloud Library (PCL), and the OpenKinect libfreenect2 library, to benefit from the open source point cloud ecosystem. The implementation in GRASS GIS ensures long-term maintenance and reproducibility by the scientific community as well as by the original authors themselves.
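
    As a generic illustration of one of the decimation techniques mentioned (not the actual GRASS GIS implementation), the numpy sketch below thins a dense cloud by keeping the first point that falls in each 2D grid cell:

      import numpy as np

      def grid_decimate(points, cell_size):
          """Keep one point per 2D grid cell -- a simple count-based decimation
          akin to the thinning applied to dense UAV/SfM clouds."""
          cells = np.floor(points[:, :2] / cell_size).astype(np.int64)
          _, keep = np.unique(cells, axis=0, return_index=True)
          return points[np.sort(keep)]

      pts = np.random.default_rng(2).uniform(0, 100, size=(100_000, 3))
      thinned = grid_decimate(pts, cell_size=1.0)
      print(len(pts), "->", len(thinned))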

  5. The effect of tandem-ovoid titanium applicator on points A, B, bladder, and rectum doses in gynecological brachytherapy using 192Ir

    PubMed Central

    Sadeghi, Mohammad Hosein; Mehdizadeh, Amir; Faghihi, Reza; Moharramzadeh, Vahed; Meigooni, Ali Soleimani

    2018-01-01

    Purpose: The dosimetry procedure by simple superposition accounts only for the self-shielding of the source and does not take into account the attenuation of photons by the applicators. The purpose of this investigation is an estimation of the effects of the tandem and ovoid applicator on dose distribution inside the phantom by MCNP5 Monte Carlo simulations. Material and methods: In this study, the superposition method is used for obtaining the dose distribution in the phantom without using the applicator for a typical gynecological brachytherapy (superposition-1). Then, the sources are simulated inside the tandem and ovoid applicator to identify the effect of applicator attenuation (superposition-2), and the doses at points A, B, bladder, and rectum were compared with the results of superposition. The exact dwell positions and times of the source and the positions of the dosimetry points were determined from the images and treatment data of an adult woman patient from a cancer center. The MCNP5 Monte Carlo (MC) code was used for simulation of the phantoms, applicators, and sources. Results: The results of this study showed no significant differences between the results of the superposition method and the MC simulations for different dosimetry points. The difference at all important dosimetry points was found to be less than 5%. Conclusions: According to the results, applicator attenuation has no significant effect on the calculated point doses; the superposition method, adding the dose of each source obtained by the MC simulation, can estimate the dose to points A, B, bladder, and rectum with good accuracy. PMID:29619061
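
    A loose numpy sketch of the superposition idea, adding per-dwell point-source contributions with a bare inverse-square kernel. The TG-43 radial dose and anisotropy functions are omitted, and the source strength, dose-rate constant, and plan numbers are invented.

      import numpy as np

      def superposed_dose(point, dwell_pos, dwell_times, sk, dose_rate_const):
          """Superposition: sum inverse-square point-source contributions over
          all dwell positions (TG-43 radial/anisotropy factors omitted)."""
          r = np.linalg.norm(dwell_pos - point, axis=1)        # cm
          dose_rate = sk * dose_rate_const / r**2              # cGy/h per dwell
          return np.sum(dose_rate * dwell_times / 3600.0)      # dwell times in s

      dwells = np.array([[0.0, 0.0, z] for z in np.arange(0.0, 2.5, 0.5)])  # tandem
      times = np.full(len(dwells), 20.0)         # seconds (invented plan)
      point_A = np.array([2.0, 0.0, 0.0])        # 2 cm lateral, classic point A
      print(superposed_dose(point_A, dwells, times,
                            sk=40_000.0, dose_rate_const=1.11))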

  6. Error analysis in stereo vision for location measurement of 3D point

    NASA Astrophysics Data System (ADS)

    Li, Yunting; Zhang, Jun; Tian, Jinwen

    2015-12-01

    Location measurement of a 3D point in stereo vision is subject to different sources of uncertainty that propagate to the final result. Most current methods of error analysis are based on an ideal intersection model that calculates the uncertainty region of the point location by intersecting the two fields of view of a pixel, which may produce loose bounds. Besides, only a few sources of error, such as pixel error or camera position, are taken into account in the analysis. In this paper we present a straightforward and usable method to estimate the location error that takes most sources of error into account. We sum up and simplify all the input errors to five parameters by a rotation transformation. Then we use a fast midpoint-method algorithm to deduce the mathematical relationships between the target point and these parameters. Thus, the expectation and covariance matrix of the 3D point location are obtained, which constitute the uncertainty region of the point location. Afterwards, we turn to the error propagation of the primitive input errors in the stereo system, covering the whole analysis process from primitive input errors to localization error. Our method has the same level of computational complexity as the state-of-the-art method. Finally, extensive experiments are performed to verify the performance of our method.
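
    A hedged sketch of the two building blocks named here, the midpoint method and first-order error propagation through a numerical Jacobian; the five-parameter rotation reduction of the paper is not reproduced, and the ray parametrization and noise levels are assumptions.

      import numpy as np

      def midpoint_triangulate(c1, d1, c2, d2):
          """Midpoint method: closest point between two viewing rays
          x = c + t*d (d need not be normalized)."""
          w0 = c1 - c2
          a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
          d, e = d1 @ w0, d2 @ w0
          denom = a * c - b * b
          t1 = (b * e - c * d) / denom
          t2 = (a * e - b * d) / denom
          return 0.5 * ((c1 + t1 * d1) + (c2 + t2 * d2))

      def propagate_cov(f, params, cov_in, eps=1e-6):
          """First-order error propagation: J * cov_in * J^T with a
          numerical (forward-difference) Jacobian of f at params."""
          p0 = np.asarray(params, float)
          f0 = f(p0)
          J = np.empty((f0.size, p0.size))
          for i in range(p0.size):
              p = p0.copy(); p[i] += eps
              J[:, i] = (f(p) - f0) / eps
          return J @ cov_in @ J.T

      # Two cameras 200 mm apart; parameters = the two (unnormalized) ray
      # directions, perturbed by direction noise (assumed level).
      c1, c2 = np.zeros(3), np.array([200.0, 0.0, 0.0])
      def model(p):
          return midpoint_triangulate(c1, p[:3], c2, p[3:])
      p = np.array([0.1, 0.05, 1.0, -0.1, 0.05, 1.0])   # rays toward ~(100, 50, 1000)
      cov_rays = np.eye(6) * (5e-4) ** 2
      print(propagate_cov(model, p, cov_rays))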

  7. Changing Regulations of COD Pollution Load of Weihe River Watershed above TongGuan Section, China

    NASA Astrophysics Data System (ADS)

    Zhu, Lei; Liu, WanQing

    2018-02-01

    TongGuan Section of the Weihe River Watershed is a provincial boundary section between Shaanxi Province and Henan Province, China. The Weihe River Watershed above TongGuan Section is taken as the research object in this paper, and COD is chosen as the water quality parameter. According to the discharge characteristics of point source and non-point source pollution, the characteristic section load (CSLD) method is suggested, and the point and non-point source pollution loads of the Weihe River Watershed above TongGuan Section are calculated for the rainy, normal and dry seasons of 2013. The results show that the monthly point source pollution loads discharge stably, while the monthly non-point source pollution loads change greatly, and the non-point source proportion of the total COD pollution load decreases, in turn, over the rainy, wet and normal periods.

  8. Distributed Optimization System

    DOEpatents

    Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

    2004-11-30

    A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.

  9. Advanced Unstructured Grid Generation for Complex Aerodynamic Applications

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2008-01-01

    A new approach for distribution of grid points on the surface and in the volume has been developed and implemented in the NASA unstructured grid generation code VGRID. In addition to the point and line sources of prior work, the new approach utilizes surface and volume sources for automatic curvature-based grid sizing and convenient point distribution in the volume. A new exponential growth function produces smoother and more efficient grids and provides superior control over distribution of grid points in the field. All types of sources support anisotropic grid stretching which not only improves the grid economy but also provides more accurate solutions for certain aerodynamic applications. The new approach does not require a three-dimensional background grid as in the previous methods. Instead, it makes use of an efficient bounding-box auxiliary medium for storing grid parameters defined by surface sources. The new approach is less memory-intensive and more efficient computationally. The grids generated with the new method either eliminate the need for adaptive grid refinement for certain class of problems or provide high quality initial grids that would enhance the performance of many adaptation methods.

  10. Advanced Unstructured Grid Generation for Complex Aerodynamic Applications

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar

    2010-01-01

    A new approach for distribution of grid points on the surface and in the volume has been developed. In addition to the point and line sources of prior work, the new approach utilizes surface and volume sources for automatic curvature-based grid sizing and convenient point distribution in the volume. A new exponential growth function produces smoother and more efficient grids and provides superior control over distribution of grid points in the field. All types of sources support anisotropic grid stretching which not only improves the grid economy but also provides more accurate solutions for certain aerodynamic applications. The new approach does not require a three-dimensional background grid as in the previous methods. Instead, it makes use of an efficient bounding-box auxiliary medium for storing grid parameters defined by surface sources. The new approach is less memory-intensive and more efficient computationally. The grids generated with the new method either eliminate the need for adaptive grid refinement for certain class of problems or provide high quality initial grids that would enhance the performance of many adaptation methods.

  11. The effect of tandem-ovoid titanium applicator on points A, B, bladder, and rectum doses in gynecological brachytherapy using 192Ir.

    PubMed

    Sadeghi, Mohammad Hosein; Sina, Sedigheh; Mehdizadeh, Amir; Faghihi, Reza; Moharramzadeh, Vahed; Meigooni, Ali Soleimani

    2018-02-01

    The dosimetry procedure by simple superposition accounts only for the self-shielding of the source and does not take into account the attenuation of photons by the applicators. The purpose of this investigation is an estimation of the effects of the tandem and ovoid applicator on dose distribution inside the phantom by MCNP5 Monte Carlo simulations. In this study, the superposition method is used for obtaining the dose distribution in the phantom without using the applicator for a typical gynecological brachytherapy (superposition-1). Then, the sources are simulated inside the tandem and ovoid applicator to identify the effect of applicator attenuation (superposition-2), and the doses at points A, B, bladder, and rectum were compared with the results of superposition. The exact dwell positions and times of the source and the positions of the dosimetry points were determined from the images and treatment data of an adult woman patient from a cancer center. The MCNP5 Monte Carlo (MC) code was used for simulation of the phantoms, applicators, and sources. The results of this study showed no significant differences between the results of the superposition method and the MC simulations for different dosimetry points. The difference at all important dosimetry points was found to be less than 5%. According to the results, applicator attenuation has no significant effect on the calculated point doses; the superposition method, adding the dose of each source obtained by the MC simulation, can estimate the dose to points A, B, bladder, and rectum with good accuracy.

  12. Fast generation of complex modulation video holograms using temporal redundancy compression and hybrid point-source/wave-field approaches

    NASA Astrophysics Data System (ADS)

    Gilles, Antonin; Gioia, Patrick; Cozot, Rémi; Morin, Luce

    2015-09-01

    The hybrid point-source/wave-field method is a newly proposed approach for Computer-Generated Hologram (CGH) calculation, based on the slicing of the scene into several depth layers parallel to the hologram plane. The complex wave scattered by each depth layer is then computed using either a wave-field or a point-source approach according to a threshold criterion on the number of points within the layer. Finally, the complex waves scattered by all the depth layers are summed up in order to obtain the final CGH. Although outperforming both point-source and wave-field methods without producing any visible artifact, this approach has not yet been used for animated holograms, and the possible exploitation of temporal redundancies has not been studied. In this paper, we propose a fast computation of video holograms by taking into account those redundancies. Our algorithm consists of three steps. First, intensity and depth data of the current 3D video frame are extracted and compared with those of the previous frame in order to remove temporally redundant data. Then the CGH pattern for this compressed frame is generated using the hybrid point-source/wave-field approach. The resulting CGH pattern is finally transmitted to the video output and stored in the previous frame buffer. Experimental results reveal that our proposed method is able to produce video holograms at interactive rates without producing any visible artifact.

  13. 3D Seismic Imaging using Marchenko Methods

    NASA Astrophysics Data System (ADS)

    Lomas, A.; Curtis, A.

    2017-12-01

    Marchenko methods are novel, data driven techniques that allow seismic wavefields from sources and receivers on the Earth's surface to be redatumed to construct wavefields with sources in the subsurface - including complex multiply-reflected waves, and without the need for a complex reference model. In turn, this allows subsurface images to be constructed at any such subsurface redatuming points (image or virtual receiver points). Such images are then free of artefacts from multiply-scattered waves that usually contaminate migrated seismic images. Marchenko algorithms require as input the same information as standard migration methods: the full reflection response from sources and receivers at the Earth's surface, and an estimate of the first arriving wave between the chosen image point and the surface. The latter can be calculated using a smooth velocity model estimated using standard methods. The algorithm iteratively calculates a signal that focuses at the image point to create a virtual source at that point, and this can be used to retrieve the signal between the virtual source and the surface. A feature of these methods is that the retrieved signals are naturally decomposed into up- and down-going components. That is, we obtain both the signal that initially propagated upwards from the virtual source and arrived at the surface, separated from the signal that initially propagated downwards. Figure (a) shows a 3D subsurface model with a variable density but a constant velocity (3000m/s). Along the surface of this model (z=0) in both the x and y directions are co-located sources and receivers at 20-meter intervals. The redatumed signal in figure (b) has been calculated using Marchenko methods from a virtual source (1200m, 500m and 400m) to the surface. For comparison the true solution is given in figure (c), and shows a good match when compared to figure (b). While these 2D redatuming and imaging methods are still in their infancy having first been developed in 2012, we have extended them to 3D media and wavefields. We show that while the wavefield effects may be more complex in 3D, Marchenko methods are still valid, and 3D images that are free of multiple-related artefacts, are a realistic possibility.

  14. Convex Hull Aided Registration Method (CHARM).

    PubMed

    Fan, Jingfan; Yang, Jian; Zhao, Yitian; Ai, Danni; Liu, Yonghuai; Wang, Ge; Wang, Yongtian

    2017-09-01

    Non-rigid registration finds many applications such as photogrammetry, motion tracking, model retrieval, and object recognition. In this paper we propose a novel convex hull aided registration method (CHARM) to match two point sets subject to a non-rigid transformation. First, two convex hulls are extracted from the source and target respectively. Then, all points of the point sets are projected onto the reference plane through each triangular facet of the hulls. From these projections, invariant features are extracted and matched optimally. The matched feature point pairs are mapped back onto the triangular facets of the convex hulls to remove outliers that are outside any relevant triangular facet. The rigid transformation from the source to the target is robustly estimated by the random sample consensus (RANSAC) scheme through minimizing the distance between the matched feature point pairs. Finally, these feature points are utilized as the control points to achieve non-rigid deformation in the form of thin-plate spline of the entire source point set towards the target one. The experimental results based on both synthetic and real data show that the proposed algorithm outperforms several state-of-the-art ones with respect to sampling, rotational angle, and data noise. In addition, the proposed CHARM algorithm also shows higher computational efficiency compared to these methods.
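
    A small scipy sketch of only the first CHARM-like step described here: extracting a convex hull and projecting all points onto the plane of each triangular facet. The feature extraction, RANSAC matching, and thin-plate-spline deformation stages are not shown, and the data are random.

      import numpy as np
      from scipy.spatial import ConvexHull

      def project_onto_facets(points, hull_points):
          """Project every point onto the plane of each triangular hull facet."""
          hull = ConvexHull(hull_points)
          projections = []
          for simplex in hull.simplices:            # triangular 3D hull facets
              a, b, c = hull_points[simplex]
              n = np.cross(b - a, c - a)
              n = n / np.linalg.norm(n)
              # Orthogonal projection of all points onto the facet plane.
              projections.append(points - ((points - a) @ n)[:, None] * n)
          return projections

      rng = np.random.default_rng(4)
      source = rng.standard_normal((500, 3))
      proj = project_onto_facets(source, source)
      print(len(proj), proj[0].shape)               # one projection per facet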

  15. Distinguishing dark matter from unresolved point sources in the Inner Galaxy with photon statistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Samuel K.; Lisanti, Mariangela; Safdi, Benjamin R., E-mail: samuelkl@princeton.edu, E-mail: mlisanti@princeton.edu, E-mail: bsafdi@princeton.edu

    2015-05-01

    Data from the Fermi Large Area Telescope suggests that there is an extended excess of GeV gamma-ray photons in the Inner Galaxy. Identifying potential astrophysical sources that contribute to this excess is an important step in verifying whether the signal originates from annihilating dark matter. In this paper, we focus on the potential contribution of unresolved point sources, such as millisecond pulsars (MSPs). We propose that the statistics of the photons—in particular, the flux probability density function (PDF) of the photon counts below the point-source detection threshold—can potentially distinguish between the dark-matter and point-source interpretations. We calculate the flux PDF via the method of generating functions for these two models of the excess. Working in the framework of Bayesian model comparison, we then demonstrate that the flux PDF can potentially provide evidence for an unresolved MSP-like point-source population.
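
    A small Monte Carlo sketch of why the flux PDF separates the two hypotheses: smooth dark-matter-like emission is Poisson in every pixel, while a rare, bright, unresolved point-source population with the same mean produces a fat-tailed count distribution. The source density and power-law flux distribution are invented, and this is not the generating-function calculation used in the paper.

      import numpy as np

      rng = np.random.default_rng(3)
      n_pix, mean_counts = 50_000, 2.0

      # Smooth (dark-matter-like) emission: pure Poisson in every pixel.
      dm_counts = rng.poisson(mean_counts, n_pix)

      # Unresolved point sources: rare sources per pixel with power-law fluxes
      # (index and normalization assumed), then Poisson counts on top.
      n_src = rng.poisson(0.05, n_pix)                      # sources per pixel
      flux = lambda n: (rng.pareto(1.5, n) + 1.0) * (mean_counts / 0.05 / 3.0)
      psr_counts = np.array([rng.poisson(flux(n).sum()) if n else 0
                             for n in n_src])

      # Same mean, very different flux PDFs: point sources give a fat tail.
      for c, name in ((dm_counts, "smooth/DM"), (psr_counts, "point sources")):
          print(f"{name}: mean={c.mean():.2f}, P(k>=10)={np.mean(c >= 10):.4f}")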

  16. Monitor-based evaluation of pollutant load from urban stormwater runoff in Beijing.

    PubMed

    Liu, Y; Che, W; Li, J

    2005-01-01

    As a major pollutant source for urban receiving waters, non-point source pollution from urban runoff needs to be well studied and effectively controlled. Based on monitoring data for urban runoff pollutant sources, this article describes a systematic estimation of the total pollutant loads from the urban areas of Beijing. A numerical model was developed to quantify the main pollutant loads of urban runoff in Beijing, including a sub-procedure in which the flush process influences both the quantity and quality of stormwater runoff. A statistics-based method was applied to compute the annual pollutant load as an output of the runoff. The proportions of pollutants from point and non-point sources were compared. This provides a scientific basis for proper assessment of urban stormwater pollution inputs to receiving waters, improvement of infrastructure performance, implementation of urban stormwater management, and utilization of stormwater.
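
    A minimal sketch in the spirit of the statistics-based load computation described: annual non-point load as event mean concentration times event runoff volume, summed over rainfall events. Catchment area, runoff coefficient, and the EMC value are invented.

      import numpy as np

      def annual_runoff_load(rain_mm, area_km2, runoff_coeff, emc_mg_l):
          """Non-point load as event-mean-concentration x runoff volume,
          summed over rainfall events; returns tonnes/year."""
          volume_m3 = rain_mm / 1000.0 * area_km2 * 1e6 * runoff_coeff
          return np.sum(volume_m3 * emc_mg_l) / 1e6   # m3 * mg/L = g; /1e6 -> t

      events = np.array([12.0, 25.0, 8.0, 40.0, 18.0])   # rainfall depths, mm
      print(annual_runoff_load(events, area_km2=300.0, runoff_coeff=0.5,
                               emc_mg_l=80.0))          # COD EMC, invented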

  17. New method of a "point-like" neutron source creation based on sharp focusing of high-current deuteron beam onto deuterium-saturated target for neutron tomography

    NASA Astrophysics Data System (ADS)

    Golubev, S.; Skalyga, V.; Izotov, I.; Sidorov, A.

    2017-02-01

    The possibility of creating a compact, powerful, point-like neutron source is discussed. The neutron yield of a source based on the deuterium-deuterium (D-D) reaction is estimated at the level of 10^11 s^-1 (10^13 s^-1 for the deuterium-tritium reaction). The fusion takes place due to bombardment of a deuterium- (or tritium-) loaded target by a high-current focused deuterium ion beam with an energy of 100 keV. The ion beam is formed by means of a new-generation high-current quasi-gasdynamic ion source based on an electron cyclotron resonance (ECR) discharge in an open magnetic trap sustained by powerful microwave radiation. The prospects of the proposed generator for neutron tomography are discussed. The suggested method is compared to point-like neutron sources based on a spark produced by powerful femtosecond laser pulses.

  18. 3-D localization of virtual sound sources: effects of visual environment, pointing method, and training.

    PubMed

    Majdak, Piotr; Goupell, Matthew J; Laback, Bernhard

    2010-02-01

    The ability to localize sound sources in three-dimensional space was tested in humans. In Experiment 1, naive subjects listened to noises filtered with subject-specific head-related transfer functions. The tested conditions included the pointing method (head or manual pointing) and the visual environment (VE; darkness or virtual VE). The localization performance was not significantly different between the pointing methods. The virtual VE significantly improved the horizontal precision and reduced the number of front-back confusions. These results show the benefit of using a virtual VE in sound localization tasks. In Experiment 2, subjects were provided with sound localization training. Over the course of training, the performance improved for all subjects, with the largest improvements occurring during the first 400 trials. The improvements beyond the first 400 trials were smaller. After the training, there was still no significant effect of pointing method, showing that the choice of either head- or manual-pointing method plays a minor role in sound localization performance. The results of Experiment 2 reinforce the importance of perceptual training for at least 400 trials in sound localization studies.

  19. Modeling of Pixelated Detector in SPECT Pinhole Reconstruction.

    PubMed

    Feng, Bing; Zeng, Gengsheng L

    2014-04-10

    A challenge for the pixelated detector is that the detector response of a gamma-ray photon varies with the incident angle and the incident location within a crystal. The normalization map obtained by measuring the flood of a point-source at a large distance can lead to artifacts in reconstructed images. In this work, we investigated a method of generating normalization maps by ray-tracing through the pixelated detector based on the imaging geometry and the photo-peak energy for the specific isotope. The normalization is defined for each pinhole as the normalized detector response for a point-source placed at the focal point of the pinhole. Ray-tracing is used to generate the ideal flood image for a point-source. Each crystal pitch area on the back of the detector is divided into 60 × 60 sub-pixels. Lines are obtained by connecting between a point-source and the centers of sub-pixels inside each crystal pitch area. For each line ray-tracing starts from the entrance point at the detector face and ends at the center of a sub-pixel on the back of the detector. Only the attenuation by NaI(Tl) crystals along each ray is assumed to contribute directly to the flood image. The attenuation by the silica (SiO2) reflector is also included in the ray-tracing. To calculate the normalization for a pinhole, we need to calculate the ideal flood for a point-source at 360 mm distance (where the point-source was placed for the regular flood measurement) and the ideal flood image for the point-source at the pinhole focal point, together with the flood measurement at 360 mm distance. The normalizations are incorporated in the iterative OSEM reconstruction as a component of the projection matrix. Applications to single-pinhole and multi-pinhole imaging showed that this method greatly reduced the reconstruction artifacts.
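
    A simplified numpy sketch of the ray-tracing idea: for each crystal (sub)pixel, the oblique path through the NaI(Tl) is computed from the source geometry and the absorbed fraction accumulated, with an optional reflector attenuation term. The attenuation coefficients and geometry are invented, and the 60 x 60 sub-pixel sampling is reduced to one ray per pixel.

      import numpy as np

      def crystal_response(path_mm, mu_nai, mu_reflector=0.0, reflector_mm=0.0):
          """Fraction of photons absorbed along a ray: attenuation in any
          reflector crossed first, then absorption over the NaI(Tl) path."""
          return (np.exp(-mu_reflector * reflector_mm)
                  * (1.0 - np.exp(-mu_nai * path_mm)))

      def flood_image(src, pixel_centers, crystal_depth_mm, mu_nai):
          """Ideal flood for a point source: ray-trace to each pixel and
          accumulate absorption along the oblique crystal path."""
          d = pixel_centers - src
          cos_theta = np.abs(d[:, 2]) / np.linalg.norm(d, axis=1)
          path = crystal_depth_mm / cos_theta       # oblique path through crystal
          return crystal_response(path, mu_nai)

      # 1D row of crystal pixels 10 mm deep, source 360 mm away on-axis.
      x = np.arange(-50.0, 50.0, 2.0)
      centers = np.stack([x, np.zeros_like(x), np.full_like(x, 10.0)], axis=1)
      img = flood_image(np.array([0.0, 0.0, -360.0]), centers, 10.0, mu_nai=0.03)
      print(img.round(3))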

  20. Adaptive CT scanning system

    DOEpatents

    Sampayan, Stephen E.

    2016-11-22

    Apparatus, systems, and methods that provide an X-ray interrogation system having a plurality of stationary X-ray point sources arranged to substantially encircle an area or space to be interrogated. A plurality of stationary detectors are arranged to substantially encircle the area or space to be interrogated. A controller is adapted to control the stationary X-ray point sources to emit X-rays one at a time, and to control the stationary detectors to detect the X-rays emitted by the stationary X-ray point sources.

  1. On the assessment of spatial resolution of PET systems with iterative image reconstruction

    NASA Astrophysics Data System (ADS)

    Gong, Kuang; Cherry, Simon R.; Qi, Jinyi

    2016-03-01

    Spatial resolution is an important metric for performance characterization in PET systems. Measuring spatial resolution is straightforward with a linear reconstruction algorithm, such as filtered backprojection, and can be performed by reconstructing a point source scan and calculating the full-width-at-half-maximum (FWHM) along the principal directions. With the widespread adoption of iterative reconstruction methods, it is desirable to quantify the spatial resolution using an iterative reconstruction algorithm. However, the task can be difficult because the reconstruction algorithms are nonlinear and the non-negativity constraint can artificially enhance the apparent spatial resolution if a point source image is reconstructed without any background. Thus, it has been recommended that a background be added to the point source data before reconstruction for resolution measurement. However, there has been no detailed study on the effect of the point source contrast on the measured spatial resolution. Here we use point source scans from a preclinical PET scanner to investigate the relationship between the measured spatial resolution and the point source contrast. We also evaluate whether the reconstruction of an isolated point source is predictive of the ability of the system to resolve two adjacent point sources. Our results indicate that when the point source contrast is below a certain threshold, the measured FWHM remains stable. Once the contrast is above the threshold, the measured FWHM monotonically decreases with increasing point source contrast. In addition, the measured FWHM also monotonically decreases with iteration number for the maximum likelihood estimate. Therefore, when measuring system resolution with an iterative reconstruction algorithm, we recommend using a low-contrast point source and a fixed number of iterations.
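
    For reference, the FWHM measurement itself is simple; a minimal Python sketch (assuming a background-subtracted, unimodal 1-D profile taken through the point-source peak, with the peak away from the array edges):

        import numpy as np

        def fwhm_1d(profile, pixel_mm):
            """FWHM via linear interpolation at half maximum."""
            y = np.asarray(profile, float)
            half = y.max() / 2.0
            above = np.where(y >= half)[0]
            lo, hi = above[0], above[-1]
            # interpolate the half-max crossing on each flank
            left = lo - (y[lo] - half) / (y[lo] - y[lo - 1])
            right = hi + (y[hi] - half) / (y[hi] - y[hi + 1])
            return (right - left) * pixel_mm

        # e.g. profile = image row through the point-source maximum
        print(fwhm_1d([0, 1, 4, 9, 4, 1, 0], pixel_mm=0.5))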

  2. CREPT-MCNP code for efficiency calibration of HPGe detectors with the representative point method.

    PubMed

    Saegusa, Jun

    2008-01-01

    The representative point method for the efficiency calibration of volume samples has been previously proposed. To smoothly implement the method, a calculation code named CREPT-MCNP has been developed. The code estimates the position of the representative point, which is intrinsic to each shape of volume sample. Self-absorption correction factors are also given to correct the efficiencies measured at the representative point with a standard point source. Features of the CREPT-MCNP code are presented.

  3. HerMES: point source catalogues from Herschel-SPIRE observations II

    NASA Astrophysics Data System (ADS)

    Wang, L.; Viero, M.; Clarke, C.; Bock, J.; Buat, V.; Conley, A.; Farrah, D.; Guo, K.; Heinis, S.; Magdis, G.; Marchetti, L.; Marsden, G.; Norberg, P.; Oliver, S. J.; Page, M. J.; Roehlly, Y.; Roseboom, I. G.; Schulz, B.; Smith, A. J.; Vaccari, M.; Zemcov, M.

    2014-11-01

    The Herschel Multi-tiered Extragalactic Survey (HerMES) is the largest Guaranteed Time Key Programme on the Herschel Space Observatory. With a wedding-cake survey strategy, it consists of nested fields of varying depth and area totalling ˜380 deg2. In this paper, we present deep point source catalogues extracted from Herschel-Spectral and Photometric Imaging Receiver (SPIRE) observations of all HerMES fields, except for the later addition of the 270 deg2 HerMES Large-Mode Survey (HeLMS) field. These catalogues constitute the second Data Release (DR2), made in 2013 October. A sub-set of these catalogues, consisting of bright sources extracted from Herschel-SPIRE observations completed by 2010 May 1 (covering ˜74 deg2), was released earlier in the first extensive data release in 2012 March. Two different methods are used to generate the point source catalogues: the SUSSEXTRACTOR point source extractor used in two earlier data releases (EDR and EDR2), and a new source detection and photometry method. The latter combines an iterative source detection algorithm, STARFINDER, and a De-blended SPIRE Photometry algorithm. We use end-to-end Herschel-SPIRE simulations with realistic number counts and clustering properties to characterize basic properties of the point source catalogues, such as completeness, reliability, and photometric and positional accuracy. Over 500 000 catalogue entries in HerMES fields (except HeLMS) are released to the public through the HeDAM (Herschel Database in Marseille) website (http://hedam.lam.fr/HerMES).

  4. Multiband super-resolution imaging of graded-index photonic crystal flat lens

    NASA Astrophysics Data System (ADS)

    Xie, Jianlan; Wang, Junzhong; Ge, Rui; Yan, Bei; Liu, Exian; Tan, Wei; Liu, Jianjun

    2018-05-01

    Multiband super-resolution imaging of a point source is achieved by a graded-index photonic crystal flat lens. Calculations of six bands in a common photonic crystal (CPC) constructed with scatterers of different refractive indices show that super-resolution imaging of a point source can be realized by different physical mechanisms in three different bands. In the first band, the imaging of the point source is based on the far-field condition of the spherical wave, while in the second band it is based on a negative effective refractive index and exhibits higher imaging quality than that of the CPC. In the fifth band, the imaging of the point source is mainly based on negative refraction of anisotropic equi-frequency surfaces. This method of employing different physical mechanisms to achieve multiband super-resolution imaging of a point source is highly meaningful for the field of imaging.

  5. Waveform inversion of volcano-seismic signals for an extended source

    USGS Publications Warehouse

    Nakano, M.; Kumagai, H.; Chouet, B.; Dawson, P.

    2007-01-01

    We propose a method to investigate the dimensions and oscillation characteristics of the source of volcano-seismic signals based on waveform inversion for an extended source. An extended source is realized by a set of point sources distributed on a grid surrounding the centroid of the source in accordance with the source geometry and orientation. The source-time functions for all point sources are estimated simultaneously by waveform inversion carried out in the frequency domain. We apply a smoothing constraint to suppress short-scale noisy fluctuations of source-time functions between adjacent sources. The strength of the smoothing constraint we select is that which minimizes the Akaike Bayesian Information Criterion (ABIC). We perform a series of numerical tests to investigate the capability of our method to recover the dimensions of the source and reconstruct its oscillation characteristics. First, we use synthesized waveforms radiated by a kinematic source model that mimics the radiation from an oscillating crack. Our results demonstrate almost complete recovery of the input source dimensions and the source-time function of each point source, but also point to a weaker resolution of the higher modes of crack oscillation. Second, we use synthetic waveforms generated by the acoustic resonance of a fluid-filled crack, and consider two sets of waveforms dominated by the modes with wavelengths 2L/3 and 2W/3, or L and 2L/5, where W and L are the crack width and length, respectively. Results from these tests indicate that the oscillating signatures of the 2L/3 and 2W/3 modes are successfully reconstructed. The oscillating signature of the L mode is also well recovered, in contrast to results obtained for a point source for which the moment tensor description is inadequate. However, the oscillating signature of the 2L/5 mode is poorly recovered owing to weaker resolution of short-scale crack wall motions. The triggering excitations of the oscillating cracks are successfully reconstructed. Copyright 2007 by the American Geophysical Union.
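
    The per-frequency inversion with a smoothing constraint amounts to damped least squares. A minimal sketch (hypothetical names: G holds the Green's function spectra from each grid point source to the stations, d the observed spectra at one frequency, and lam the smoothing weight that would be scanned to minimize ABIC):

        import numpy as np

        def invert_extended_source(G, d, lam):
            """Minimize |G m - d|^2 + lam^2 |D m|^2, where D takes first
            differences between adjacent grid sources (the smoothing)."""
            n = G.shape[1]
            D = (np.eye(n) - np.eye(n, k=1))[:-1]   # first-difference operator
            A = G.conj().T @ G + lam**2 * (D.T @ D)
            b = G.conj().T @ d
            return np.linalg.solve(A, b)            # source spectra m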

  6. A comparison of skyshine computational methods.

    PubMed

    Hertel, Nolan E; Sweezy, Jeremy E; Shultis, J Kenneth; Warkentin, J Karl; Rose, Zachary J

    2005-01-01

    A variety of methods employing radiation transport and point-kernel codes have been used to model two skyshine problems. The first problem is a 1 MeV point source of photons on the surface of the earth inside a 2 m tall and 1 m radius silo having black walls. The skyshine radiation downfield from the point source was estimated with and without a 30-cm-thick concrete lid on the silo. The second benchmark problem is to estimate the skyshine radiation downfield from 12 cylindrical canisters emplaced in a low-level radioactive waste trench. The canisters are filled with ion-exchange resin with a representative radionuclide loading, largely 60Co, 134Cs and 137Cs. The solution methods include use of the MCNP code to solve the problem by directly employing variance reduction techniques, the single-scatter point kernel code GGG-GP, the QADMOD-GP point kernel code, the COHORT Monte Carlo code, the NAC International version of the SKYSHINE-III code, the KSU hybrid method and the associated KSU skyshine codes.

  7. Test method for telescopes using a point source at a finite distance

    NASA Technical Reports Server (NTRS)

    Griner, D. B.; Zissa, D. E.; Korsch, D.

    1985-01-01

    A test method for telescopes that makes use of a focused ring formed by an annular aperture when using a point source at a finite distance is evaluated theoretically and experimentally. The results show that the concept can be applied to near-normal, as well as grazing incidence. It is particularly suited for X-ray telescopes because of their intrinsically narrow annular apertures, and because of the largely reduced diffraction effects.

  8. Point spread functions for earthquake source imaging: An interpretation based on seismic interferometry

    USGS Publications Warehouse

    Nakahara, Hisashi; Haney, Matt

    2015-01-01

    Recently, various methods have been proposed and applied for earthquake source imaging, and theoretical relationships among the methods have been studied. In this study, we make a follow-up theoretical study to better understand the meaning of earthquake source imaging. For imaging problems, the point spread function (PSF) is used to describe the degree of blurring and degradation in an obtained image of a target object as the response of an imaging system. In this study, we formulate PSFs for earthquake source imaging. By calculating the PSFs, we find that waveform source inversion methods remove the effect of the PSF and are free from artifacts. However, the other source imaging methods are affected by the PSF and suffer from blurring and degradation due to the restricted distribution of receivers. Consequently, careful treatment of this effect is necessary when using source imaging methods other than waveform inversions. Moreover, the PSF for source imaging is found to have a link with seismic interferometry with the help of the source-receiver reciprocity of Green's functions. In particular, the PSF can be related to the Green's function for cases in which receivers are distributed so as to completely surround the sources. Furthermore, the PSF acts as a low-pass filter. Given these considerations, the PSF is quite useful for understanding the physical meaning of earthquake source imaging.

  9. Irrigation scheduling as affected by field capacity and wilting point water content from different data sources

    USDA-ARS?s Scientific Manuscript database

    Soil water content at field capacity and wilting point water content is critical information for irrigation scheduling, regardless of soil water sensor-based method (SM) or evapotranspiration (ET)-based method. Both methods require knowledge on site-specific and soil-specific Management Allowable De...

  10. A double-correlation tremor-location method

    NASA Astrophysics Data System (ADS)

    Li, Ka Lok; Sgattoni, Giulia; Sadeghisorkhani, Hamzeh; Roberts, Roland; Gudmundsson, Olafur

    2017-02-01

    A double-correlation method is introduced to locate tremor sources based on stacks of complex, doubly-correlated tremor records of multiple triplets of seismographs back projected to hypothetical source locations in a geographic grid. Peaks in the resulting stack of moduli are inferred source locations. The stack of the moduli is a robust measure of energy radiated from a point source or point sources even when the velocity information is imprecise. Application to real data shows how double correlation focuses the source mapping compared to the common single-correlation approach. Synthetic tests demonstrate the robustness of the method and its resolution limitations, which are controlled by the station geometry, the finite frequency of the signal, the quality of the velocity information used, and the noise level. Both random noise and signal or noise correlated at time shifts that are inconsistent with the assumed velocity structure can be effectively suppressed. Assuming a surface wave velocity, we can constrain the source location even if the surface wave component does not dominate. The method can also in principle be used with body waves in 3-D, although this requires more data and seismographs placed near the source for depth resolution.

  11. Statistical measurement of the gamma-ray source-count distribution as a function of energy

    NASA Astrophysics Data System (ADS)

    Zechlin, H.-S.; Cuoco, A.; Donato, F.; Fornengo, N.; Regis, M.

    2017-01-01

    Photon-count statistics have recently been proven to provide a sensitive observable for characterizing gamma-ray source populations and for measuring the composition of the gamma-ray sky. In this work, we generalize the use of the standard 1-point probability distribution function (1pPDF) to decompose the high-latitude gamma-ray emission observed with Fermi-LAT into: (i) point-source contributions, (ii) the Galactic foreground contribution, and (iii) a diffuse isotropic background contribution. We analyze gamma-ray data in five adjacent energy bands between 1 and 171 GeV. We measure the source-count distribution dN/dS as a function of energy, and demonstrate that our results extend current measurements from source catalogs to the regime of so-far undetected sources. Our method improves the sensitivity for resolving point-source populations by about one order of magnitude in flux. The dN/dS distribution as a function of flux is found to be compatible with a broken power law. We derive upper limits on further possible breaks as well as the angular power of unresolved sources. We discuss the composition of the gamma-ray sky and the capabilities of the 1pPDF method.
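
    To make the broken power law concrete, a small sketch that evaluates an illustrative dN/dS and integrates it for the number of sources above a flux threshold (all parameter values are placeholders, not the paper's fitted values):

        import numpy as np

        def dnds_broken(S, A, Sb, g1, g2):
            """Broken power-law differential counts: slope -g1 above the
            break Sb, -g2 below, continuous at the break."""
            S = np.asarray(S, float)
            return np.where(S >= Sb, A * (S / Sb) ** -g1, A * (S / Sb) ** -g2)

        S = np.logspace(-12, -6, 2000)                 # flux grid (arbitrary units)
        counts = dnds_broken(S, A=1e10, Sb=1e-9, g1=2.6, g2=1.6)
        mask = S >= 1e-10                              # sources brighter than S0
        N_above = np.trapz(counts[mask], S[mask])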

  12. Using Model Point Spread Functions to Identify Binary Brown Dwarf Systems

    NASA Astrophysics Data System (ADS)

    Matt, Kyle; Stephens, Denise C.; Lunsford, Leanne T.

    2017-01-01

    A Brown Dwarf (BD) is a celestial object that is not massive enough to undergo hydrogen fusion in its core. BDs can form in pairs called binaries. Due to the great distances between Earth and these BDs, they act as point sources of light, and the angular separation between binary BDs can be small enough that they appear as a single, unresolved object in images, according to the Rayleigh criterion. It is not currently possible to resolve some of these objects into separate light sources. Stephens and Noll (2006) developed a method that used model point spread functions (PSFs) to identify binary Trans-Neptunian Objects; we use this method to identify binary BD systems in the Hubble Space Telescope archive. The method works by comparing model PSFs of single and binary sources to the observed PSFs. We also use a method to compare model spectral data for single and binary fits to determine the best parameter values for each component of the system. We describe these methods, their challenges, and other possible uses in this poster.
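
    A minimal sketch of the PSF-comparison idea, using a circular Gaussian as a stand-in for the model PSF (hypothetical names; SciPy for the fitting): fit the observed image with a single-source and a two-source model and compare the chi-square values.

        import numpy as np
        from scipy.optimize import least_squares

        def gauss_psf(xy, x0, y0, flux, sigma):
            x, y = xy
            return flux * np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * sigma**2))

        def binary_model(xy, x1, y1, f1, x2, y2, f2, sigma):
            return gauss_psf(xy, x1, y1, f1, sigma) + gauss_psf(xy, x2, y2, f2, sigma)

        def fit_chi2(image, model, p0, sigma_pix=1.0):
            """Least-squares fit; returns (chi^2, best-fit parameters)."""
            ny, nx = image.shape
            xy = np.meshgrid(np.arange(nx), np.arange(ny))
            res = least_squares(lambda p: (model(xy, *p) - image).ravel() / sigma_pix, p0)
            return np.sum(res.fun**2), res.x

        # a markedly lower binary chi^2 flags a candidate binary:
        # chi2_s, _ = fit_chi2(img, gauss_psf,    [15, 15, 1e3, 1.5])
        # chi2_b, _ = fit_chi2(img, binary_model, [14, 15, 6e2, 16, 15, 4e2, 1.5])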

  13. Design of TIR collimating lens for ordinary differential equation of extended light source

    NASA Astrophysics Data System (ADS)

    Zhan, Qianjing; Liu, Xiaoqin; Hou, Zaihong; Wu, Yi

    2017-10-01

    LED sources are widely used in daily life. The angular intensity distribution of a single LED is Lambertian, which does not satisfy most application requirements; the light must therefore be redistributed to change the LED's angular intensity distribution. The most common way to do so is with a freeform surface. Generally, using ordinary differential equations to calculate a freeform surface applies only to a point source and leads to large errors for an extended source. This paper proposes an LED collimating lens based on an ordinary differential equation, combined with the LED's light distribution curve, and adopts the method of calculating the center of gravity of the extended light to obtain the normal vector. According to Snell's law, the ordinary differential equations are constructed. Using the Runge-Kutta method to solve the ordinary differential equations, the coordinates of the curve points are obtained. The edge point data of the lens are then imported into the optical simulation software TracePro. For a 1 mm × 1 mm single Lambertian source, the collimation angle can be close to +/-3 degrees, and the energy utilization rate is higher than 85%. A lens designed with the point-source differential equation method is also simulated for comparison, and the proposed design improves collimation by about 1 degree.

  14. Assessment of the point-source method for estimating dose rates to members of the public from exposure to patients with 131I thyroid treatment

    DOE PAGES

    Dewji, Shaheen Azim; Bellamy, Michael B.; Hertel, Nolan E.; ...

    2015-09-01

    The U.S. Nuclear Regulatory Commission (USNRC) initiated a contract with Oak Ridge National Laboratory (ORNL) to calculate radiation dose rates to members of the public that may result from exposure to patients recently administered iodine-131 (131I) as part of medical therapy. The main purpose was to compare dose rate estimates based on a point source and target with values derived from more realistic simulations that considered the time-dependent distribution of 131I in the patient and attenuation of emitted photons by the patient's tissues. The external dose rate estimates were derived using Monte Carlo methods and two representations of the Phantom with Movable Arms and Legs, previously developed by ORNL and the USNRC, to model the patient and a nearby member of the public. Dose rates to tissues and effective dose rates were calculated for distances ranging from 10 to 300 cm between the phantoms and compared to estimates based on the point-source method, as well as to results of previous studies that estimated exposure from 131I patients. The point-source method overestimates dose rates to members of the public in very close proximity to an 131I patient but is a broadly accurate method of dose rate estimation at separation distances of 300 cm or more at times closer to administration.

  15. Prediction of sound fields in acoustical cavities using the boundary element method. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Kipp, C. R.; Bernhard, R. J.

    1985-01-01

    A method was developed to predict sound fields in acoustical cavities. The method is based on the indirect boundary element method. An isoparametric quadratic boundary element is incorporated. Pressure, velocity and/or impedance boundary conditions may be applied to a cavity by using this method. The capability to include acoustic point sources within the cavity is implemented. The method is applied to the prediction of sound fields in spherical and rectangular cavities. All three boundary condition types are verified. Cases with a point source within the cavity domain are also studied. Numerically determined cavity pressure distributions and responses are presented. The numerical results correlate well with available analytical results.

  16. A method to analyze "source-sink" structure of non-point source pollution based on remote sensing technology.

    PubMed

    Jiang, Mengzhen; Chen, Haiying; Chen, Qinghui

    2013-11-01

    With the purpose of providing a scientific basis for environmental planning on non-point source pollution prevention and control, and improving pollution regulation efficiency, this paper established a Grid Landscape Contrast Index based on the Location-weighted Landscape Contrast Index, according to the "source-sink" theory. The spatial distribution of non-point source pollution in the Jiulongjiang Estuary was determined using high-resolution remote sensing images. The results showed that the "source" area of nitrogen and phosphorus in the Jiulongjiang Estuary was 534.42 km2 in 2008, and the "sink" area was 172.06 km2. The "source" of non-point source pollution was distributed mainly over Xiamen Island, most of Haicang, the east of Jiaomei, and the river banks of Gangwei and Shima; the "sink" was distributed over the southwest of Xiamen Island and the west of Shima. Generally speaking, the intensity of the "source" weakens as the distance from the sea boundary increases, while the "sink" strengthens. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. Multi-angle Indicators System of Non-point Pollution Source Assessment in Rural Areas: A Case Study Near Taihu Lake

    NASA Astrophysics Data System (ADS)

    Huang, Lei; Ban, Jie; Han, Yu Ting; Yang, Jie; Bi, Jun

    2013-04-01

    This study aims to identify key environmental risk sources contributing to water eutrophication and to suggest risk management strategies for rural areas. The multi-angle indicators included in the risk source assessment system were non-point source pollution, deficient waste treatment, and public awareness of environmental risk; the assessment combined psychometric paradigm methods, the contingent valuation method, and personal interviews to describe the environmental sensitivity of local residents. Total risk values of different villages near Taihu Lake were calculated in the case study, resulting in a geographic risk map showing which village was the critical risk source of Taihu eutrophication. The increased application of phosphorus (P) and nitrogen (N), the loss vulnerability of pollutants, and a lack of environmental risk awareness led to more serious non-point pollution, especially in rural China. Results revealed by the quotient between the scores of objective and subjective risk sources showed what should be improved in each study village. More environmental investment, control of agricultural activities, and promotion of environmental education are critical considerations for rural environmental management. These findings are helpful for developing targeted and effective risk management strategies in rural areas.

  18. Theoretical evaluation of accuracy in position and size of brain activity obtained by near-infrared topography

    NASA Astrophysics Data System (ADS)

    Kawaguchi, Hiroshi; Hayashi, Toshiyuki; Kato, Toshinori; Okada, Eiji

    2004-06-01

    Near-infrared (NIR) topography can obtain a topographical distribution of the activated region in the brain cortex. Near-infrared light is strongly scattered in the head, and the volume of tissue sampled by a source-detector pair on the head surface is broadly distributed in the brain. This scattering effect results in poor resolution and contrast in the topographic image of brain activity. In this study, a one-dimensional distribution of absorption change in a head model is calculated by mapping and reconstruction methods to evaluate the effect of the image reconstruction algorithm and the interval of measurement points on the accuracy of the topographic image. The light propagation in the head model is predicted by Monte Carlo simulation to obtain the spatial sensitivity profile for a source-detector pair. The measurement points are one-dimensionally arranged on the surface of the model, and the distance between adjacent measurement points is varied from 4 mm to 28 mm. Small intervals between the measurement points improve the topographic image calculated by both the mapping and reconstruction methods. In the conventional mapping method, the limit of the spatial resolution depends upon the interval of the measurement points and the spatial sensitivity profile for source-detector pairs. The reconstruction method has advantages over the mapping method, improving the results of the one-dimensional analysis when the interval of measurement points is less than 12 mm. The effect of overlapping spatial sensitivity profiles indicates that the reconstruction method may be effective in improving the spatial resolution of a two-dimensional reconstruction of the topographic image obtained with a larger interval of measurement points. Near-infrared topography with the reconstruction method can potentially obtain an accurate distribution of absorption change in the brain even if the size of the absorption change is less than 10 mm.

  19. Searches for point sources in the Galactic Center region

    NASA Astrophysics Data System (ADS)

    di Mauro, Mattia; Fermi-LAT Collaboration

    2017-01-01

    Several groups have demonstrated the existence of an excess in the gamma-ray emission around the Galactic Center (GC) with respect to the predictions from a variety of Galactic Interstellar Emission Models (GIEMs) and point source catalogs. The origin of this excess, peaked at a few GeV, is still under debate. A possible interpretation is that it comes from a population of unresolved Millisecond Pulsars (MSPs) in the Galactic bulge. We investigate the detection of point sources in the GC region using new tools which the Fermi-LAT Collaboration is developing in the context of searches for Dark Matter (DM) signals. These new tools perform very fast scans, iteratively testing for additional point sources at each of the pixels of the region of interest. We also show how to discriminate between point sources and structural residuals from the GIEM. We apply these methods to the GC region considering different GIEMs and testing the DM and MSP interpretations for the GC excess. Additionally, we create a list of promising MSP candidates that could represent the brightest sources of a bulge MSP population.

  20. Improving the local wavenumber method by automatic DEXP transformation

    NASA Astrophysics Data System (ADS)

    Abbas, Mahmoud Ahmed; Fedi, Maurizio; Florio, Giovanni

    2014-12-01

    In this paper we present a new method for source parameter estimation, based on the local wavenumber function. We make use of the stable properties of the Depth from EXtreme Points (DEXP) method, in which the depth to the source is determined at the extreme points of the field scaled with a power law of the altitude. The method is thus particularly suited to high-order local wavenumber functions, as it is able to overcome their known instability caused by the use of high-order derivatives. The DEXP transformation enjoys a relevant feature when applied to the local wavenumber function: the scaling law is independent of the structural index. So, differently from the DEXP transformation applied directly to potential fields, the local wavenumber DEXP transformation is fully automatic and may be implemented as a very fast imaging method, mapping every kind of source at the correct depth. The simultaneous presence of sources with different degrees of homogeneity can also be easily and correctly treated. The method was applied to synthetic and real examples from Bulgaria and Italy, and the results agree well with known information about the causative sources.

  1. Adjoint Sensitivity Method to Determine Optimal Set of Stations for Tsunami Source Inversion

    NASA Astrophysics Data System (ADS)

    Gusman, A. R.; Hossen, M. J.; Cummins, P. R.; Satake, K.

    2017-12-01

    We applied the adjoint sensitivity technique in tsunami science for the first time to determine an optimal set of stations for a tsunami source inversion. The adjoint sensitivity (AS) method has been used in numerical weather prediction to find optimal locations for adaptive observations. We implemented this technique in Green's Function based Time Reverse Imaging (GFTRI), which has recently been used in tsunami source inversion to reconstruct the initial sea surface displacement, known as the tsunami source model. This method has the same source representation as the traditional least squares (LSQ) source inversion method, where a tsunami source is represented by dividing the source region into a regular grid of "point" sources. For each of these, a Green's function (GF) is computed using a basis function for initial sea surface displacement whose amplitude is concentrated near the grid point. We applied the AS method to the 2009 Samoa earthquake tsunami that occurred on 29 September 2009 in the southwest Pacific, near the Tonga trench. Many studies show that this earthquake was a doublet associated with both normal faulting in the outer-rise region and thrust faulting on the subduction interface. To estimate the tsunami source model for this complex event, we initially considered 11 observations consisting of 5 tide gauges and 6 DART buoys. After implementing the AS method, we found an optimal set of 8 stations. Inversion with this optimal set provides better results in terms of waveform fitting and a source model that shows both sub-events associated with normal and thrust faulting.

  2. THE IMPACT OF POINT-SOURCE SUBTRACTION RESIDUALS ON 21 cm EPOCH OF REIONIZATION ESTIMATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trott, Cathryn M.; Wayth, Randall B.; Tingay, Steven J., E-mail: cathryn.trott@curtin.edu.au

    Precise subtraction of foreground sources is crucial for detecting and estimating 21 cm H I signals from the Epoch of Reionization (EoR). We quantify how imperfect point-source subtraction due to limitations of the measurement data set yields structured residual signal in the data set. We use the Cramer-Rao lower bound, as a metric for quantifying the precision with which a parameter may be measured, to estimate the residual signal in a visibility data set due to imperfect point-source subtraction. We then propagate these residuals into two metrics of interest for 21 cm EoR experiments, the angular power spectrum and the two-dimensional power spectrum, using a combination of full analytic covariant derivation, analytic variant derivation, and covariant Monte Carlo simulations. This methodology differs from previous work in two ways: (1) it uses information theory to set the point-source position error, rather than assuming a global rms error, and (2) it describes a method for propagating the errors analytically, thereby obtaining the full correlation structure of the power spectra. The methods are applied to two upcoming low-frequency instruments that are proposing to perform statistical EoR experiments: the Murchison Widefield Array and the Precision Array for Probing the Epoch of Reionization. In addition to the actual antenna configurations, we apply the methods to minimally redundant and maximally redundant configurations. We find that for peeling sources above 1 Jy, the amplitude of the residual signal, and its variance, will be smaller than the contribution from thermal noise for the observing parameters proposed for upcoming EoR experiments, and that optimal subtraction of bright point sources will not be a limiting factor for EoR parameter estimation. We then use the formalism to provide an ab initio analytic derivation motivating the 'wedge' feature in the two-dimensional power spectrum, complementing previous discussion in the literature.

  3. Identifying and characterizing major emission point sources as a basis for geospatial distribution of mercury emissions inventories

    NASA Astrophysics Data System (ADS)

    Steenhuisen, Frits; Wilson, Simon J.

    2015-07-01

    Mercury is a global pollutant that poses threats to ecosystem and human health. Due to its global transport, mercury contamination is found in regions of the Earth that are remote from major emission areas, including the Polar regions. Global anthropogenic emission inventories identify important sectors and industries responsible for emissions at a national level; however, to be useful for air transport modelling, more precise information on the locations of emissions is required. This paper describes the methodology applied, and the results of work that was conducted, to assign anthropogenic mercury emissions to point sources as part of geospatial mapping of the 2010 global anthropogenic mercury emissions inventory prepared by AMAP/UNEP. Major point-source emission sectors addressed in this work account for about 850 tonnes of the emissions included in the 2010 inventory. This work allocated more than 90% of these emissions to some 4600 identified point source locations, including significantly more point source locations in Africa, Asia, Australia and South America than had been identified during previous work to geospatially distribute the 2005 global inventory. The results demonstrate the utility and the limitations of using existing, mainly public domain, resources to accomplish this work. Assumptions necessary to make use of selected online resources are discussed, as are artefacts that can arise when these assumptions are applied to assign (national-sector) emissions estimates to point sources in various countries and regions. Notwithstanding the limitations of the available information, the value of this procedure over alternative methods commonly used to geospatially distribute emissions, such as the use of 'proxy' datasets to represent emissions patterns, is illustrated. Improvements in information that would facilitate greater use of these methods in future work to assign emissions to point sources are discussed. These include improvements to both national (geo-referenced) emission inventories and to other resources that can be employed when such national inventories are lacking.

  4. Analyzing γ rays of the Galactic Center with deep learning

    NASA Astrophysics Data System (ADS)

    Caron, Sascha; Gómez-Vargas, Germán A.; Hendriks, Luc; Ruiz de Austri, Roberto

    2018-05-01

    We present the application of convolutional neural networks to a particular problem in gamma ray astronomy. Explicitly, we use this method to investigate the origin of an excess emission of GeV γ rays in the direction of the Galactic Center, reported by several groups by analyzing Fermi-LAT data. Interpretations of this excess include γ rays created by the annihilation of dark matter particles and γ rays originating from a collection of unresolved point sources, such as millisecond pulsars. We train and test convolutional neural networks with simulated Fermi-LAT images based on point and diffuse emission models of the Galactic Center tuned to measured γ ray data. Our new method allows precise measurements of the contribution and properties of an unresolved population of γ ray point sources in the interstellar diffuse emission model. The current model predicts the fraction of unresolved point sources with an error of up to 10% and this is expected to decrease with future work.
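
    A minimal sketch of such a network in PyTorch (illustrative architecture and sizes, not the authors' model), mapping a simulated counts map to the fraction of emission attributable to unresolved point sources:

        import torch
        import torch.nn as nn

        class SkyMapCNN(nn.Module):
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                )
                self.head = nn.Sequential(
                    nn.Flatten(), nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
                    nn.Linear(64, 1), nn.Sigmoid(),   # a fraction in [0, 1]
                )

            def forward(self, x):          # x: (batch, 1, 64, 64) counts maps
                return self.head(self.features(x))

        model = SkyMapCNN()
        pred = model(torch.randn(8, 1, 64, 64))   # dummy batch of simulated maps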

  5. Sound source localization on an axial fan at different operating points

    NASA Astrophysics Data System (ADS)

    Zenger, Florian J.; Herold, Gert; Becker, Stefan; Sarradj, Ennes

    2016-08-01

    A generic fan with unskewed fan blades is investigated using a microphone array method. The relative motion of the fan with respect to the stationary microphone array is compensated by interpolating the microphone data to a virtual rotating array with the same rotational speed as the fan. Hence, beamforming algorithms with deconvolution, in this case CLEAN-SC, could be applied. Sound maps and integrated spectra of sub-components are evaluated for five operating points. At selected frequency bands, the presented method yields sound maps featuring a clear circular source pattern corresponding to the nine fan blades. Depending on the adjusted operating point, sound sources are located on the leading or trailing edges of the fan blades. Integrated spectra show that in most cases leading edge noise is dominant for the low-frequency part and trailing edge noise for the high-frequency part. The shift from leading to trailing edge noise is strongly dependent on the operating point and frequency range considered.

  6. Time-frequency approach to underdetermined blind source separation.

    PubMed

    Xie, Shengli; Yang, Liu; Yang, Jun-Mei; Zhou, Guoxu; Xiang, Yong

    2012-02-01

    This paper presents a new time-frequency (TF) underdetermined blind source separation approach based on the Wigner-Ville distribution (WVD) and the Khatri-Rao product to separate N non-stationary sources from M (M < N) mixtures. First, an improved method is proposed for estimating the mixing matrix, where the negative values of the auto WVD of the sources are fully considered. Then, after extracting all the auto-term TF points, the auto WVD value of the sources at every auto-term TF point can be found exactly with the proposed approach, no matter how many active sources there are, as long as N ≤ 2M-1. Further discussion about the extraction of auto-term TF points is given, and finally numerical simulation results are presented to show the superiority of the proposed algorithm by comparing it with existing ones.

  7. Spherical-earth Gravity and Magnetic Anomaly Modeling by Gauss-legendre Quadrature Integration

    NASA Technical Reports Server (NTRS)

    Vonfrese, R. R. B.; Hinze, W. J.; Braile, L. W.; Luca, A. J. (Principal Investigator)

    1981-01-01

    The anomalous potential of gravity and magnetic fields and their spatial derivatives on a spherical Earth are calculated for an arbitrary body represented by an equivalent point source distribution of gravity poles or magnetic dipoles. The distribution of equivalent point sources is determined directly from the coordinate limits of the source volume. Variable integration limits for an arbitrarily shaped body are derived from interpolation of points which approximate the body's surface envelope. The versatility of the method is enhanced by the ability to treat physical property variations within the source volume and to consider variable magnetic fields over the source and observation surface. A number of examples verify and illustrate the capabilities of the technique, including preliminary modeling of potential field signatures for Mississippi embayment crustal structure at satellite elevations.

  8. Location identification for indoor instantaneous point contaminant source by probability-based inverse Computational Fluid Dynamics modeling.

    PubMed

    Liu, X; Zhai, Z

    2008-02-01

    Indoor pollution jeopardizes human health and welfare and may even cause serious morbidity and mortality under extreme conditions. Effectively controlling and improving indoor environment quality requires immediate interpretation of pollutant sensor readings and accurate identification of indoor pollution history and source characteristics (e.g. source location and release time). This procedure is complicated by non-uniform and dynamic indoor contaminant dispersion behaviors as well as diverse sensor network distributions. This paper introduces a probability-based inverse modeling method that is able to identify the source location for an instantaneous point source placed in an enclosed environment with a known source release time. The study presents the mathematical models that address three different sensing scenarios: sensors without concentration readings, sensors with spatial concentration readings, and sensors with temporal concentration readings. The paper demonstrates the inverse modeling method and algorithm with two case studies: air pollution in an office space and in an aircraft cabin. The predictions were successfully verified against the forward simulation settings, indicating a good capability of the method in finding indoor pollutant sources. The research lays a solid ground for further study of the method for more complicated indoor contamination problems. The method developed can help track indoor contaminant source locations with limited sensor outputs. This will ensure an effective and prompt execution of building control strategies and thus achieve a healthy and safe indoor environment. The method can also assist the design of optimal sensor networks.
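
    A minimal sketch of the probability concept (hypothetical names; in the paper the forward model is a CFD simulation, here it is an assumed callable): score each candidate source location by how well the forward-modeled sensor concentrations match the readings, assuming Gaussian measurement errors and a uniform prior.

        import numpy as np

        def source_location_posterior(candidates, forward, readings, sigma):
            """forward(loc) returns modeled concentrations at the sensors
            for a unit instantaneous release at candidate location loc."""
            logp = np.array([
                -0.5 * np.sum((forward(loc) - readings)**2) / sigma**2
                for loc in candidates
            ])
            p = np.exp(logp - logp.max())   # stabilize before normalizing
            return p / p.sum()              # posterior over candidate locations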

  9. Obtaining the phase in the star test using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Salazar Romero, Marcos A.; Vazquez-Montiel, Sergio; Cornejo-Rodriguez, Alejandro

    2004-10-01

    The star test is conceptually perhaps the most basic and simplest of all methods of testing image-forming optical systems; the irradiance distribution at the image of a point source (such as a star) is given by the point spread function (PSF). The PSF is very sensitive to aberrations. One way to quantify the PSF is to measure the irradiance distribution in the image of the source point. On the other hand, if we know the aberrations introduced by the optical system, then using diffraction theory we can calculate the PSF. In this work we propose a method to find the wavefront aberrations starting from the PSF, transforming the problem of fitting an aberration polynomial into an optimization problem solved using a genetic algorithm. We also show that this method is immune to the noise introduced in the recording of the image. Results of the method are shown.
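
    A compact sketch of that optimization (illustrative: a toy three-term aberration basis rather than a full polynomial, and a bare-bones genetic algorithm): the PSF is computed from the pupil phase by FFT, and the GA searches for coefficients whose PSF matches the measured one.

        import numpy as np

        N = 64
        yy, xx = np.mgrid[-1:1:N*1j, -1:1:N*1j]
        rho, theta = np.hypot(xx, yy), np.arctan2(yy, xx)
        pupil = (rho <= 1.0).astype(float)
        BASIS = np.array([2*rho**2 - 1,                       # defocus-like
                          rho**2 * np.cos(2*theta),           # astigmatism-like
                          (3*rho**3 - 2*rho)*np.cos(theta)])  # coma-like

        def psf(coeffs):
            phase = np.tensordot(coeffs, BASIS, axes=1)
            p = np.abs(np.fft.fftshift(np.fft.fft2(pupil * np.exp(1j*phase))))**2
            return p / p.sum()

        def ga_fit(psf_meas, pop=60, gens=150, pm=0.3, rng=np.random.default_rng(0)):
            popn = rng.uniform(-2, 2, (pop, len(BASIS)))
            for _ in range(gens):
                err = np.array([np.sum((psf(c) - psf_meas)**2) for c in popn])
                elite = popn[np.argsort(err)[:pop // 2]]        # selection
                pa, pb = elite[rng.integers(0, len(elite), (2, pop))]
                w = rng.random((pop, 1))
                popn = w * pa + (1 - w) * pb                    # blend crossover
                mut = rng.random(popn.shape) < pm
                popn[mut] += rng.normal(0.0, 0.1, mut.sum())    # mutation
            err = np.array([np.sum((psf(c) - psf_meas)**2) for c in popn])
            return popn[np.argmin(err)]

        est = ga_fit(psf(np.array([0.8, -0.4, 0.25])))   # recover known coefficients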

  10. Lidar method to estimate emission rates from extended sources

    USDA-ARS?s Scientific Manuscript database

    Currently, point measurements, often combined with models, are the primary means by which atmospheric emission rates are estimated from extended sources. However, these methods often fall short in their spatial and temporal resolution and accuracy. In recent years, lidar has emerged as a suitable to...

  11. A Method of Maximum Power Control in Single-phase Utility Interactive Photovoltaic Generation System by using PWM Current Source Inverter

    NASA Astrophysics Data System (ADS)

    Neba, Yasuhiko

    This paper deals with maximum power point tracking (MPPT) control of photovoltaic generation with a single-phase utility-interactive inverter. The photovoltaic arrays are connected to the utility through a PWM current source inverter. The use of pulsating dc current and voltage allows the maximum power point to be searched. The inverter can regulate the array voltage and keep the arrays at the maximum power point. This paper gives the control method and experimental results.
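
    The paper searches the maximum power point using the pulsating array current and voltage of the current source inverter; as a stand-in, the sketch below shows the common perturb-and-observe hill-climbing variant of MPPT, with hypothetical measurement and control callables:

        def perturb_and_observe(measure_vi, set_i_ref, i_ref=1.0, step=0.05, n=1000):
            """Perturb the inverter's current reference and keep moving in
            the direction that increases array power; measure_vi() returns
            the array's (voltage, current)."""
            direction, p_prev = +1, 0.0
            for _ in range(n):
                set_i_ref(i_ref)
                v, i = measure_vi()
                p = v * i
                if p < p_prev:            # power dropped: reverse direction
                    direction = -direction
                p_prev = p
                i_ref += direction * step
            return i_ref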

  12. Method for transporting impellent gases

    NASA Technical Reports Server (NTRS)

    Papst, H.

    1975-01-01

    The described system, DAL, comprises a method and a device for the transportation of buoyant impellent gases without the need for expensive pipes and liquid tankers. The gas is self-air-lifted from its source to a consignment point by means of voluminous, light, hollow bodies. Upon release of the gas at the consignment point, the bodies are filled with another cheap buoyant gas (steam or heated air) for the return trip to the source. In both directions, substantial quantities of supplementary freight goods can be transported. Requirements and advantages are presented.

  13. Hybrid Skyshine Calculations for Complex Neutron and Gamma-Ray Sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shultis, J. Kenneth

    2000-10-15

    A two-step hybrid method is described for computationally efficient estimation of neutron and gamma-ray skyshine doses far from a shielded source. First, the energy and angular dependence of radiation escaping into the atmosphere from a source containment is determined by a detailed transport model such as MCNP. Then, an effective point source with this energy and angular dependence is used in the integral line-beam method to transport the radiation through the atmosphere up to 2500 m from the source. An example spent-fuel storage cask is analyzed with this hybrid method and compared to detailed MCNP skyshine calculations.

  14. Spherical earth gravity and magnetic anomaly analysis by equivalent point source inversion

    NASA Technical Reports Server (NTRS)

    Von Frese, R. R. B.; Hinze, W. J.; Braile, L. W.

    1981-01-01

    To facilitate geologic interpretation of satellite elevation potential field data, analysis techniques are developed and verified in the spherical domain that are commensurate with conventional flat-earth methods of potential field interpretation. A powerful approach to the spherical earth problem relates potential field anomalies to a distribution of equivalent point sources by least squares matrix inversion. Linear transformations of the equivalent source field lead to corresponding geoidal anomalies, pseudo-anomalies, vector anomaly components, spatial derivatives, continuations, and differential magnetic pole reductions. A number of examples using 1 deg-averaged surface free-air gravity anomalies and POGO satellite magnetometer data for the United States, Mexico, and Central America illustrate the capabilities of the method.
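
    A minimal flat-earth sketch of the inversion step (the paper uses spherical-earth kernels; a simple inverse-square kernel and hypothetical names here): build the kernel matrix relating point-source strengths to observations and solve by least squares.

        import numpy as np

        def equivalent_source_fit(obs_xyz, anomalies, src_xyz):
            """Solve A m ~= anomalies for point-source strengths m."""
            A = np.empty((len(obs_xyz), len(src_xyz)))
            for j, s in enumerate(src_xyz):
                r = np.linalg.norm(obs_xyz - s, axis=1)
                A[:, j] = 1.0 / r**2           # inverse-square kernel
            m, *_ = np.linalg.lstsq(A, anomalies, rcond=None)
            return m

        # linear transformations (continuation, derivatives, components) are
        # then evaluated by applying new kernels A' to the fitted strengths m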

  15. Quantitative identification of riverine nitrogen from point, direct runoff and base flow sources.

    PubMed

    Huang, Hong; Zhang, Baifa; Lu, Jun

    2014-01-01

    We present a methodological example for quantifying the contributions of riverine total nitrogen (TN) from point, direct runoff, and base flow sources by combining a recursive digital filter technique and statistical methods. First, we separated daily riverine flow into direct runoff and base flow using a recursive digital filter technique; then, a statistical model was established using daily simultaneous data for TN load, direct runoff rate, base flow rate, and temperature; and finally, the TN loads from direct runoff and base flow sources were inversely estimated. As a case study, this approach was adopted to identify the TN source contributions in the Changle River, eastern China. Results showed that, during 2005-2009, the total annual TN input to the river was 1,700.4±250.2 tons, and the contributions of point, direct runoff, and base flow sources were 17.8±2.8%, 45.0±3.6%, and 37.2±3.9%, respectively. The innovation of the approach is that the nitrogen from direct runoff and base flow sources can be separately quantified. The approach is simple but detailed enough to take the major factors into account, providing an effective and reliable method for riverine nitrogen loading estimation and source apportionment.
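
    The flow-separation step can be illustrated with the widely used one-parameter recursive digital filter (Lyne-Hollick form); the paper's exact filter and pass scheme may differ, so this single forward pass is only a sketch:

        import numpy as np

        def baseflow_filter(q, alpha=0.925):
            """Split daily streamflow q into (direct runoff, base flow)."""
            q = np.asarray(q, float)
            f = np.zeros_like(q)          # filtered direct-runoff signal
            for k in range(1, len(q)):
                f[k] = alpha * f[k-1] + 0.5 * (1 + alpha) * (q[k] - q[k-1])
                f[k] = min(max(f[k], 0.0), q[k])   # keep components physical
            return f, q - f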

  16. Depth-enhanced three-dimensional-two-dimensional convertible display based on modified integral imaging.

    PubMed

    Park, Jae-Hyeung; Kim, Hak-Rin; Kim, Yunhee; Kim, Joohwan; Hong, Jisoo; Lee, Sin-Doo; Lee, Byoungho

    2004-12-01

    A depth-enhanced three-dimensional-two-dimensional convertible display that uses a polymer-dispersed liquid crystal based on the principle of integral imaging is proposed. In the proposed method, a lens array is located behind a transmission-type display panel to form an array of point-light sources, and a polymer-dispersed liquid crystal is electrically controlled to pass or to scatter light coming from these point-light sources. Therefore, three-dimensional-two-dimensional conversion is accomplished electrically without any mechanical movement. Moreover, the nonimaging structure of the proposed method increases the expressible depth range considerably. We explain the method of operation and present experimental results.

  17. Analysis of an ultrasonically rotating droplet by moving particle semi-implicit and distributed point source method in a rotational coordinate

    NASA Astrophysics Data System (ADS)

    Wada, Yuji; Yuge, Kohei; Tanaka, Hiroki; Nakamura, Kentaro

    2017-07-01

    Numerical analysis of the rotation of an ultrasonically levitated droplet in a centrifugal coordinate system is discussed. A droplet levitated in an acoustic chamber is simulated using the distributed point source method and the moving particle semi-implicit method. The centrifugal coordinate system is adopted to avoid the Laplacian differential error, which causes numerical divergence or inaccuracy in global-coordinate calculations. Consequently, the duration of calculation stability has increased to 30 times that of a previous paper. Moreover, the droplet radius versus rotational acceleration characteristics show a trend similar to the theoretical and experimental values in the literature.

  18. Point-source stochastic-method simulations of ground motions for the PEER NGA-East Project

    USGS Publications Warehouse

    Boore, David

    2015-01-01

    Ground motions for the PEER NGA-East project were simulated using a point-source stochastic method. The simulated motions are provided for distances between 0 and 1200 km, M from 4 to 8, and 25 ground-motion intensity measures: peak ground velocity (PGV), peak ground acceleration (PGA), and 5%-damped pseudo-absolute response spectral acceleration (PSA) for 23 periods ranging from 0.01 s to 10.0 s. Tables of motions are provided for each of six attenuation models. The attenuation-model-dependent stress parameters used in the stochastic-method simulations were derived from inversion of PSA data from eight earthquakes in eastern North America.
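
    A bare-bones sketch of the point-source stochastic method (spectral shape only, arbitrary amplitude units; the method's path, site, and duration models are omitted): windowed Gaussian noise is given the amplitude spectrum of an omega-squared (Brune) source with 1/R geometric spreading, then transformed back to the time domain.

        import numpy as np

        def stochastic_point_source(M, R_km, stress_bars=100.0, beta=3.5,
                                    dt=0.005, dur=20.0,
                                    rng=np.random.default_rng(1)):
            n = int(dur / dt)
            t = np.arange(n) * dt
            noise = rng.standard_normal(n) * t * np.exp(-3.0 * t / dur)  # crude window
            f = np.fft.rfftfreq(n, dt)
            m0 = 10 ** (1.5 * M + 16.05)                         # moment, dyne-cm
            fc = 4.906e6 * beta * (stress_bars / m0) ** (1 / 3)  # corner frequency, Hz
            shape = (2 * np.pi * f) ** 2 * m0 / (1 + (f / fc) ** 2) / R_km
            spec = shape * np.exp(1j * np.angle(np.fft.rfft(noise)))
            return t, np.fft.irfft(spec, n)

        t, acc = stochastic_point_source(M=6.0, R_km=50.0)
        pga = np.abs(acc).max()     # one intensity measure from the trace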

  19. Analysis of non-point and point source pollution in China: case study in Shima Watershed in Guangdong Province

    NASA Astrophysics Data System (ADS)

    Fang, Huaiyang; Lu, Qingshui; Gao, Zhiqiang; Shi, Runhe; Gao, Wei

    2013-09-01

    China's economy has grown rapidly since 1978. Rapid economic growth led to fast growth in fertilizer and pesticide consumption, and a significant portion of these fertilizers and pesticides entered the water and degraded water quality. At the same time, rapid economic growth caused more and more point source pollution to be discharged into the water, and eutrophication has become a major threat to water bodies. Worsening environmental problems forced governments to take measures to control water pollution. We extracted land cover from Landsat TM images, calculated point source pollution with the export coefficient method, and then ran the SWAT model to simulate non-point source pollution. We found that the annual TP load from industrial pollution into rivers is 115.0 t in the entire watershed. Average annual TP loads from each sub-basin ranged from 0 to 189.4 tons. Higher TP loads from livestock and human living mainly occur in areas that are far from large towns or cities and where the TP loads from industry are relatively low. The mean annual TP load delivered to the streams was 246.4 tons; the highest TP loads occurred in the north of the area and the lowest mainly in the middle. Point source pollution therefore accounts for a high proportion of the load in this area, and governments should take measures to control it.

  20. Estimation of Enterococci Input from Bathers and Animals on A Recreational Beach Using Camera Images

    PubMed Central

    Wang, John D; Solo-Gabriele, Helena M; Abdelzaher, Amir M; Fleming, Lora E

    2010-01-01

    Enterococci are used nationwide as a water quality indicator for marine recreational beaches. Prior research has demonstrated that enterococci inputs to the study beach site (located in Miami, FL) are dominated by non-point sources (including humans and animals). We estimated their respective source functions by developing a counting methodology for individuals in order to better understand their non-point source load impacts. The method utilizes camera images of the beach taken at regular time intervals to determine the number of people and animal visitors. It translates raw image counts for weekdays and weekend days into daily and monthly visitation rates. Enterococci source functions were computed from the observed number of unique individuals for an average day of each month of the year, and from average load contributions for humans and for animals. Results indicate that dogs represent the largest source of enterococci relative to humans and birds. PMID:20381094

  1. Improving signal-to-noise in the direct imaging of exoplanets and circumstellar disks with MLOCI

    NASA Astrophysics Data System (ADS)

    Wahhaj, Zahed; Cieza, Lucas A.; Mawet, Dimitri; Yang, Bin; Canovas, Hector; de Boer, Jozua; Casassus, Simon; Ménard, François; Schreiber, Matthias R.; Liu, Michael C.; Biller, Beth A.; Nielsen, Eric L.; Hayward, Thomas L.

    2015-09-01

    We present a new algorithm designed to improve the signal-to-noise ratio (S/N) of point and extended source detections around bright stars in direct imaging data. One of our innovations is that we insert simulated point sources into the science images, which we then try to recover with maximum S/N. This improves the S/N of real point sources elsewhere in the field. The algorithm, based on the locally optimized combination of images (LOCI) method, is called Matched LOCI or MLOCI. We show with Gemini Planet Imager (GPI) data on HD 135344 B and Near-Infrared Coronagraphic Imager (NICI) data on several stars that the new algorithm can improve the S/N of point source detections by 30-400% over past methods. We also find no increase in false detection rates. No prior knowledge of candidate companion locations is required to use MLOCI. On the other hand, while non-blind applications may yield linear combinations of science images that seem to increase the S/N of true sources by a factor >2, they can also yield false detections at high rates. This is a potential pitfall when trying to confirm marginal detections or to redetect point sources found in previous epochs. These findings are relevant to any method where the coefficients of the linear combination are considered tunable, e.g., LOCI and principal component analysis (PCA). Thus we recommend that false detection rates be analyzed when using these techniques. Based on observations obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (USA), the Science and Technology Facilities Council (UK), the National Research Council (Canada), CONICYT (Chile), the Australian Research Council (Australia), Ministério da Ciência e Tecnologia (Brazil) and Ministerio de Ciencia, Tecnología e Innovación Productiva (Argentina).

  2. [Empirical study on non-point sources pollution based on landscape pattern & ecological processes theory: a case of soil water loss on the Loess Plateau in China].

    PubMed

    Suo, An-ning; Wang, Tian-ming; Wang, Hui; Yu, Bo; Ge, Jian-ping

    2006-12-01

    Non-point source pollution is one of the main pollution modes degrading the earth's surface environment. Addressing soil water loss (a typical non-point source pollution problem) on the Loess Plateau in China, this paper applied a landscape pattern evaluation method to twelve watersheds of the Jinghe River Basin by means of the location-weighted landscape contrast index (LCI) and the landscape slope index (LSI). The results showed that the LSI of farmland, low-density grassland, and forest land, together with the LCI, responded significantly to the soil erosion modulus and to the depth of runoff, while the relationships between these landscape indices and the runoff variation index and erosion variation index were not statistically significant. This indicates that the LSI and LCI are good indicators of soil water loss and thus have great potential in non-point source pollution risk evaluation.

  3. Spherical-earth gravity and magnetic anomaly modeling by Gauss-Legendre quadrature integration

    NASA Technical Reports Server (NTRS)

    Von Frese, R. R. B.; Hinze, W. J.; Braile, L. W.; Luca, A. J.

    1981-01-01

    Gauss-Legendre quadrature integration is used to calculate the anomalous potential of gravity and magnetic fields and their spatial derivatives on a spherical earth. The procedure involves representation of the anomalous source as a distribution of equivalent point gravity poles or point magnetic dipoles. The distribution of equivalent point sources is determined directly from the volume limits of the anomalous body. The variable limits of integration for an arbitrarily shaped body are obtained from interpolations performed on a set of body points which approximate the body's surface envelope. The versatility of the method is shown by its ability to treat physical property variations within the source volume as well as variable magnetic fields over the source and observation surface. Examples are provided which illustrate the capabilities of the technique, including a preliminary modeling of potential field signatures for the Mississippi embayment crustal structure at 450 km.
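
    A minimal sketch of the quadrature idea follows, simplified to a uniform-density rectangular prism in Cartesian coordinates (the paper treats arbitrarily shaped bodies on a spherical earth with variable physical properties): Gauss-Legendre nodes place equivalent point masses whose weighted attractions approximate the volume integral.

      import numpy as np

      G = 6.674e-11                                  # gravitational constant (SI)

      def gz_prism(obs, bounds, rho, n=8):
          """Vertical attraction at obs from a uniform prism (z positive down)."""
          nodes, wts = np.polynomial.legendre.leggauss(n)
          (x1, x2), (y1, y2), (z1, z2) = bounds
          xs = 0.5 * (x2 - x1) * nodes + 0.5 * (x2 + x1)   # map [-1,1] to each axis
          ys = 0.5 * (y2 - y1) * nodes + 0.5 * (y2 + y1)
          zs = 0.5 * (z2 - z1) * nodes + 0.5 * (z2 + z1)
          jac = 0.125 * (x2 - x1) * (y2 - y1) * (z2 - z1)  # volume Jacobian
          gz = 0.0
          for wi, xq in zip(wts, xs):                      # each node acts as an
              for wj, yq in zip(wts, ys):                  # equivalent point mass
                  for wk, zq in zip(wts, zs):
                      dx, dy, dz = xq - obs[0], yq - obs[1], zq - obs[2]
                      gz += wi * wj * wk * dz / (dx*dx + dy*dy + dz*dz) ** 1.5
          return G * rho * jac * gz

      print(gz_prism((0, 0, 0), ((-1e3, 1e3), (-1e3, 1e3), (1e3, 2e3)), 2670.0))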

  4. The estimation of the load of non-point source nitrogen and phosphorus based on observation experiments and export coefficient method in Three Gorges Reservoir Area

    NASA Astrophysics Data System (ADS)

    Tong, X. X.; Hu, B.; Xu, W. S.; Liu, J. G.; Zhang, P. C.

    2017-12-01

    In this paper, the Three Gorges Reservoir Area (TGRA) was chosen as the study area. The export coefficients of different land-use types were calculated through observation experiments and literature consultation, and the loads of non-point source (NPS) nitrogen and phosphorus from different pollution sources, such as farmland, decentralized livestock and poultry breeding, and domestic sources, were then estimated. The results are as follows: the pollution load of dry land is the main source of farmland pollution; the total nitrogen loads of the pollution sources rank, from high to low, as livestock breeding pollution, domestic pollution, and land use pollution, while the phosphorus loads rank, from high to low, as land use pollution, livestock breeding pollution, and domestic pollution. Therefore, reasonable farmland management, effective control of dry land fertilization, and control of sewage discharge from livestock breeding are the keys to the prevention and control of NPS nitrogen and phosphorus in the TGRA.
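
    For illustration, the export coefficient calculation reduces to multiplying each source's magnitude by its coefficient and summing; the sketch below uses made-up coefficients and areas, not the paper's calibrated values.

      # Export coefficient method: load = sum(coefficient x source magnitude).
      export_coeff_tn = {"dry land": 29.3, "paddy": 9.1, "forest": 2.4}   # kg/(ha*yr), assumed
      area_ha = {"dry land": 120_000, "paddy": 40_000, "forest": 300_000} # assumed

      tn_load_kg = sum(export_coeff_tn[lu] * area_ha[lu] for lu in export_coeff_tn)
      print(f"total nitrogen load: {tn_load_kg / 1e6:.1f} x 10^3 t/yr")    # 4.6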

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nitao, J J

    The goal of the Event Reconstruction Project is to find the location and strength of atmospheric release points, both stationary and moving. Source inversion relies on observational data as input, and the methodology is sufficiently general to allow various forms of data. In this report, the authors focus primarily on concentration measurements obtained at point monitoring locations at various times. The algorithms being investigated in the Project are MCMC (Markov Chain Monte Carlo) and SMC (Sequential Monte Carlo) methods, classical inversion methods, and hybrids of these. They refer the reader to the report by Johannesson et al. (2004) for explanations of these methods. These methods require computing the concentrations at all monitoring locations for a given "proposed" source characteristic (locations and strength history). It is anticipated that the largest portion of the CPU time will be spent performing this computation; MCMC and SMC will require it to be done at least tens of thousands of times. Therefore, an efficient means of computing forward model predictions is important to making the inversion practical. In this report they show how Green's functions and reciprocal Green's functions can significantly accelerate forward model computations. First, instead of computing a plume for each possible source strength history, they compute plumes from unit impulse sources only. By linear superposition, they can obtain the response for any strength history; this response is given by the forward Green's function. Second, they may use the law of reciprocity. Suppose that they require the concentration at a single monitoring point x_m due to a potential (unit impulse) source located at x_s. Instead of computing a plume with source location x_s, they compute a "reciprocal plume" whose (unit impulse) source is at the monitoring location x_m. The reciprocal plume is computed using a reversed-direction wind field; the wind field and transport coefficients must also be appropriately time-reversed. Reciprocity says that the concentration of the reciprocal plume at x_s is related to the desired concentration at x_m. Since there are many fewer monitoring points than potential source locations, the number of forward model computations is drastically reduced.
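
    A minimal sketch of the superposition step: once unit-impulse responses (forward Green's functions) have been tabulated, the concentration at a monitor for any proposed strength history is a discrete convolution, with no new transport runs needed. The names, shapes, and example histories are illustrative, not the project's code.

      import numpy as np

      def monitor_concentration(green, strength, dt=1.0):
          """green: unit-impulse plume response sampled at one monitor (nt,);
          strength: proposed source strength history (nt,)."""
          return dt * np.convolve(strength, green)[:len(green)]

      # any proposed strength history reuses the same precomputed response
      g = np.exp(-np.arange(50) / 10.0)        # illustrative impulse response
      s = np.zeros(50); s[5:15] = 2.0          # proposed release history
      c = monitor_concentration(g, s)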

  6. [Regulation framework of watershed landscape pattern for non-point source pollution control based on 'source-sink' theory: A case study in the watershed of Maluan Bay, Xiamen City, China].

    PubMed

    Huang, Ning; Wang, Hong Ying; Lin, Tao; Liu, Qi Ming; Huang, Yun Feng; Li, Jian Xiong

    2016-10-01

    Watershed landscape pattern regulation and optimization based on 'source-sink' theory is a cost-effective measure for non-point source pollution control and is still at the exploratory stage. Taking the whole watershed as the research object, and building on landscape ecology, related theories, and existing research results, a regulation framework of watershed landscape pattern for non-point source pollution control was developed at two levels based on 'source-sink' theory in this study: 1) at the watershed level, a reasonable basic combination and spatial pattern of 'source-sink' landscapes was analyzed, and a holistic method for regulating and optimizing the landscape pattern was constructed; 2) at the landscape patch level, key 'source' landscapes were taken as the focus of regulation and optimization. Firstly, four identification criteria for key 'source' landscapes were developed: landscape pollutant loading per unit area, landscape slope, long and narrow transfer 'source' landscapes, and pollutant loading per unit length of 'source' landscape along the riverbank. Secondly, nine types of regulation and optimization methods for different key 'source' landscapes in rural and urban areas were established, following three rules: inlaying 'sink' landscape, supplementing banding 'sink' landscape, and enhancing the pollutant capacity of the original 'sink' landscape. Finally, the regulation framework was applied to the watershed of Maluan Bay in Xiamen City: a holistic regulation and optimization mode for the watershed landscape pattern of Maluan Bay and key 'source' landscape regulation and optimization measures for the three zones were developed, based on GIS technology, remote sensing images, and a DEM.

  7. The VLITE Post-Processing Pipeline

    NASA Astrophysics Data System (ADS)

    Richards, Emily E.; Clarke, Tracy; Peters, Wendy; Polisensky, Emil; Kassim, Namir E.

    2018-01-01

    A post-processing pipeline to adaptively extract and catalog point sources is being developed to enhance the scientific value and accessibility of data products generated by the VLA Low-band Ionosphere and Transient Experiment (VLITE) on the Karl G. Jansky Very Large Array (VLA). In contrast to other radio sky surveys, the commensal observing mode of VLITE results in varying depths, sensitivities, and spatial resolutions across the sky based on the configuration of the VLA, location on the sky, and time on source specified by the primary observer for their independent science objectives. Therefore, previously developed tools and methods for generating source catalogs and survey statistics are not always appropriate for VLITE's diverse and growing set of data. A raw catalog of point sources extracted from every VLITE image will be created from source fit parameters stored in a queryable database. Point sources will be measured using the Python Blob Detector and Source Finder software (PyBDSF; Mohan & Rafferty 2015). Sources in the raw catalog will be associated with previous VLITE detections in a resolution- and sensitivity-dependent manner, and cross-matched to other radio sky surveys to aid in the detection of transient and variable sources. Final data products will include separate, tiered point source catalogs grouped by sensitivity limit and spatial resolution.
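
    As a rough illustration of the extraction step, a PyBDSF call of the kind the pipeline wraps might look like the sketch below; the file name and threshold values are assumptions, not VLITE's actual settings.

      import bdsf

      # detect islands and fit Gaussians to point sources in one image
      img = bdsf.process_image("vlite_image.fits",
                               thresh_pix=5.0,   # peak threshold (sigma), assumed
                               thresh_isl=3.0)   # island threshold (sigma), assumed
      # write the fitted source list for ingestion into the raw catalog
      img.write_catalog(outfile="raw_sources.fits", format="fits",
                        catalog_type="srl", clobber=True)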

  8. Combining stable isotopes with contamination indicators: A method for improved investigation of nitrate sources and dynamics in aquifers with mixed nitrogen inputs.

    PubMed

    Minet, E P; Goodhue, R; Meier-Augenstein, W; Kalin, R M; Fenton, O; Richards, K G; Coxon, C E

    2017-11-01

    Excessive nitrate (NO₃⁻) concentration in groundwater raises health and environmental issues that must be addressed by all European Union (EU) member states under the Nitrates Directive and the Water Framework Directive. The identification of NO₃⁻ sources is critical to efficiently control or reverse the NO₃⁻ contamination that affects many aquifers. In that respect, the use of the stable isotope ratios ¹⁵N/¹⁴N and ¹⁸O/¹⁶O in NO₃⁻ (expressed as δ¹⁵N-NO₃⁻ and δ¹⁸O-NO₃⁻, respectively) has long shown its value. However, limitations exist in complex environments where multiple nitrogen (N) sources coexist. This two-year study explores a method for improved NO₃⁻ source investigation in a shallow unconfined aquifer with mixed N inputs and a long-established NO₃⁻ problem. In this tillage-dominated area of free-draining soil and subsoil, the suspected NO₃⁻ sources were diffuse applications of artificial fertiliser and organic point sources (septic tanks and farmyards). Bearing in mind that artificial diffuse sources were ubiquitous, groundwater samples were first classified according to a combination of two indicators of point source contamination: presence/absence of organic point sources (i.e. septic tank and/or farmyard) near sampling wells, and exceedance/non-exceedance of a contamination threshold value for sodium (Na⁺) in groundwater. This classification identified three contamination groups: agricultural diffuse source but no point source (D+P-), agricultural diffuse and point source (D+P+), and agricultural diffuse with point source occurrence ambiguous (D+P±). Thereafter, δ¹⁵N-NO₃⁻ and δ¹⁸O-NO₃⁻ data were superimposed on the classification. As δ¹⁵N-NO₃⁻ was plotted against δ¹⁸O-NO₃⁻, comparisons were made between the different contamination groups. Overall, both δ variables were significantly and positively correlated (p < 0.0001, r_s = 0.599, slope of 0.5), which was indicative of denitrification. An inspection of the contamination groups revealed that denitrification did not occur in the absence of point source contamination (group D+P-). In fact, strong significant denitrification lines occurred only in the D+P+ and D+P± groups (p < 0.0001, r_s > 0.6, 0.53 ≤ slope ≤ 0.76), i.e. where point source contamination was characterised or suspected. These lines originated from the 2-6‰ range for δ¹⁵N-NO₃⁻, which suggests that i) NO₃⁻ contamination was dominated by an agricultural diffuse N source (most likely the large organic matter pool that has incorporated ¹⁵N-depleted nitrogen from artificial fertiliser in agricultural soils and whose nitrification is stimulated by ploughing and fertilisation) rather than by point sources, and ii) denitrification was possibly favoured by high dissolved organic carbon (DOC) from point sources. Combining contamination indicators with a large stable isotope dataset collected over a large study area could therefore improve our understanding of NO₃⁻ contamination processes in groundwater for better land use management. We hypothesise that in future research, additional contamination indicators (e.g. pharmaceutical molecules) could also be combined to disentangle NO₃⁻ contamination from animal and human wastes.

  9. A new method of building footprints detection using airborne laser scanning data and multispectral image

    NASA Astrophysics Data System (ADS)

    Luo, Yiping; Jiang, Ting; Gao, Shengli; Wang, Xin

    2010-10-01

    This paper presents a new approach for detecting building footprints from a combination of a registered aerial image with multispectral bands and airborne laser scanning data, synchronously obtained by a Leica Geosystems ALS40 and an Applanix DACS-301 on the same platform. A two-step method for building detection is presented, consisting of selecting 'building' candidate points and then classifying the candidate points. A digital surface model (DSM) derived from last-pulse laser scanning data was first filtered, and the laser points were classified into the classes 'ground' and 'building or tree' based on a mathematical morphological filter. Then, the 'ground' points were resampled into a digital elevation model (DEM), and a normalized DSM (nDSM) was generated from the DEM and DSM. The candidate points were selected from the 'building or tree' points by height value and area threshold in the nDSM. The candidate points were further classified into building points and tree points using the support vector machine (SVM) classification method. Two classification tests were carried out, one using features only from the laser scanning data and one using associated features from both input data sources. The features included height, height finite difference, RGB band values, and so on. The RGB values of the points were acquired by matching the laser scanning data and the image using the collinearity equations. The features of the training points were presented as input data for the SVM classification method, and cross validation was used to select the best classification parameters. The decision function could then be constructed from the classification parameters, and the class of each candidate point was determined by this function. The results showed that the associated features from both input data sources were superior to features from the laser scanning data alone, and an accuracy of more than 90% was achieved for buildings.
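
    A minimal sketch of the classification step under stated assumptions: the per-point features (height, height finite difference, RGB) are assumed precomputed, random data stands in for real training points, and the SVM parameters would be chosen by cross validation as in the paper.

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      # assumed precomputed features for nDSM candidate points:
      # [height, height finite difference, R, G, B]; labels: 1 building, 0 tree
      X_train = rng.normal(size=(200, 5))
      y_train = rng.integers(0, 2, size=200)

      clf = SVC(kernel="rbf", C=10.0, gamma="scale")   # parameters via cross validation
      print(cross_val_score(clf, X_train, y_train, cv=5).mean())
      clf.fit(X_train, y_train)
      labels = clf.predict(rng.normal(size=(50, 5)))   # classify candidate points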

  10. [Spatial heterogeneity and classified control of agricultural non-point source pollution in Huaihe River Basin].

    PubMed

    Zhou, Liang; Xu, Jian-Gang; Sun, Dong-Qi; Ni, Tian-Hua

    2013-02-01

    Agricultural non-point source pollution is an important factor in river deterioration, so identifying key source areas and concentrating control on them is the most effective approach to non-point source pollution control. This study adopts the inventory method to analyze four kinds of pollution sources and their emission intensities for chemical oxygen demand (COD), total nitrogen (TN), and total phosphorus (TP) in 173 counties (cities, districts) in the Huaihe River Basin. The four pollution sources are livestock breeding, rural life, farmland cultivation, and aquaculture. The paper mainly addresses the identification of sensitive areas of non-point pollution, key pollution sources, and their spatial distribution characteristics through clustering, sensitivity evaluation, and spatial analysis. A geographic information system (GIS) and SPSS were used to carry out this study. The results show that the COD, TN and TP emissions of agricultural non-point sources in the Huaihe River Basin in 2009 were 206.74 × 10⁴ t, 66.49 × 10⁴ t, and 8.74 × 10⁴ t, respectively; the emission intensities were 7.69, 2.47, and 0.32 t·hm⁻²; and the proportions of COD, TN, and TP emissions were 73%, 24%, and 3%. The major pollution sources of COD, TN and TP were livestock breeding and rural life; the sensitive areas and priority pollution control areas within the basin are some sub-basins of the upper branches of the Huaihe River, such as the Shahe, Yinghe, Beiru, Jialu and Qingyi rivers; and livestock breeding is the key pollution source in the priority pollution control areas. Finally, the paper concludes that the rural life pollution type has the highest pollution contribution rate, while comprehensive pollution is a type that is hard to control.

  11. a Gross Error Elimination Method for Point Cloud Data Based on Kd-Tree

    NASA Astrophysics Data System (ADS)

    Kang, Q.; Huang, G.; Yang, S.

    2018-04-01

    Point cloud data has become one of the most widely used data sources in the field of remote sensing. Key steps in the pre-processing of point cloud data focus on gross error elimination and quality control. Owing to the sheer volume of point cloud data, existing gross error elimination methods consume massive amounts of memory and time. This paper employs a new method that constructs a Kd-tree over the point cloud, searches each point's k nearest neighbors, and applies an appropriate threshold to judge whether a target point is an outlier. Experimental results show that the proposed algorithm removes gross errors from point cloud data while decreasing memory consumption and improving efficiency.
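
    A minimal sketch of the described procedure using SciPy's Kd-tree follows; the mean-neighbor-distance threshold rule is an assumption, as the abstract does not spell out the paper's exact criterion.

      import numpy as np
      from scipy.spatial import cKDTree

      def remove_outliers(points, k=8, n_sigma=3.0):
          tree = cKDTree(points)                   # Kd-tree construction
          dists, _ = tree.query(points, k=k + 1)   # first neighbor is the point itself
          mean_d = dists[:, 1:].mean(axis=1)       # mean k-nearest-neighbor distance
          keep = mean_d < mean_d.mean() + n_sigma * mean_d.std()
          return points[keep]                      # drop points flagged as gross errors

      pts = np.random.default_rng(0).normal(size=(1000, 3))
      print(remove_outliers(pts).shape)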

  12. A guide to differences between stochastic point-source and stochastic finite-fault simulations

    USGS Publications Warehouse

    Atkinson, G.M.; Assatourians, K.; Boore, D.M.; Campbell, K.; Motazedian, D.

    2009-01-01

    Why do stochastic point-source and finite-fault simulation models not agree on the predicted ground motions for moderate earthquakes at large distances? This question was posed by Ken Campbell, who attempted to reproduce the Atkinson and Boore (2006) ground-motion prediction equations for eastern North America using the stochastic point-source program SMSIM (Boore, 2005) in place of the finite-source stochastic program EXSIM (Motazedian and Atkinson, 2005) that was used by Atkinson and Boore (2006) in their model. His comparisons suggested that a higher stress drop is needed in the context of SMSIM to produce an average match, at larger distances, with the model predictions of Atkinson and Boore (2006) based on EXSIM; this is so even for moderate magnitudes, which should be well-represented by a point-source model. Why? The answer to this question is rooted in significant differences between point-source and finite-source stochastic simulation methodologies, specifically as implemented in SMSIM (Boore, 2005) and EXSIM (Motazedian and Atkinson, 2005) to date. Point-source and finite-fault methodologies differ in general in several important ways: (1) the geometry of the source; (2) the definition and application of duration; and (3) the normalization of finite-source subsource summations. Furthermore, the specific implementation of the methods may differ in their details. The purpose of this article is to provide a brief overview of these differences, their origins, and implications. This sets the stage for a more detailed companion article, "Comparing Stochastic Point-Source and Finite-Source Ground-Motion Simulations: SMSIM and EXSIM," in which Boore (2009) provides modifications and improvements in the implementations of both programs that narrow the gap and result in closer agreement. These issues are important because both SMSIM and EXSIM have been widely used in the development of ground-motion prediction equations and in modeling the parameters that control observed ground motions.

  13. LEAP: Looking beyond pixels with continuous-space EstimAtion of Point sources

    NASA Astrophysics Data System (ADS)

    Pan, Hanjie; Simeoni, Matthieu; Hurley, Paul; Blu, Thierry; Vetterli, Martin

    2017-12-01

    Context. Two main classes of imaging algorithms have emerged in radio interferometry: the CLEAN algorithm and its multiple variants, and compressed-sensing inspired methods. They are both discrete in nature, and estimate source locations and intensities on a regular grid. For the traditional CLEAN-based imaging pipeline, the resolution power of the tool is limited by the width of the synthesized beam, which is inversely proportional to the largest baseline. The finite rate of innovation (FRI) framework is a robust method to find the locations of point-sources in a continuum without grid imposition. The continuous formulation makes the FRI recovery performance only dependent on the number of measurements and the number of sources in the sky. FRI can theoretically find sources below the perceived tool resolution. To date, FRI had never been tested in the extreme conditions inherent to radio astronomy: weak signal / high noise, huge data sets, large numbers of sources. Aims: The aims were (i) to adapt FRI to radio astronomy, (ii) verify it can recover sources in radio astronomy conditions with more accurate positioning than CLEAN, and possibly resolve some sources that would otherwise be missed, (iii) show that sources can be found using less data than would otherwise be required to find them, and (iv) show that FRI does not lead to an augmented rate of false positives. Methods: We implemented a continuous domain sparse reconstruction algorithm in Python. The angular resolution performance of the new algorithm was assessed under simulation, and with visibility measurements from the LOFAR telescope. Existing catalogs were used to confirm the existence of sources. Results: We adapted the FRI framework to radio interferometry, and showed that it is possible to determine accurate off-grid point-source locations and their corresponding intensities. In addition, FRI-based sparse reconstruction required less integration time and smaller baselines to reach a comparable reconstruction quality compared to a conventional method. The achieved angular resolution is higher than the perceived instrument resolution, and very close sources can be reliably distinguished. The proposed approach has cubic complexity in the total number (typically around a few thousand) of uniform Fourier data of the sky image estimated from the reconstruction. It is also demonstrated that the method is robust to the presence of extended-sources, and that false-positives can be addressed by choosing an adequate model order to match the noise level.
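
    A minimal 1-D sketch of FRI recovery with an annihilating filter, under noiseless assumptions, to illustrate the off-grid estimation LEAP builds on; the actual algorithm works on noisy interferometric visibilities with a robust model-order choice, and amplitude recovery is omitted here.

      import numpy as np

      tau, K = 1.0, 3                                  # period, number of sources
      t_true = np.array([0.12, 0.47, 0.81])            # off-grid locations
      a_true = np.array([1.0, 0.6, 1.4])
      m = np.arange(2 * K + 1)                         # uniform Fourier samples
      X = (a_true * np.exp(-2j * np.pi * np.outer(m, t_true) / tau)).sum(axis=1)

      # Toeplitz system: the filter h annihilates X, i.e. (h * X)[n] = 0
      A = np.array([[X[K + i - l] for l in range(K + 1)]
                    for i in range(len(X) - K)])
      h = np.linalg.svd(A)[2][-1].conj()               # null vector of A
      z = np.roots(h)                                  # roots encode locations
      t_est = np.sort(np.mod(-np.angle(z) * tau / (2 * np.pi), tau))
      print(t_est)                                     # ~ [0.12, 0.47, 0.81]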

  14. Detection of ferromagnetic target based on mobile magnetic gradient tensor system

    NASA Astrophysics Data System (ADS)

    Gang, Y. I. N.; Yingtang, Zhang; Zhining, Li; Hongbo, Fan; Guoquan, Ren

    2016-03-01

    Attitude changes of a mobile magnetic gradient tensor system critically affect the precision of gradient measurements, thereby increasing ambiguity in target detection. This paper presents a rotational-invariant-based method for locating and identifying ferromagnetic targets. Firstly, the unit magnetic moment vector was derived from the geometrical invariant that the intermediate eigenvector of the magnetic gradient tensor is perpendicular to both the magnetic moment vector and the source-sensor displacement vector. Secondly, the unit source-sensor displacement vector was derived from the property that the angle between the magnetic moment vector and the source-sensor displacement is a rotational invariant. By introducing a displacement vector between two measurement points, the magnetic moment vector and the source-sensor displacement vector were theoretically derived. To cope with the measurement noise present in realistic detection applications, linear equations were formulated using invariants at several distinct measurement points, and least-squares solutions for the magnetic moment vector and the source-sensor displacement vector were obtained. Simulation results and a principle-verification experiment confirmed the correctness of the analytical method and the practicability of the least-squares approach.

  15. Potency backprojection

    NASA Astrophysics Data System (ADS)

    Okuwaki, R.; Kasahara, A.; Yagi, Y.

    2017-12-01

    The backprojection (BP) method has been one of the most powerful tools for tracking the seismic-wave sources of large and mega earthquakes. The BP method projects waveforms onto a possible source point by stacking them with the theoretical travel-time shifts between the source point and the stations. Following the BP method, the hybrid backprojection (HBP) method was developed to enhance the depth resolution of projected images and to mitigate spurious imaging of the depth phases, which are shortcomings of the BP method, by stacking cross-correlation functions of the observed waveforms and theoretically calculated Green's functions (GFs). The signal intensity of the BP/HBP image at a source point reflects how much of the observed wavefield was radiated from that point. Since the amplitude of the GF associated with the slip rate increases with depth, as the rigidity increases with depth, the intensity of the BP/HBP image inherently has a depth dependence. To make a direct comparison of the BP/HBP image with the corresponding slip distribution inferred from a waveform inversion, and to discuss the rupture properties along the fault drawn from the high- and low-frequency waveforms with the BP/HBP methods and the waveform inversion, respectively, it is desirable to have variants of the BP/HBP methods that directly image the potency-rate-density distribution. Here we propose new formulations of the BP/HBP methods that image the potency-rate-density distribution by introducing alternative normalizing factors into the conventional formulations. For the BP method, the observed waveform is normalized by the maximum amplitude of the P phase of the corresponding GF. For the HBP method, we normalize the cross-correlation function by the squared sum of the GF. The normalized waveforms or cross-correlation functions are then stacked over all stations to enhance the signal-to-noise ratio. We will present performance tests of the new formulations using synthetic waveforms and real data from the Mw 8.3 2015 Illapel, Chile earthquake, and further discuss the limitations of the new BP/HBP methods proposed in this study when they are used to explore the rupture properties of earthquakes.

  16. A Direction Finding Method with A 3-D Array Based on Aperture Synthesis

    NASA Astrophysics Data System (ADS)

    Li, Shiwen; Chen, Liangbing; Gao, Zhaozhao; Ma, Wenfeng

    2018-01-01

    Direction finding for electronic warfare applications should provide as wide a field of view as possible, but the maximum unambiguous field of view of conventional direction finding methods is a hemisphere: they cannot distinguish the direction of arrival of signals coming from the back lobe of the array. In this paper, a full 3-D direction finding method based on aperture synthesis radiometry is proposed. The model of the direction finding system is illustrated, and the fundamentals are presented. The relationship between the outputs of the measurements of a 3-D array and the 3-D power distribution of the point sources can be represented by a 3-D Fourier transform, so the 3-D power distribution of the point sources can be reconstructed by an inverse 3-D Fourier transform. To display the 3-D power distribution of the point sources conveniently, the whole spherical distribution is represented by two 2-D circular distribution images, one for the upper hemisphere and the other for the lower hemisphere. A numerical simulation is designed and conducted to demonstrate the feasibility of the method; the results show that the method correctly estimates arbitrary directions of arrival of signals in 3-D space.
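
    A minimal numeric sketch of the stated relationship, assuming the array correlations have already been gridded onto a regular 3-D grid (the gridding and hemisphere-display steps are omitted): one synthetic source is injected and recovered by an inverse 3-D FFT.

      import numpy as np

      N = 32
      u, v, w = np.meshgrid(*[np.arange(-N // 2, N // 2)] * 3, indexing="ij")
      s = np.array([0.25, -0.125, 0.375])    # direction cosines of one source (assumed)
      vis = np.exp(-2j * np.pi * (u * s[0] + v * s[1] + w * s[2]))
      power = np.fft.fftshift(np.fft.ifftn(np.fft.ifftshift(vis))).real
      print(np.unravel_index(power.argmax(), power.shape))   # peak bin encodes s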

  17. Acoustic emission source location in composite structure by Voronoi construction using geodesic curve evolution.

    PubMed

    Gangadharan, R; Prasanna, G; Bhat, M R; Murthy, C R L; Gopalakrishnan, S

    2009-11-01

    Conventional analytical/numerical methods employing the triangulation technique are suitable for locating an acoustic emission (AE) source in a planar structure without structural discontinuities. But these methods cannot be extended to structures with complicated geometry, and the problem is compounded if the material of the structure is anisotropic, warranting complex analytical velocity models. A geodesic approach using Voronoi construction is proposed in this work to locate the AE source in a composite structure. The approach is based on the fact that the wave takes the minimum-energy path to travel from the source to any other point in the connected domain. The geodesics are computed on the meshed surface of the structure using graph theory based on Dijkstra's algorithm. By virtually propagating the waves in reverse from the sensors along the geodesic paths and locating the first intersection point of these waves, one can obtain the AE source location. In this work, the geodesic approach is shown to be more suitable for a practicable source location solution in a composite structure with an arbitrary surface containing finite discontinuities. Experiments have been conducted on composite plate specimens of simple and complex geometry to validate this method.
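
    A minimal sketch of the graph-geodesic machinery on a toy mesh, using Dijkstra's algorithm as the abstract describes; the mesh, wave speed, arrival times, and the source-picking rule (the vertex where back-propagated wavefronts best coincide) are illustrative assumptions rather than the authors' implementation.

      import numpy as np
      from scipy.sparse import csr_matrix
      from scipy.sparse.csgraph import dijkstra

      # toy surface mesh as a weighted graph: edge weights = surface path lengths
      edges = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (0, 3): 2.5}
      row, col, wgt = zip(*[(i, j, d) for (i, j), d in edges.items()])
      graph = csr_matrix((wgt + wgt, (row + col, col + row)), shape=(4, 4))

      sensors = [0, 3]
      arrivals = np.array([0.8, 1.2])                  # AE arrival times (assumed)
      speed = 1.0                                      # assumed wave speed
      dist = dijkstra(graph, indices=sensors)          # geodesic distance maps
      mismatch = np.ptp(dist - speed * arrivals[:, None], axis=0)
      print(int(np.argmin(mismatch)))                  # vertex where wavefronts meet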

  18. Improving a maximum horizontal gradient algorithm to determine geological body boundaries and fault systems based on gravity data

    NASA Astrophysics Data System (ADS)

    Van Kha, Tran; Van Vuong, Hoang; Thanh, Do Duc; Hung, Duong Quoc; Anh, Le Duc

    2018-05-01

    The maximum horizontal gradient method was first proposed by Blakely and Simpson (1986) for determining the boundaries between geological bodies with different densities. The method involves the comparison of a center point with its eight nearest neighbors in four directions within each 3 × 3 calculation grid. The horizontal location and magnitude of the maximum values are found by interpolating a second-order polynomial through the trio of points provided that the magnitude of the middle point is greater than its two nearest neighbors in one direction. In theoretical models of multiple sources, however, the above condition does not allow the maximum horizontal locations to be fully located, and it could be difficult to correlate the edges of complicated sources. In this paper, the authors propose an additional condition to identify more maximum horizontal locations within the calculation grid. This additional condition will improve the method algorithm for interpreting the boundaries of magnetic and/or gravity sources. The improved algorithm was tested on gravity models and applied to gravity data for the Phu Khanh basin on the continental shelf of the East Vietnam Sea. The results show that the additional locations of the maximum horizontal gradient could be helpful for connecting the edges of complicated source bodies.
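
    As a concrete reference for the base algorithm being improved, the sketch below implements the Blakely-Simpson style test: a grid point is kept when its horizontal gradient magnitude exceeds both neighbors along one of four directions, with the sub-grid peak located by a three-point parabola. The paper's additional condition for recovering more maxima is not shown.

      import numpy as np

      def horizontal_gradient(g, dx=1.0, dy=1.0):
          gy, gx = np.gradient(g, dy, dx)
          return np.hypot(gx, gy)

      def maxima(h):
          pts = []
          dirs = [(0, 1), (1, 0), (1, 1), (1, -1)]     # E-W, N-S, two diagonals
          for i in range(1, h.shape[0] - 1):
              for j in range(1, h.shape[1] - 1):
                  for di, dj in dirs:
                      a, b, c = h[i - di, j - dj], h[i, j], h[i + di, j + dj]
                      if b > a and b > c:
                          off = 0.5 * (a - c) / (a - 2 * b + c)   # parabola vertex
                          pts.append((i + off * di, j + off * dj))
                          break
          return pts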

  19. SKYDOSE: A code for gamma skyshine calculations using the integral line-beam method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shultis, J.K.; Faw, R.E.; Brockhoff, R.C.

    1994-07-01

    SKYDOSE evaluates the skyshine dose from an isotropic, monoenergetic, point photon source collimated by three simple geometries: (1) a source in a silo; (2) a source behind an infinitely long, vertical, black wall; and (3) a source in a rectangular building. In all three geometries, an optional overhead shield may be specified. The source energy must be between 0.02 and 100 MeV (10 MeV for sources with an overhead shield). This is a user's manual; other references give more detail on the integral line-beam method used by SKYDOSE.

  20. Procedure for Separating Noise Sources in Measurements of Turbofan Engine Core Noise

    NASA Technical Reports Server (NTRS)

    Miles, Jeffrey Hilton

    2006-01-01

    The study of core noise from turbofan engines has become more important as noise from other sources, like the fan and jet, has been reduced. A multiple-microphone and acoustic source modeling method to separate correlated and uncorrelated sources has been developed. The auto- and cross-spectra in the frequency range below 1000 Hz are fitted with a noise propagation model based on a source couplet consisting of a single incoherent source with a single coherent source, or a source triplet consisting of a single incoherent source with two coherent point sources. Examples are presented using data from a Pratt & Whitney PW4098 turbofan engine. The method works well.
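
    A minimal sketch of the spectral bookkeeping behind such a separation, reduced to a two-microphone case: the magnitude-squared coherence splits one auto-spectrum into a part correlated with the other channel (the propagating coherent source) and an uncorrelated remainder. This is a simplification of the paper's couplet/triplet model fits, and the signals are synthetic.

      import numpy as np
      from scipy.signal import coherence, welch

      fs = 8192
      rng = np.random.default_rng(1)
      common = rng.normal(size=fs * 8)             # coherent (core-noise-like) part
      x = common + 0.5 * rng.normal(size=fs * 8)   # mic 1 = coherent + local noise
      y = np.roll(common, 5) + 0.5 * rng.normal(size=fs * 8)   # mic 2, delayed path

      f, coh = coherence(x, y, fs=fs, nperseg=1024)
      f, pyy = welch(y, fs=fs, nperseg=1024)
      coherent_power = coh * pyy                   # correlated (source) part at mic 2
      incoherent_power = (1 - coh) * pyy           # uncorrelated remainder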

  1. DEEP WIDEBAND SINGLE POINTINGS AND MOSAICS IN RADIO INTERFEROMETRY: HOW ACCURATELY DO WE RECONSTRUCT INTENSITIES AND SPECTRAL INDICES OF FAINT SOURCES?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rau, U.; Bhatnagar, S.; Owen, F. N., E-mail: rurvashi@nrao.edu

    Many deep wideband wide-field radio interferometric surveys are being designed to accurately measure intensities, spectral indices, and polarization properties of faint source populations. In this paper, we compare various wideband imaging methods to evaluate the accuracy to which intensities and spectral indices of sources close to the confusion limit can be reconstructed. We simulated a wideband single-pointing (C-array, L-Band (1–2 GHz)) and 46-pointing mosaic (D-array, C-Band (4–8 GHz)) JVLA observation using a realistic brightness distribution ranging from 1 μJy to 100 mJy and time-, frequency-, polarization-, and direction-dependent instrumental effects. The main results from these comparisons are (a) errors in the reconstructed intensities and spectral indices are larger for weaker sources even in the absence of simulated noise, (b) errors are systematically lower for joint reconstruction methods (such as Multi-Term Multi-Frequency-Synthesis (MT-MFS)) along with A-Projection for accurate primary beam correction, and (c) use of MT-MFS for image reconstruction eliminates Clean-bias (which is present otherwise). Auxiliary tests include solutions for deficiencies of data partitioning methods (e.g., the use of masks to remove clean bias and hybrid methods to remove sidelobes from sources left un-deconvolved), the effect of sources not at pixel centers, and the consequences of various other numerical approximations within software implementations. This paper also demonstrates the level of detail at which such simulations must be done in order to reflect reality, enable one to systematically identify specific reasons for every trend that is observed, and to estimate scientifically defensible imaging performance metrics and the associated computational complexity of the algorithms/analysis procedures.

  2. Light refocusing with up-scalable resonant waveguide gratings in confocal prolate spheroid arrangements

    NASA Astrophysics Data System (ADS)

    Quaranta, Giorgio; Basset, Guillaume; Benes, Zdenek; Martin, Olivier J. F.; Gallinet, Benjamin

    2018-01-01

    Resonant waveguide gratings (RWGs) are thin-film structures in which coupled modes interfere with the diffracted incoming wave and produce strong angular and spectral filtering. The combination of two finite-length, impedance-matched RWGs allows the creation of a passive beam steering element that is compatible with up-scalable fabrication processes. Here, we propose a design method to create large patterns of such elements able to filter, steer, and focus the light from one point source to another. The method is based on ellipsoidal mirrors: a system of confocal prolate spheroids is chosen whose two focal points are the source point and the observation point, respectively. This allows finding the proper orientation and position of each RWG element of the pattern such that the phase is constructively preserved at the observation point. The design techniques presented here could be implemented in a variety of systems where large-scale patterns are needed, such as optical security, multifocal or monochromatic lenses, biosensors, and see-through optical combiners for near-eye displays.

  3. A New Source Biasing Approach in ADVANTG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bevill, Aaron M; Mosher, Scott W

    2012-01-01

    The ADVANTG code has been developed at Oak Ridge National Laboratory to generate biased sources and weight window maps for MCNP using the CADIS and FW-CADIS methods. In preparation for an upcoming RSICC release, a new approach for generating a biased source has been developed. This improvement streamlines user input and improves reliability. Previous versions of ADVANTG generated the biased source from ADVANTG input, writing an entirely new general fixed-source definition (SDEF). Because volumetric sources were translated into SDEF format as a finite set of points, the user had to perform a convergence study to determine whether the number of source points used accurately represented the source region. Further, the large number of points that must be written in SDEF format made the MCNP input and output files excessively long and difficult to debug. ADVANTG now reads SDEF-format distributions and generates corresponding source biasing cards, eliminating the need for a convergence study. Many problems of interest use complicated source regions that are defined using cell rejection. In cell rejection, the source distribution in space is defined using an arbitrarily complex cell and a simple bounding region. Source positions are sampled within the bounding region but accepted only if they fall within the cell; otherwise, the position is resampled entirely. When biasing in space is applied to sources that use rejection sampling, current versions of MCNP do not account for the rejection in setting the source weight of histories, resulting in an 'unfair game'. This problem was circumvented in previous versions of ADVANTG by translating volumetric sources into a finite set of points, which does not alter the mean history weight (w̄). To use biasing parameters without otherwise modifying the original cell-rejection SDEF-format source, ADVANTG users now apply a correction factor for w̄ in post-processing. A stratified-random sampling approach in ADVANTG is under development to automatically report the correction factor with estimated uncertainty. This study demonstrates the use of ADVANTG's new source biasing method, including the application of w̄.

  4. Scaled SFS method for Lambertian surface 3D measurement under point source lighting.

    PubMed

    Ma, Long; Lyu, Yi; Pei, Xin; Hu, Yan Min; Sun, Feng Ming

    2018-05-28

    A Lambertian surface is a very important assumption in shape from shading (SFS) and is widely used in many measurement cases. In this paper, a novel scaled SFS method is developed to measure the shape of a Lambertian surface with dimensions. A more accurate light source model is investigated under the illumination of a simple point light source, and the relationship between the surface depth map and the recorded image grayscale is established by introducing the camera matrix into the model. Together with the constraints of brightness, smoothness and integrability, the surface shape with dimensions can be obtained by analyzing only one image using the scaled SFS method. Simulations show a close match between the simulated structures and the results; the reconstruction root mean square error (RMSE) is below 0.6 mm. A further experiment measuring the internal surface of a PVC tube yields an overall measurement error below 2%.

  5. Comparison of dew point temperature estimation methods in Southwestern Georgia

    Treesearch

    Marcus D. Williams; Scott L. Goodrick; Andrew Grundstein; Marshall Shepherd

    2015-01-01

    Recent upward trends in acres irrigated have been linked to increasing near-surface moisture. Unfortunately, stations with dew point data for monitoring near-surface moisture are sparse. Thus, models that estimate dew points from more readily observed data sources are useful. Daily average dew temperatures were estimated and evaluated at 14 stations in...

  6. Method and apparatus for millimeter-wave detection of thermal waves for materials evaluation

    DOEpatents

    Gopalsami, Nachappa; Raptis, Apostolos C.

    1991-01-01

    A method and apparatus for generating thermal waves in a sample and for measuring thermal inhomogeneities at subsurface levels using millimeter-wave radiometry. An intensity modulated heating source is oriented toward a narrow spot on the surface of a material sample and thermal radiation in a narrow volume of material around the spot is monitored using a millimeter-wave radiometer; the radiometer scans the sample point-by-point and a computer stores and displays in-phase and quadrature phase components of thermal radiations for each point on the scan. Alternatively, an intensity modulated heating source is oriented toward a relatively large surface area in a material sample and variations in thermal radiation within the full field of an antenna array are obtained using an aperture synthesis radiometer technique.

  7. TU-AB-BRC-11: Moving a GPU-OpenCL-Based Monte Carlo (MC) Dose Engine Towards Routine Clinical Use: Automatic Beam Commissioning and Efficient Source Sampling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tian, Z; Folkerts, M; Jiang, S

    Purpose: We have previously developed a GPU-OpenCL-based MC dose engine named goMC with a built-in analytical linac beam model. To move goMC towards routine clinical use, we have developed an automatic beam-commissioning method and an efficient source sampling strategy to facilitate dose calculations for real treatment plans. Methods: Our commissioning method automatically adjusts the relative weights among the sub-sources through an optimization process that minimizes the discrepancies between calculated dose and measurements. Six models built for Varian Truebeam linac photon beams (6MV, 10MV, 15MV, 18MV, 6MVFFF, 10MVFFF) were commissioned using measurement data acquired at our institution. To facilitate dose calculations for real treatment plans, we employed an inverse sampling method to efficiently incorporate MLC leaf sequencing into source sampling. Specifically, instead of sampling source particles control point by control point and rejecting the particles blocked by the MLC, we assigned a control-point index to each sampled source particle according to the MLC leaf-open duration of each control point at the pixel where the particle intersects the iso-center plane. Results: Our auto-commissioning method decreased the distance-to-agreement (DTA) of the depth dose in build-up regions by 36.2% on average, bringing it within 1 mm. Lateral profiles were better matched for all beams, with the biggest improvement found at 15MV, for which the root-mean-square difference was reduced from 1.44% to 0.50%. Maximum differences of output factors were reduced to less than 0.7% for all beams, with the largest decrease, from 1.70% to 0.37%, found at 10MVFFF. Our new sampling strategy was tested on a Head&Neck VMAT patient case. Achieving clinically acceptable accuracy, the new strategy could reduce the required history number by a factor of ∼2.8 at a given statistical uncertainty level and hence achieve a similar speed-up factor. Conclusion: Our studies have demonstrated the feasibility and effectiveness of our auto-commissioning approach and new efficient source sampling strategy, implying the potential of our GPU-based MC dose engine goMC for routine clinical use.
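
    A minimal sketch of the inverse sampling idea, reduced to the control-point dimension: a cumulative distribution over MLC leaf-open durations assigns each particle a control-point index with one binary search instead of sample-and-reject. The durations are illustrative, and the per-pixel dependence described above is omitted.

      import numpy as np

      open_duration = np.array([0.1, 0.4, 0.2, 0.3])   # per control point, assumed
      cdf = np.cumsum(open_duration) / open_duration.sum()

      u = np.random.default_rng(0).random(1_000_000)   # one uniform deviate per particle
      cp_index = np.searchsorted(cdf, u)               # control-point index per particle
      print(np.bincount(cp_index) / 1e6)               # ~ [0.1, 0.4, 0.2, 0.3]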

  8. Structured background grids for generation of unstructured grids by advancing front method

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar

    1991-01-01

    A new method of background grid construction is introduced for generation of unstructured tetrahedral grids using the advancing-front technique. Unlike the conventional triangular/tetrahedral background grids which are difficult to construct and usually inadequate in performance, the new method exploits the simplicity of uniform Cartesian meshes and provides grids of better quality. The approach is analogous to solving a steady-state heat conduction problem with discrete heat sources. The spacing parameters of grid points are distributed over the nodes of a Cartesian background grid by interpolating from a few prescribed sources and solving a Poisson equation. To increase the control over the grid point distribution, a directional clustering approach is used. The new method is convenient to use and provides better grid quality and flexibility. Sample results are presented to demonstrate the power of the method.
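
    A minimal sketch of the heat-conduction analogy under stated assumptions: prescribed spacing values act as fixed sources on a uniform Cartesian grid, and plain Jacobi relaxation (with periodic edges for brevity) diffuses the spacing parameter to all other nodes. Grid size, source values, and iteration count are illustrative, and the directional clustering step is omitted.

      import numpy as np

      spacing = np.full((64, 64), np.nan)
      spacing[10, 10], spacing[50, 40] = 0.1, 2.0       # prescribed source spacings
      fixed = ~np.isnan(spacing)
      u = np.where(fixed, spacing, 1.0)                 # initial guess elsewhere

      for _ in range(2000):                             # Jacobi relaxation
          avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                        + np.roll(u, 1, 1) + np.roll(u, -1, 1))
          u = np.where(fixed, u, avg)                   # keep sources pinned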

  9. A CMB foreground study in WMAP data: Extragalactic point sources and zodiacal light emission

    NASA Astrophysics Data System (ADS)

    Chen, Xi

    The Cosmic Microwave Background (CMB) radiation is the remnant heat from the Big Bang. It serves as a primary tool to understand the global properties, content and evolution of the universe. Since 2001, NASA's Wilkinson Microwave Anisotropy Probe (WMAP) satellite has been mapping the full sky anisotropy with unprecedented accuracy, precision and reliability. The CMB angular power spectrum calculated from the WMAP full sky maps not only enables accurate testing of cosmological models, but also places significant constraints on model parameters. The CMB signal in the WMAP sky maps is contaminated by microwave emission from the Milky Way and from extragalactic sources. Therefore, in order to use the maps reliably for cosmological studies, the foreground signals must be well understood and removed from the maps. This thesis focuses on the separation of two foreground contaminants from the WMAP maps: extragalactic point sources and zodiacal light emission. Extragalactic point sources constitute the most important foreground on small angular scales. Various methods have been applied to the WMAP single frequency maps to extract sources. However, due to the limited angular resolution of WMAP, it is possible to confuse positive CMB excursions with point sources or miss sources that are embedded in negative CMB fluctuations. We present a novel CMB-free source finding technique that utilizes the spectral difference between point sources and the CMB to form internal linear combinations of multifrequency maps that suppress the CMB and better reveal sources. When applied to the WMAP 41, 61 and 94 GHz maps, this technique has not only enabled detection of sources previously cataloged by independent methods, but has also revealed new sources. Without the noise contribution from the CMB, this method responds rapidly with integration time. The number of detections grows as t^0.72 in the two-band search and t^0.70 in the three-band search from one year to five years, in comparison to t^0.40 for the WMAP catalogs. Our source catalogs are a good supplement to the existing WMAP source catalogs, and the method itself is both complementary to and competitive with current source finding techniques for WMAP maps. Scattered light and thermal emission from the interplanetary dust (IPD) within our Solar System are major contributors to the diffuse sky brightness at most infrared wavelengths. For wavelengths longer than 3.5 μm, the thermal emission of the IPD dominates over scattering, and this emission is often referred to as the Zodiacal Light Emission (ZLE). To set a limit on the ZLE contribution to the WMAP data, we have performed a simultaneous fit of the yearly WMAP time-ordered data to the time variation of the ZLE predicted by the DIRBE IPD model (Kelsall et al. 1998) evaluated at 240 μm, plus ℓ = 1-4 CMB components. It is found that although this fitting procedure can successfully recover the CMB dipole to 0.5% accuracy, it is not sensitive enough to determine the ZLE signal or the other multipole moments very accurately.

  10. Loop Heat Pipe Operation Using Heat Source Temperature for Set Point Control

    NASA Technical Reports Server (NTRS)

    Ku, Jentung; Paiva, Kleber; Mantelli, Marcia

    2011-01-01

    The LHP operating temperature is governed by the saturation temperature of its reservoir. Controlling the reservoir saturation temperature is commonly accomplished by cold biasing the reservoir and using electrical heaters to provide the required control power. Using this method, the loop operating temperature can be controlled within +/- 0.5K. However, because of the thermal resistance that exists between the heat source and the LHP evaporator, the heat source temperature will vary with its heat output even if LHP operating temperature is kept constant. Since maintaining a constant heat source temperature is of most interest, a question often raised is whether the heat source temperature can be used for LHP set point temperature control. A test program with a miniature LHP has been carried out to investigate the effects on the LHP operation when the control temperature sensor is placed on the heat source instead of the reservoir. In these tests, the LHP reservoir is cold-biased and is heated by a control heater. Tests results show that it is feasible to use the heat source temperature for feedback control of the LHP operation. Using this method, the heat source temperature can be maintained within a tight range for moderate and high powers. At low powers, however, temperature oscillations may occur due to interactions among the reservoir control heater power, the heat source mass, and the heat output from the heat source. In addition, the heat source temperature could temporarily deviate from its set point during fast thermal transients. The implication is that more sophisticated feedback control algorithms need to be implemented for LHP transient operation when the heat source temperature is used for feedback control.

  11. Proceedings from the Workshop on Research Needs for Assessment and Management of Non-Point Air Emissions from Department of Defense Activities held in Research Triangle Park, North Carolina on 19-21 February 2008

    DTIC Science & Technology

    2008-10-01

    Chow, J.C. (2006). Feasibility of soil dust source apportionment by the pyrolysis-gas chromatography/mass spectrometry method. J. Air Waste Manage. ... receptor-oriented source apportionment models. • Develop monitoring methods to determine source and fence-line amounts of fugitive dust emissions for ... offsite impact, including evaluation with receptor-oriented source apportionment models.

  12. Estimation of source location and ground impedance using a hybrid multiple signal classification and Levenberg-Marquardt approach

    NASA Astrophysics Data System (ADS)

    Tam, Kai-Chung; Lau, Siu-Kit; Tang, Shiu-Keung

    2016-07-01

    A microphone array signal processing method for locating a stationary point source over a locally reactive ground and for estimating the ground impedance is examined in detail in the present study. A non-linear least squares approach using the Levenberg-Marquardt method is proposed to overcome the problem of unknown ground impedance. The multiple signal classification (MUSIC) method is used to give the initial estimate of the source location, while forward-backward spatial smoothing is adopted as a pre-processor for the source localization to minimize the effects of source coherence. The accuracy and robustness of the proposed signal processing method are examined. Results show that source localization in the horizontal direction by MUSIC is satisfactory; however, source coherence drastically reduces the accuracy of the source height estimate. The further application of the Levenberg-Marquardt method, with the results from MUSIC as initial inputs, significantly improves the accuracy of the source height estimation. The proposed method provides effective and robust estimation of the ground surface impedance.
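
    A minimal sketch of the refinement stage, assuming a simplified free-field-plus-image-source propagation model (the paper uses a locally reactive ground model): the source position and a complex reflection coefficient are fitted to synthetic array data with Levenberg-Marquardt, starting from a coarse estimate such as MUSIC's. The geometry, frequency, and true parameters are illustrative.

      import numpy as np
      from scipy.optimize import least_squares

      mics = np.array([[0, 0, 1.0], [0.5, 0, 1.0], [1.0, 0, 1.0]])  # assumed geometry
      k = 2 * np.pi * 1000 / 343.0                                   # wavenumber at 1 kHz

      def model(p):
          src = p[:3]; q = p[3] + 1j * p[4]        # source position, reflection coeff
          img = src * np.array([1, 1, -1])         # image source below the ground
          r1 = np.linalg.norm(mics - src, axis=1)
          r2 = np.linalg.norm(mics - img, axis=1)
          return np.exp(1j*k*r1)/r1 + q * np.exp(1j*k*r2)/r2

      data = model(np.array([2.0, 1.0, 0.8, 0.3, -0.2]))             # synthetic truth

      def resid(p):
          d = model(p) - data
          return np.concatenate([d.real, d.imag])

      p0 = np.array([1.8, 1.2, 1.0, 0.0, 0.0])                       # e.g., from MUSIC
      print(least_squares(resid, p0, method="lm").x)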

  13. Laser fusion neutron source employing compression with short pulse lasers

    DOEpatents

    Sefcik, Joseph A; Wilks, Scott C

    2013-11-05

    A method and system for achieving fusion is provided. The method includes providing a laser source that generates a laser beam and a target containing an embedded capsule filled with DT gas. The laser beam is directed at the target, where it helps create an electron beam. The electron beam heats the capsule, the DT gas, and the area surrounding the capsule until equilibrium is reached. At the equilibrium point, the capsule implodes, generating enough pressure to ignite the DT gas and fuse its nuclei.

  14. Nonpoint and Point Sources of Nitrogen in Major Watersheds of the United States

    USGS Publications Warehouse

    Puckett, Larry J.

    1994-01-01

    Estimates of nonpoint and point sources of nitrogen were made for 107 watersheds located in the U.S. Geological Survey's National Water-Quality Assessment Program study units throughout the conterminous United States. The proportions of nitrogen originating from fertilizer, manure, atmospheric deposition, sewage, and industrial sources were found to vary with climate, hydrologic conditions, land use, population, and physiography. Fertilizer sources of nitrogen are proportionally greater in agricultural areas of the West and the Midwest than in other parts of the Nation. Animal manure contributes large proportions of nitrogen in the South and parts of the Northeast. Atmospheric deposition of nitrogen is generally greatest in areas of greatest precipitation, such as the Northeast. Point sources (sewage and industrial) generally are predominant in watersheds near cities, where they may account for large proportions of the nitrogen in streams. The transport of nitrogen in streams increases as amounts of precipitation and runoff increase and is greatest in the Northeastern United States. Because no single nonpoint nitrogen source is dominant everywhere, approaches to control nitrogen must vary throughout the Nation. Watershed-based approaches to understanding nonpoint and point sources of contamination, as used by the National Water-Quality Assessment Program, will aid water-quality and environmental managers to devise methods to reduce nitrogen pollution.

  15. A Method Based on Wavelet Transforms for Source Detection in Photon-counting Detector Images. II. Application to ROSAT PSPC Images

    NASA Astrophysics Data System (ADS)

    Damiani, F.; Maggio, A.; Micela, G.; Sciortino, S.

    1997-07-01

    We apply our wavelet-based X-ray source detection algorithm, presented in a companion paper, to the specific case of images taken with the ROSAT PSPC detector. Such images are characterized by the presence of detector "ribs," a strongly varying point-spread function, and vignetting, so their analysis provides a challenge for any detection algorithm. First, we apply the algorithm to simulated images of a flat background, as seen with the PSPC, in order to calibrate the number of spurious detections as a function of significance threshold and to ascertain that the spatial distribution of spurious detections is uniform, i.e., unaffected by the ribs; this goal was achieved using the exposure map in the detection procedure. Then, we analyze simulations of PSPC images with a realistic number of point sources; the results are used to determine the efficiency of source detection and the accuracy of output quantities such as source count rate, size, and position, upon comparison with the input source data. It turns out that sources with 10 photons or fewer may be confidently detected near the image center in medium-length (~10⁴ s), background-limited PSPC exposures. The positions of sources detected near the image center (off-axis angles < 15') are accurate to within a few arcseconds. Output count rates and sizes are in agreement with the input quantities, within a factor of 2 in 90% of the cases. The errors on position, count rate, and size increase with off-axis angle and for detections of lower significance. We have also checked that the upper limits computed with our method are consistent with the count rates of undetected input sources. Finally, we have tested the algorithm by applying it to various actual PSPC images, among the most challenging for automated detection procedures (crowded fields, extended sources, and nonuniform diffuse emission). The performance of our method on these images is satisfactory and outperforms other current X-ray detection techniques, such as those employed to produce the MPE and WGA catalogs of PSPC sources, in terms of both detection reliability and efficiency. We have also investigated the theoretical limit for point-source detection, with the result that even sources with only 2-3 photons may be reliably detected using an efficient method in images with sufficiently high resolution and low background.
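
    A minimal sketch of the generic wavelet detection idea, assuming a Mexican-hat kernel, a flat Poisson background, and an illustrative significance cut; the paper's algorithm additionally calibrates thresholds on background-only simulations and handles the position-dependent PSF, exposure map, and upper limits.

      import numpy as np
      from scipy.ndimage import gaussian_laplace, maximum_filter

      rng = np.random.default_rng(2)
      img = rng.poisson(0.5, size=(256, 256)).astype(float)   # flat background
      img[100, 120] += 12                                     # faint injected source

      scale = 2.0                                             # ~ PSF sigma in pixels
      wt = -gaussian_laplace(img, sigma=scale)                # Mexican-hat response
      thresh = wt.mean() + 5 * wt.std()                       # significance cut, assumed
      peaks = (wt == maximum_filter(wt, size=5)) & (wt > thresh)
      print(np.argwhere(peaks))                               # detected source pixels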

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Torcellini, Paul A.; Bonnema, Eric; Goldwasser, David

    Building energy consumption can only be measured at the site or at the point of utility interconnection with a building. Often, to evaluate the total energy impact, this site-based energy consumption is translated into source energy, that is, the energy at the point of fuel extraction. Consistent with this approach, the U.S. Department of Energy's (DOE) definition of zero energy buildings uses source energy as the metric to account for energy losses from the extraction, transformation, and delivery of energy. Other organizations, as well, use source energy to characterize the energy impacts. Four methods of making the conversion from site energy to source energy were investigated in the context of the DOE definition of zero energy buildings. These methods were evaluated based on three guiding principles--improve energy efficiency, reduce and stabilize power demand, and use power from nonrenewable energy sources as efficiently as possible. This study examines relative trends between strategies as they are implemented on very low-energy buildings to achieve zero energy. A typical office building was modeled, and variations to this model were performed. The photovoltaic output required to create a zero energy building was calculated. Trends were examined with these variations to study the impacts of the calculation method on the building's ability to achieve zero energy status. The paper highlights the different methods and gives conclusions on the advantages and disadvantages of the methods studied.
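
    As a minimal illustration of the site-to-source conversion itself, each fuel's site energy is scaled by a source-energy factor and the results are summed; the factors and numbers below are placeholders, not the values examined in the study.

    ```python
    # Hypothetical site-to-source conversion; all factors are illustrative only.
    site_energy_kbtu = {"electricity": 150_000, "natural_gas": 80_000}
    source_factor = {"electricity": 3.0, "natural_gas": 1.05}  # assumed values

    source_energy = sum(site_energy_kbtu[f] * source_factor[f]
                        for f in site_energy_kbtu)
    pv_export = 120_000 * source_factor["electricity"]  # assumed PV credit rule
    print("net source energy:", source_energy - pv_export, "kBtu")
    ```

    Under the DOE definition discussed above, the building reaches zero energy when the net source energy is at or below zero; the different conversion methods amount to different choices of these factors and crediting rules.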

  17. Selective Listening Point Audio Based on Blind Signal Separation and Stereophonic Technology

    NASA Astrophysics Data System (ADS)

    Niwa, Kenta; Nishino, Takanori; Takeda, Kazuya

    A sound field reproduction method is proposed that uses blind source separation and a head-related transfer function. In the proposed system, multichannel acoustic signals captured at distant microphones are decomposed into a set of location/signal pairs of virtual sound sources based on frequency-domain independent component analysis. After estimating the locations and the signals of the virtual sources, the spatial sound at the selected listening point is constructed by convolving the controlled acoustic transfer functions with each signal. In experiments, a sound field made by six sound sources is captured using 48 distant microphones and decomposed into sets of virtual sound sources. Since subjective evaluation shows no significant difference between natural and reconstructed sound when six virtual sources are used, the effectiveness of the decomposition algorithm as well as of the virtual source representation is confirmed.

  18. An international point source outbreak of typhoid fever: a European collaborative investigation*

    PubMed Central

    Stanwell-Smith, R. E.; Ward, L. R.

    1986-01-01

    A point source outbreak of Salmonella typhi, degraded Vi-strain 22, affecting 32 British visitors to Kos, Greece, in 1983 was attributed by a case-control study to the consumption of a salad at one hotel. This represents the first major outbreak of typhoid fever in which a salad has been identified as the vehicle. The source of the infection was probably a carrier among the hotel staff. The investigation demonstrates the importance of national surveillance, international cooperation, and epidemiological methods in the investigation and control of major outbreaks of infection. PMID:3488842

  19. A Comparative Analysis of Vibrio cholerae Contamination in Point-of-Drinking and Source Water in a Low-Income Urban Community, Bangladesh.

    PubMed

    Ferdous, Jannatul; Sultana, Rebeca; Rashid, Ridwan B; Tasnimuzzaman, Md; Nordland, Andreas; Begum, Anowara; Jensen, Peter K M

    2018-01-01

    Bangladesh is a cholera endemic country with a population at high risk of cholera. Toxigenic and non-toxigenic Vibrio cholerae (V. cholerae) can cause cholera and cholera-like diarrheal illness and outbreaks. Drinking water is one of the primary routes of cholera transmission in Bangladesh. The aim of this study was to conduct a comparative assessment of the presence of V. cholerae between point-of-drinking water and source water, and to investigate the variability of the virulence profile using molecular methods, in a densely populated low-income settlement of Dhaka, Bangladesh. Water samples were collected and tested for V. cholerae from "point-of-drinking" and "source" in 477 study households in routine visits at 6-week intervals over a period of 14 months. We studied the virulence profiles of V. cholerae-positive water samples using 22 different virulence gene markers present in toxigenic O1/O139 and non-O1/O139 V. cholerae using polymerase chain reaction (PCR). A total of 1,463 water samples were collected, with 1,082 samples from point-of-drinking water in 388 households and 381 samples from 66 water sources. V. cholerae was detected in 10% of point-of-drinking water samples and in 9% of source water samples. Twenty-three percent of households and 38% of the sources were positive for V. cholerae in at least one visit. Samples collected from point-of-drinking and linked sources in a 7 day interval showed significantly higher odds (P < 0.05) of V. cholerae presence in point-of-drinking water compared to source water [OR = 17.24 (95% CI = 7.14-42.89)]. Based on the 7 day interval data, 53% (17/32) of source water samples were negative for V. cholerae while linked point-of-drinking water samples were positive. There were significantly higher odds (p < 0.05) of the presence of V. cholerae O1 [OR = 9.13 (95% CI = 2.85-29.26)] and V. cholerae O139 [OR = 4.73 (95% CI = 1.19-18.79)] in source water samples than in point-of-drinking water samples. Contamination of water at the point-of-drinking is thus less likely to depend on contamination at the water source. Hygiene education interventions and programs should focus on water at the point-of-drinking, including repeated cleaning of drinking vessels, which is of paramount importance in preventing cholera.
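
    The odds ratios quoted above follow the standard 2x2-table estimate OR = (a·d)/(b·c) with a Wald 95% confidence interval on the log scale; a small sketch with made-up counts (not the study's data):

    ```python
    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        """OR and Wald 95% CI for the 2x2 table [[a, b], [c, d]]."""
        or_ = (a * d) / (b * c)
        se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
        lo, hi = (math.exp(math.log(or_) + s * z * se) for s in (-1, 1))
        return or_, lo, hi

    print(odds_ratio_ci(25, 7, 10, 48))  # illustrative counts only
    ```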

  20. Measurement of Phased Array Point Spread Functions for Use with Beamforming

    NASA Technical Reports Server (NTRS)

    Bahr, Chris; Zawodny, Nikolas S.; Bertolucci, Brandon; Woolwine, Kyle; Liu, Fei; Li, Juan; Sheplak, Mark; Cattafesta, Louis

    2011-01-01

    Microphone arrays can be used to localize and estimate the strengths of acoustic sources present in a region of interest. However, the array measurement of a region, or beam map, is not an accurate representation of the acoustic field in that region. The true acoustic field is convolved with the array's sampling response, or point spread function (PSF). Many techniques exist to remove the PSF's effect on the beam map via deconvolution. Currently these methods use a theoretical estimate of the array point spread function and perhaps account for installation offsets via determination of the microphone locations. This methodology fails to account for any reflections or scattering in the measurement setup and still requires both microphone magnitude and phase calibration, as well as a separate shear layer correction in an open-jet facility. The research presented seeks to investigate direct measurement of the array's PSF using a non-intrusive acoustic point source generated by a pulsed laser system. Experimental PSFs of the array are computed for different conditions to evaluate features such as shift-invariance, shear layers, and model presence. Results show that experimental measurements trend with theory with regard to source offset. The source shows expected behavior due to shear layer refraction when observed in a flow, and application of a measured PSF to NACA 0012 aeroacoustic trailing-edge noise data shows a promising alternative to a classic shear layer correction method.
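
    For orientation, PSF removal by deconvolution can be sketched with a simple Wiener-style filter; this stands in for the more specialized schemes (e.g., DAMAS- or CLEAN-type methods) used in array work, and it assumes a shift-invariant PSF sampled on the same grid as the beam map, with k an assumed noise-regularization constant.

    ```python
    import numpy as np

    def wiener_deconvolve(beam_map, psf, k=1e-3):
        """Deblur a beam map with a measured, shift-invariant PSF
        (psf assumed centered and on the same grid as beam_map)."""
        H = np.fft.fft2(np.fft.ifftshift(psf))
        G = np.fft.fft2(beam_map)
        F = G * np.conj(H) / (np.abs(H) ** 2 + k)  # regularized inverse filter
        return np.real(np.fft.ifft2(F))
    ```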

  1. Study on Huizhou architecture of point cloud registration based on optimized ICP algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Runmei; Wu, Yulu; Zhang, Guangbin; Zhou, Wei; Tao, Yuqian

    2018-03-01

    In view of the fact that current point cloud registration software has high hardware requirements and a heavy workload, requires multiple interactive definitions, and that the source code of the software with better processing results is not open, a two-step registration method based on normal vector distribution features and a coarse-feature-based iterative closest point (ICP) algorithm is proposed in this paper. This method combines the fast point feature histogram (FPFH) algorithm, defines the adjacency region of the point cloud and the calculation model of the distribution of normal vectors, sets up a local coordinate system for each key point, and obtains the transformation matrix to finish rough registration; the rough registration results of the two stations are then accurately registered using the ICP algorithm. Experimental results show that, compared with the traditional ICP algorithm, the method used in this paper has obvious time and precision advantages for large point clouds.
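
    The refinement stage of such a pipeline is plain ICP, which is compact enough to sketch: find closest-point correspondences with a k-d tree, estimate the rigid transform in closed form (Kabsch/SVD), apply, and repeat. This is a generic sketch, not the paper's optimized two-step method; convergence tolerances and outlier rejection are omitted.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def icp(src, dst, iters=30):
        """Rigidly align point set src (N,3) to dst (M,3); returns R, t."""
        R_tot, t_tot = np.eye(3), np.zeros(3)
        cur, tree = src.copy(), cKDTree(dst)
        for _ in range(iters):
            _, idx = tree.query(cur)              # closest-point matches
            matched = dst[idx]
            mu_s, mu_d = cur.mean(0), matched.mean(0)
            H = (cur - mu_s).T @ (matched - mu_d)
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ D @ U.T                    # reflection-safe Kabsch rotation
            t = mu_d - R @ mu_s
            cur = cur @ R.T + t
            R_tot, t_tot = R @ R_tot, R @ t_tot + t
        return R_tot, t_tot
    ```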

  2. The Use of Terrestrial Laser Scanning for Determining the Driver’s Field of Vision

    PubMed Central

    Zemánek, Tomáš; Cibulka, Miloš; Skoupil, Jaromír

    2017-01-01

    Terrestrial laser scanning (TLS) is currently one of the most progressively developed methods of obtaining information about objects and phenomena. This paper assesses the possibilities of TLS for determining the driver's field of vision when operating agricultural and forest machines with movable and immovable components, in comparison to the method of using two point light sources to create shadow images according to ISO (International Organization for Standardization) 5721-1. Using the TLS method represents a minimum time saving of 55% or more, according to the project complexity. The values of shading ascertained using the shadow-cast method with point light sources are generally overestimated, and more distorted for small cabin structural components. The disadvantage of the TLS method is the scanner's sensitivity to a soiled or scratched cabin windscreen and to glass transparency impaired by heavy tinting. PMID:28902177

  3. An adaptive grid scheme using the boundary element method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Munipalli, R.; Anderson, D.A.

    1996-09-01

    A technique to solve the Poisson grid generation equations by Green's function related methods has been proposed, with the source terms being purely position dependent. The use of distributed singularities in the flow domain coupled with the boundary element method (BEM) formulation is presented in this paper as a natural extension of the Green's function method. This scheme greatly simplifies the adaption process. The BEM reduces the dimensionality of the given problem by one. Internal grid-point placement can be achieved for a given boundary distribution by adding continuous and discrete source terms in the BEM formulation. A distribution of vortex doublets is suggested as a means of controlling grid-point placement and grid-line orientation. Examples for sample adaption problems are presented and discussed. 15 refs., 20 figs.

  4. Equivalent charge source model based iterative maximum neighbor weight for sparse EEG source localization.

    PubMed

    Xu, Peng; Tian, Yin; Lei, Xu; Hu, Xiao; Yao, Dezhong

    2008-12-01

    How to localize the neural electric activities within the brain effectively and precisely from scalp electroencephalogram (EEG) recordings is a critical issue for current studies in clinical neurology and cognitive neuroscience. In this paper, based on the charge source model and the iterative re-weighted strategy, we propose a new maximum-neighbor-weight-based iterative sparse source imaging method, termed CMOSS (Charge source model based Maximum neighbOr weight Sparse Solution). Different from the weight used in the focal underdetermined system solver (FOCUSS), where the weight for each point in the discrete solution space is independently updated in iterations, the newly designed weight for each point in each iteration is determined by the source solution of the last iteration at both the point and its neighbors. Using such a new weight, the next iteration has a bigger chance of rectifying the local source location bias existing in the previous iteration's solution. Simulation studies with comparison to FOCUSS and LORETA for various source configurations were conducted on a realistic 3-shell head model, and the results confirmed the validity of CMOSS for sparse EEG source localization. Finally, CMOSS was applied to localize sources elicited in a visual stimuli experiment, and the result was consistent with those source areas involved in visual processing reported in previous studies.
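
    The flavor of the re-weighting can be conveyed with a FOCUSS-style sketch in which, following the idea described above, each point's weight is built from the previous solution at the point and at its neighbors (here simply the maximum absolute value, an assumed choice). The lead-field matrix, the neighbor lists, and the fixed iteration count are placeholders.

    ```python
    import numpy as np

    def neighbor_weighted_focuss(L, b, neighbors, iters=20, eps=1e-8):
        """Sparse x with L @ x ~ b; L: (n_sensors, n_points) lead field,
        neighbors[i]: non-empty index array of points adjacent to point i."""
        x = np.ones(L.shape[1])
        for _ in range(iters):
            # weight from the last solution at each point AND its neighbors
            w = np.array([max(abs(x[i]), np.abs(x[neighbors[i]]).max())
                          for i in range(x.size)]) + eps
            Lw = L * w                        # scale columns by the weights
            x = w * (np.linalg.pinv(Lw) @ b)  # re-weighted minimum-norm step
        return x
    ```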

  5. Detection of spatial fluctuations of non-point source fecal pollution in coral reef surrounding waters in southwestern Puerto Rico using PCR-based assays.

    PubMed

    Bonkosky, M; Hernández-Delgado, E A; Sandoz, B; Robledo, I E; Norat-Ramírez, J; Mattei, H

    2009-01-01

    Human fecal contamination of coral reefs is a major cause of concern. Conventional methods used to monitor microbial water quality cannot be used to discriminate between different fecal pollution sources. Fecal coliforms, enterococci, and human-specific Bacteroides (HF183, HF134), general Bacteroides-Prevotella (GB32), and Clostridium coccoides group (CP) 16S rDNA PCR assays were used to test for the presence of non-point source fecal contamination across the southwestern Puerto Rico shelf. Inshore waters were highly turbid, consistently receiving fecal pollution from variable sources, and showing the highest frequency of positive molecular marker signals. Signals were also detected in offshore waters that were in compliance with existing microbiological quality regulations. Phylogenetic analysis showed that most isolates were of human fecal origin. The geographic extent of non-point source fecal pollution was large and impacted extensive coral reef systems. This could have deleterious long-term impacts on public health, local fisheries, and tourism potential if not adequately addressed.

  6. Interferometric superlocalization of two incoherent optical point sources.

    PubMed

    Nair, Ranjith; Tsang, Mankei

    2016-02-22

    A novel interferometric method - SLIVER (Super Localization by Image inVERsion interferometry) - is proposed for estimating the separation of two incoherent point sources with a mean squared error that does not deteriorate as the sources are brought closer. The essential component of the interferometer is an image inversion device that inverts the field in the transverse plane about the optical axis, assumed to pass through the centroid of the sources. The performance of the device is analyzed using the Cramér-Rao bound applied to the statistics of spatially-unresolved photon counting using photon number-resolving and on-off detectors. The analysis is supported by Monte-Carlo simulations of the maximum likelihood estimator for the source separation, demonstrating the superlocalization effect for separations well below that set by the Rayleigh criterion. Simulations indicating the robustness of SLIVER to mismatch between the optical axis and the centroid are also presented. The results are valid for any imaging system with a circularly symmetric point-spread function.

  7. Deriving the Contribution of Blazars to the Fermi-LAT Extragalactic γ-ray Background at E > 10 GeV with Efficiency Corrections and Photon Statistics

    NASA Astrophysics Data System (ADS)

    Di Mauro, M.; Manconi, S.; Zechlin, H.-S.; Ajello, M.; Charles, E.; Donato, F.

    2018-04-01

    The Fermi Large Area Telescope (LAT) Collaboration has recently released the Third Catalog of Hard Fermi-LAT Sources (3FHL), which contains 1556 sources detected above 10 GeV with seven years of Pass 8 data. Building upon the 3FHL results, we investigate the flux distribution of sources at high Galactic latitudes (|b| > 20°), which are mostly blazars. We use two complementary techniques: (1) a source-detection efficiency correction method and (2) an analysis of pixel photon count statistics with the one-point probability distribution function (1pPDF). With the first method, using realistic Monte Carlo simulations of the γ-ray sky, we calculate the efficiency of the LAT to detect point sources. This enables us to find the intrinsic source-count distribution at photon fluxes down to 7.5 × 10^-12 ph cm^-2 s^-1. With this method, we detect a flux break at (3.5 ± 0.4) × 10^-11 ph cm^-2 s^-1 with a significance of at least 5.4σ. The power-law indexes of the source-count distribution above and below the break are 2.09 ± 0.04 and 1.07 ± 0.27, respectively. This result is confirmed with the 1pPDF method, which has a sensitivity reach of ∼10^-11 ph cm^-2 s^-1. Integrating the derived source-count distribution above the sensitivity of our analysis, we find that (42 ± 8)% of the extragalactic γ-ray background originates from blazars.
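
    The quoted indices describe a broken power law for the differential source counts; writing S_b for the break flux, the fitted form is approximately

    \[
    \frac{dN}{dS} \propto
    \begin{cases}
    S^{-2.09}, & S > S_b,\\
    S^{-1.07}, & S \le S_b,
    \end{cases}
    \qquad S_b \approx 3.5 \times 10^{-11}\ \mathrm{ph\,cm^{-2}\,s^{-1}},
    \]

    and the blazar contribution follows from integrating S\,dN/dS above the analysis sensitivity.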

  8. An optimized inverse modelling method for determining the location and strength of a point source releasing airborne material in urban environment

    NASA Astrophysics Data System (ADS)

    Efthimiou, George C.; Kovalets, Ivan V.; Venetsanos, Alexandros; Andronopoulos, Spyros; Argyropoulos, Christos D.; Kakosimos, Konstantinos

    2017-12-01

    An improved inverse modelling method to estimate the location and the emission rate of an unknown stationary point source of passive atmospheric pollutant in a complex urban geometry is incorporated in the Computational Fluid Dynamics code ADREA-HF and presented in this paper. The key improvement in relation to the previous version of the method lies in a two-step segregated approach. At first, only the source coordinates are analysed, using a correlation function of measured and calculated concentrations. In the second step, the source rate is identified by minimizing a quadratic cost function. The validation of the new algorithm is performed by simulating the MUST wind tunnel experiment. A grid-independent flow field solution is first attained by applying successive refinements of the computational mesh, and the final wind flow is validated against the measurements quantitatively and qualitatively. The old and new versions of the source term estimation method are tested on a coarse and a fine mesh. The new method appeared to be more robust, giving satisfactory estimations of source location and emission rate on both grids. The performance of the old version of the method varied between failure and success and appeared to be sensitive to the selection of the model error magnitude that needs to be inserted in its quadratic cost function. The performance of the method also depends on the number and the placement of sensors constituting the measurement network. Of significant interest for the practical application of the method in urban settings is the number of concentration sensors required to obtain a "satisfactory" determination of the source. The probability of obtaining a satisfactory solution - according to specified criteria - by the new method has been assessed as a function of the number of sensors that constitute the measurement network.
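
    The two-step logic lends itself to a compact sketch: scan candidate source locations for the best correlation between measured and modeled concentrations, then recover the rate by least squares. The unit-rate sensor responses (one forward run per candidate location) are assumed precomputed; this shows the structure of the approach, not the ADREA-HF implementation.

    ```python
    import numpy as np

    def estimate_source(C_meas, C_unit):
        """C_meas: (n_sensors,) measurements; C_unit: (n_candidates,
        n_sensors) modeled concentrations for a unit emission rate."""
        # Step 1: location maximizing the measurement/model correlation.
        corr = [np.corrcoef(C_meas, c)[0, 1] for c in C_unit]
        i = int(np.argmax(corr))
        # Step 2: rate minimizing the misfit ||C_meas - q*C_unit[i]||^2.
        q = (C_unit[i] @ C_meas) / (C_unit[i] @ C_unit[i])
        return i, q
    ```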

  9. Geolocation and Pointing Accuracy Analysis for the WindSat Sensor

    NASA Technical Reports Server (NTRS)

    Meissner, Thomas; Wentz, Frank J.; Purdy, William E.; Gaiser, Peter W.; Poe, Gene; Uliana, Enzo A.

    2006-01-01

    Geolocation and pointing accuracy analyses of the WindSat flight data are presented. The two topics were intertwined in the flight data analysis and will be addressed together. WindSat has no unusual geolocation requirements relative to other sensors, but its beam pointing knowledge accuracy is especially critical to support accurate polarimetric radiometry. Pointing accuracy was improved and verified using geolocation analysis in conjunction with scan bias analysis. Two methods were needed to properly identify and differentiate between data time-tagging and pointing knowledge errors. Matchups comparing coastlines indicated in imagery data with their known geographic locations were used to identify geolocation errors. These coastline matchups showed possible pointing errors with ambiguities as to the true source of the errors. Scan bias analysis of U, the third Stokes parameter, and of vertical and horizontal polarizations provided measurement of pointing offsets, resolving ambiguities in the coastline matchup analysis. Several geolocation and pointing bias sources were incrementally eliminated, resulting in pointing knowledge and geolocation accuracy that met all design requirements.

  10. Seismic interferometry by crosscorrelation and by multidimensional deconvolution: a systematic comparison

    NASA Astrophysics Data System (ADS)

    Wapenaar, Kees; van der Neut, Joost; Ruigrok, Elmer; Draganov, Deyan; Hunziker, Jürg; Slob, Evert; Thorbecke, Jan; Snieder, Roel

    2011-06-01

    Seismic interferometry, also known as Green's function retrieval by crosscorrelation, has a wide range of applications, ranging from surface-wave tomography using ambient noise, to creating virtual sources for improved reflection seismology. Despite its successful applications, the crosscorrelation approach also has its limitations. The main underlying assumptions are that the medium is lossless and that the wavefield is equipartitioned. These assumptions are in practice often violated: the medium of interest is often illuminated from one side only, the sources may be irregularly distributed, and losses may be significant. These limitations may partly be overcome by reformulating seismic interferometry as a multidimensional deconvolution (MDD) process. We present a systematic analysis of seismic interferometry by crosscorrelation and by MDD. We show that for the non-ideal situations mentioned above, the correlation function is proportional to a Green's function with a blurred source. The source blurring is quantified by a so-called interferometric point-spread function which, like the correlation function, can be derived from the observed data (i.e. without the need to know the sources and the medium). The source of the Green's function obtained by the correlation method can be deblurred by deconvolving the correlation function for the point-spread function. This is the essence of seismic interferometry by MDD. We illustrate the crosscorrelation and MDD methods for controlled-source and passive-data applications with numerical examples and discuss the advantages and limitations of both methods.
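
    In compact per-frequency notation, the relation described above reads: the correlation function C is the sought Green's function G with a blurred source, the blurring being the interferometric point-spread function Γ, and MDD deconvolves for it (a paraphrase of the abstract's formulation, with S the receiver boundary):

    \[
    C(\mathbf{x}_A, \mathbf{x}_B, \omega) \approx \int_S G(\mathbf{x}_A, \mathbf{x}, \omega)\, \Gamma(\mathbf{x}, \mathbf{x}_B, \omega)\, \mathrm{d}\mathbf{x}
    \quad \Longrightarrow \quad
    \hat{G} = C\, \Gamma^{-1},
    \]

    with the inversion of Γ carried out in a regularized, multidimensional (matrix) sense rather than as a per-trace division.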

  11. Photogrammetric Method and Software for Stream Planform Identification

    NASA Astrophysics Data System (ADS)

    Stonedahl, S. H.; Stonedahl, F.; Lohberg, M. M.; Lusk, K.; Miller, D.

    2013-12-01

    Accurately characterizing the planform of a stream is important for many purposes, including recording measurement and sampling locations, monitoring change due to erosion or volumetric discharge, and spatial modeling of stream processes. While expensive surveying equipment or high resolution aerial photography can be used to obtain planform data, our research focused on developing a close-range photogrammetric method (and accompanying free/open-source software) to serve as a cost-effective alternative. This method involves securing and floating a wooden square frame on the stream surface at several locations, taking photographs from numerous angles at each location, and then post-processing and merging data from these photos using the corners of the square for reference points, unit scale, and perspective correction. For our test field site we chose a ~35m reach along Black Hawk Creek in Sunderbruch Park (Davenport, IA), a small, slow-moving stream with overhanging trees. To quantify error we measured 88 distances between 30 marked control points along the reach. We calculated error by comparing these 'ground truth' distances to the corresponding distances extracted from our photogrammetric method. We placed the square at three locations along our reach and photographed it from multiple angles. The square corners, visible control points, and visible stream outline were hand-marked in these photos using the GIMP (open-source image editor). We wrote an open-source GUI in Java (hosted on GitHub), which allows the user to load marked-up photos, designate square corners and label control points. The GUI also extracts the marked pixel coordinates from the images. We also wrote several scripts (currently in MATLAB) that correct the pixel coordinates for radial distortion using Brown's lens distortion model, correct for perspective by forcing the four square corner pixels to form a parallelogram in 3-space, and rotate the points in order to correctly orient all photos of the same square location. Planform data from multiple photos (and multiple square locations) are combined using weighting functions that mitigate the error stemming from the markup-process, imperfect camera calibration, etc. We have used our (beta) software to mark and process over 100 photos, yielding an average error of only 1.5% relative to our 88 measured lengths. Next we plan to translate the MATLAB scripts into Python and release their source code, at which point only free software, consumer-grade digital cameras, and inexpensive building materials will be needed for others to replicate this method at new field sites. [Figure: three sample photographs of the square with the created planform and control points.]
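
    The radial-distortion step uses Brown's model; a minimal sketch with two radial coefficients (no tangential terms), applying the polynomial directly to the marked pixel coordinates as a first-order correction (exact removal would invert the model iteratively). The coefficients and distortion center are assumed known from camera calibration.

    ```python
    import numpy as np

    def undistort(xy, k1, k2, center):
        """Brown radial model: p_corr = c + (p - c)*(1 + k1*r^2 + k2*r^4)."""
        d = xy - center                       # offsets from distortion center
        r2 = (d ** 2).sum(axis=1, keepdims=True)
        return center + d * (1 + k1 * r2 + k2 * r2 ** 2)
    ```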

  12. Probabilistic Analysis of Earthquake-Led Water Contamination: A Case of Sichuan, China

    NASA Astrophysics Data System (ADS)

    Yang, Yan; Li, Lin; Benjamin Zhan, F.; Zhuang, Yanhua

    2016-06-01

    The objective of this paper is to evaluate earthquake-induced point source and non-point source water pollution, under the seismic hazard of 10% probability of exceedance in 50 years, and with the minimum value of the water quality standard in Sichuan, China. The soil conservation service curve number method of calculating the runoff depth in a single rainfall event, combined with the seismic damage index, was applied to estimate the potential degree of non-point source water pollution. To estimate the potential impact of point source water pollution, a comprehensive water pollution evaluation framework is constructed using a combination of Water Quality Index and Seismic Damage Index methods. The four key findings of this paper are: (1) The water catchment that has the highest factory concentration does not have the highest risk of non-point source water contamination induced by the outbreak of a potential earthquake. (2) The water catchments that have the highest numbers of cumulative water pollutant types are typically located in the southwestern parts of Sichuan, where the main river basins in the region flow through. (3) The most common pollutants in the sample factories studied are COD and NH3-N, which are found in all catchments. The least common pollutant is pathogens, found in the W1 catchment, which has the best rating in the water quality index. (4) Using the water quality index as a standardization parameter, parallel comparisons are made among the 16 water catchments. Only catchment W1 reaches level II water quality status, with a rating of moderately polluted, in events of earthquake-induced water contamination. All other areas suffer from severe water contamination with multiple pollution sources. The results from the data model are significant for urban planning commissions and businesses in strategically choosing their factory locations in order to minimize potential hazardous impact during an earthquake.
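
    The SCS curve number step turns event rainfall into runoff depth through the familiar relations S = 1000/CN - 10 and Q = (P - 0.2S)^2 / (P + 0.8S) for P > 0.2S (zero otherwise), in inches; a direct sketch:

    ```python
    def scs_runoff_depth(P, CN):
        """SCS-CN runoff depth (inches) for event rainfall P (inches)."""
        S = 1000.0 / CN - 10.0   # potential maximum retention
        Ia = 0.2 * S             # initial abstraction
        return 0.0 if P <= Ia else (P - Ia) ** 2 / (P + 0.8 * S)

    print(scs_runoff_depth(3.0, CN=80))  # 1.25 in for a 3 in storm at CN=80
    ```

    In the framework above, this runoff depth would then be combined with the seismic damage index to grade the non-point source pollution potential of each catchment.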

  13. THE CHANDRA COSMOS SURVEY. I. OVERVIEW AND POINT SOURCE CATALOG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elvis, Martin; Civano, Francesca; Aldcroft, T. L.

    2009-09-01

    The Chandra COSMOS Survey (C-COSMOS) is a large, 1.8 Ms, Chandra program that has imaged the central 0.5 deg^2 of the COSMOS field (centered at 10^h, +02°) with an effective exposure of ~160 ks, and an outer 0.4 deg^2 area with an effective exposure of ~80 ks. The limiting source detection depths are 1.9 × 10^-16 erg cm^-2 s^-1 in the soft (0.5-2 keV) band, 7.3 × 10^-16 erg cm^-2 s^-1 in the hard (2-10 keV) band, and 5.7 × 10^-16 erg cm^-2 s^-1 in the full (0.5-10 keV) band. Here we describe the strategy, design, and execution of the C-COSMOS survey, and present the catalog of 1761 point sources detected at a probability of being spurious of <2 × 10^-5 (1655 in the full, 1340 in the soft, and 1017 in the hard bands). By using a grid of 36 heavily (~50%) overlapping pointing positions with the ACIS-I imager, a remarkably uniform (±12%) exposure across the inner 0.5 deg^2 field was obtained, leading to a sharply defined lower flux limit. The widely different point-spread functions obtained in each exposure at each point in the field required a novel source detection method because of the overlapping tiling strategy, which is described in a companion paper. This method produced reliable sources down to 7-12 counts, as verified by the resulting logN-logS curve, with subarcsecond positions, enabling optical and infrared identifications of virtually all sources, as reported in a second companion paper. The full catalog is described here in detail and is available online.

  15. Mapping algorithm for freeform construction using non-ideal light sources

    NASA Astrophysics Data System (ADS)

    Li, Chen; Michaelis, D.; Schreiber, P.; Dick, L.; Bräuer, A.

    2015-09-01

    Using conventional mapping algorithms for the construction of illumination freeform optics, arbitrary target patterns can be obtained for idealized sources, e.g. collimated light or point sources. Each freeform surface element generates an image point at the target, and the light intensity of an image point corresponds to the area of the freeform surface element that generates it. For sources with a pronounced extension and ray divergence, e.g. an LED at a small source-freeform distance, the image points are blurred, and the blurred patterns may differ from point to point. Besides, due to Fresnel losses and vignetting, the relationship between the light intensity of image points and the area of freeform surface elements becomes complicated. These individual light distributions of each freeform element are taken into account in a mapping algorithm. To this end, steepest-descent procedures are used to adapt the mapping goal: a structured target pattern for an optics system with an ideal source is computed by applying corresponding linear optimization matrices. A special weighting factor and smoothing factor are included in the procedures to achieve certain edge conditions and to ensure the manufacturability of the freeform surface. The corresponding linear optimization matrices, which are the lighting distribution patterns of each of the freeform surface elements, are obtained by conventional raytracing with a realistic source. Nontrivial source geometries, like LED irregularities due to bonding or source fine structures, and complex ray divergence behavior can easily be considered. Additionally, Fresnel losses, vignetting and even stray light are taken into account. After optimization iterations with a realistic source, the initial mapping goal can be achieved by the optics system providing a structured target pattern with an ideal source. The algorithm is applied to several design examples. A few simple tasks are presented to discuss the abilities and limitations of this method. Also presented is a homogeneous LED-illumination system design in which, with a strongly tilted incident direction, a homogeneous distribution is achieved with a rather compact optics system and short working distance, applying a relatively large LED source. It is shown that the lighting distribution patterns from the freeform surface elements can differ significantly from one another. The generation of a structured target pattern, applying the weighting factor and smoothing factor, is discussed. Finally, freeform designs for much more complex sources, like clusters of LED sources, are presented.

  16. [Optic method of searching for acupuncture points and channels].

    PubMed

    Gertsik, G Ia; Zmievskoĭ, G N; Ivantsov, V I; Sang Min Li; Iu Byiung Kim; Gil Von Iun

    2001-01-01

    A procedure is proposed to search for acupuncture points and channels (APC) by space-sensitive recording of optical radiation diffusely reflected by surface (dermal and hypodermal) tissues of the body. For this purpose, the body surface is probed by low-intensity infrared radiation from a laser or noncoherent (light-emitting diodes) source by using a fiber-optic multichannel sensor. It is shown that it is most advisable to apply sources at wavelengths of 840-850 and 1260-1300 nm.

  17. Appraisal of an Array TEM Method in Detecting a Mined-Out Area Beneath a Conductive Layer

    NASA Astrophysics Data System (ADS)

    Li, Hai; Xue, Guo-qiang; Zhou, Nan-nan; Chen, Wei-ying

    2015-10-01

    The transient electromagnetic method has been extensively used for the detection of mined-out areas in China for the past few years. In cases where the mined-out area is overlain by a conductive layer, the detection of the target layer is difficult with a traditional loop-source TEM method. In order to detect the target layer in this condition, this paper presents a newly developed array TEM method, which uses a grounded wire source. The underground current density distribution and the responses of the grounded-wire-source TEM configuration are modeled to demonstrate that the target layer is detectable in this condition. The 1D OCCAM inversion routine is applied to the synthetic single-station data and the common-midpoint gather. The result reveals that the electric-source TEM method is capable of recovering the resistive target layer beneath the conductive overburden. By contrast, the conductive target layer cannot be recovered unless the distance between the target layer and the conductive overburden is large. Compared with the inversion result of the single-station data, the inversion of the common-midpoint gather can better recover the resistivity of the target layer. Finally, a case study illustrates that the array TEM method is successfully applied in recovering a water-filled mined-out area beneath a conductive overburden.

  18. Sonar Imaging of Elastic Fluid-Filled Cylindrical Shells.

    NASA Astrophysics Data System (ADS)

    Dodd, Stirling Scott

    1995-01-01

    Previously, a method of describing spherical acoustic waves in cylindrical coordinates was applied to the problem of point source scattering by an elastic infinite fluid-filled cylindrical shell (S. Dodd and C. Loeffler, J. Acoust. Soc. Am. 97, 3284(A) (1995)). This method is applied to numerically model monostatic oblique-incidence scattering from a truncated cylinder by a narrow-beam high-frequency imaging sonar. The narrow-beam solution results from integrating the point source solution over the spatial extent of a line source and line receiver. The cylinder truncation is treated by the method of images, and assumes that the reflection coefficient at the truncation is unity. The scattering form functions, calculated using this method, are applied as filters to a narrow-bandwidth, high-ka pulse to find the time domain scattering response. The time domain pulses are further processed and displayed in the form of a sonar image. These images compare favorably to experimentally obtained images (G. Kaduchak and C. Loeffler, J. Acoust. Soc. Am. 97, 3289(A) (1995)). The impact of the s_0 and a_0 Lamb waves is vividly apparent in the images.

  19. Resolving the structure of the Galactic foreground using Herschel measurements and the Kriging technique

    NASA Astrophysics Data System (ADS)

    Pinter, S.; Bagoly, Z.; Balázs, L. G.; Horvath, I.; Racz, I. I.; Zahorecz, S.; Tóth, L. V.

    2018-05-01

    Investigating the distant extragalactic Universe requires subtraction of the Galactic foreground. One of the major difficulties in deriving the fine structure of the Galactic foreground is the embedded foreground and background point sources appearing in the given fields. This is especially so in the infrared. We report our study subtracting point sources from Herschel images with Kriging, an interpolation method in which the interpolated values are modelled by a Gaussian process governed by prior covariances. Using the Kriging method on Herschel multi-wavelength observations, the structure of the Galactic foreground can be studied at much higher resolution than previously, leading in the end to a better foreground subtraction.
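
    At its core, the interpolation described here models the map values as a Gaussian process with a prior covariance function and predicts each masked (point-source) pixel by solving the covariance system; a simple-kriging sketch with an assumed squared-exponential covariance:

    ```python
    import numpy as np

    def simple_krige(X, y, X_new, length=1.0, sill=1.0, nugget=1e-6):
        """Predict values at X_new from samples (X, y); the squared-
        exponential covariance and its parameters are assumptions."""
        def cov(A, B):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return sill * np.exp(-0.5 * d2 / length ** 2)
        K = cov(X, X) + nugget * np.eye(len(X))
        w = np.linalg.solve(K, cov(X, X_new))  # kriging weights
        return w.T @ y
    ```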

  20. An iterative method for obtaining the optimum lightning location on a spherical surface

    NASA Technical Reports Server (NTRS)

    Chao, Gao; Qiming, MA

    1991-01-01

    A brief introduction to the basic principles of an eigen method used to obtain the optimum source location of lightning is presented. The location of the optimum source is obtained by using multiple direction finders (DF's) on a spherical surface. An improvement of this method, which takes the distance of source-DF's as a constant, is presented. It is pointed out that using a weight factor of signal strength is not the most ideal method because of the inexact inverse signal strength-distance relation and the inaccurate signal amplitude. An iterative calculation method is presented using the distance from the source to the DF as a weight factor. This improved method has higher accuracy and needs only a little more calculation time. Some computer simulations for a 4DF system are presented to show the improvement of location through use of the iterative method.
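
    The planar analogue of this weighting scheme is easy to sketch: each direction finder i at position p_i reports a bearing, the source estimate minimizes the weighted squared perpendicular distances to the bearing lines, and the weights are refreshed from the current source-DF distances (w_i = 1/d_i^2 is an assumed form of the distance weighting). Spherical geometry, which the paper treats, is ignored in this toy version.

    ```python
    import numpy as np

    def locate(p, theta, iters=10):
        """p: (n,2) DF positions; theta: (n,) bearings (radians from north)."""
        n = np.stack([np.cos(theta), -np.sin(theta)], axis=1)  # line normals
        x = p.mean(axis=0)                     # initial guess: DF centroid
        for _ in range(iters):
            d2 = ((x - p) ** 2).sum(axis=1)
            w = 1.0 / np.maximum(d2, 1e-9)     # distance-based weights
            c = (n * p).sum(axis=1)            # n_i . p_i per DF
            A = (w[:, None] * n).T @ n         # 2x2 normal equations
            x = np.linalg.solve(A, (w * c) @ n)
        return x
    ```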

  1. Deep Wideband Single Pointings and Mosaics in Radio Interferometry: How Accurately Do We Reconstruct Intensities and Spectral Indices of Faint Sources?

    NASA Astrophysics Data System (ADS)

    Rau, U.; Bhatnagar, S.; Owen, F. N.

    2016-11-01

    Many deep wideband wide-field radio interferometric surveys are being designed to accurately measure intensities, spectral indices, and polarization properties of faint source populations. In this paper, we compare various wideband imaging methods to evaluate the accuracy to which intensities and spectral indices of sources close to the confusion limit can be reconstructed. We simulated a wideband single-pointing (C-array, L-Band (1-2 GHz)) and 46-pointing mosaic (D-array, C-Band (4-8 GHz)) JVLA observation using a realistic brightness distribution ranging from 1 μJy to 100 mJy and time-, frequency-, polarization-, and direction-dependent instrumental effects. The main results from these comparisons are (a) errors in the reconstructed intensities and spectral indices are larger for weaker sources even in the absence of simulated noise, (b) errors are systematically lower for joint reconstruction methods (such as Multi-Term Multi-Frequency-Synthesis (MT-MFS)) along with A-Projection for accurate primary beam correction, and (c) use of MT-MFS for image reconstruction eliminates Clean-bias (which is present otherwise). Auxiliary tests include solutions for deficiencies of data partitioning methods (e.g., the use of masks to remove clean bias and hybrid methods to remove sidelobes from sources left un-deconvolved), the effect of sources not at pixel centers, and the consequences of various other numerical approximations within software implementations. This paper also demonstrates the level of detail at which such simulations must be done in order to reflect reality, enable one to systematically identify specific reasons for every trend that is observed, and to estimate scientifically defensible imaging performance metrics and the associated computational complexity of the algorithms/analysis procedures. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.

  2. Estimation of nutrient discharge from the Yangtze River to the East China Sea and the identification of nutrient sources.

    PubMed

    Tong, Yindong; Bu, Xiaoge; Chen, Junyue; Zhou, Feng; Chen, Long; Liu, Maodian; Tan, Xin; Yu, Tao; Zhang, Wei; Mi, Zhaorong; Ma, Lekuan; Wang, Xuejun; Ni, Jing

    2017-01-05

    Based on a time-series dataset and the mass balance method, the contributions of various sources to the nutrient discharges from the Yangtze River to the East China Sea are identified. The results indicate that the nutrient concentrations vary considerably among different sections of the Yangtze River. Non-point sources are an important source of nutrients to the Yangtze River, contributing about 36% and 63% of the nitrogen and phosphorus discharged into the East China Sea, respectively. Nutrient inputs from non-point sources vary among the sections of the Yangtze River, and the contributions of non-point sources increase from upstream to downstream. Considering the rice growing patterns in the Yangtze River Basin, the synchrony of rice tillering and the wet seasons might be an important cause of the high nutrient discharge from the non-point sources. Based on our calculations, a reduction of 0.99 Tg per year in total nitrogen discharges from the Yangtze River would be needed to limit the occurrences of harmful algal blooms in the East China Sea to 15 times per year. The extensive construction of sewage treatment plants in urban areas may have only a limited effect on reducing the occurrences of harmful algal blooms in the future.

  3. Characterizing Sorghum Panicles using 3D Point Clouds

    NASA Astrophysics Data System (ADS)

    Lonesome, M.; Popescu, S. C.; Horne, D. W.; Pugh, N. A.; Rooney, W.

    2017-12-01

    To address the demands of population growth and the impacts of global climate change, plant breeders must increase crop yield through genetic improvement. However, plant phenotyping, the characterization of a plant's physical attributes, remains a primary bottleneck in modern crop improvement programs. 3D point clouds generated from terrestrial laser scanning (TLS) and unmanned aerial system (UAS) based structure from motion (SfM) are a promising data source for increasing the efficiency of screening plant material in breeding programs. This study develops and evaluates methods for characterizing sorghum (Sorghum bicolor) panicles (heads) in field plots from both TLS and UAS-based SfM point clouds. The TLS point cloud over an experimental sorghum field at the Texas A&M farm in Burleson County, TX, was collected using a FARO Focus X330 3D laser scanner. The SfM point cloud was generated from UAS imagery captured using a Phantom 3 Professional UAS at 10 m altitude and 85% image overlap. The panicle detection method applies point cloud reflectance, height, and point density attributes characteristic of sorghum panicles to detect them and estimate their dimensions (panicle length and width) through image classification and clustering procedures. We compare the derived panicle counts and panicle sizes with field-based and manually digitized measurements in selected plots and study the strengths and limitations of each data source for sorghum panicle characterization.
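
    One way to realize the detection step described above is to filter the cloud on height and reflectance and cluster what remains; the sketch below uses assumed thresholds and DBSCAN as the clustering procedure, which only stands in for the study's actual classification and clustering settings.

    ```python
    import numpy as np
    from sklearn.cluster import DBSCAN

    def panicle_candidates(xyz, reflectance, z_min=0.8, r_min=0.4):
        """Keep high, bright points (assumed panicle signature), cluster them,
        and report (length, width) per cluster from its bounding extents."""
        keep = (xyz[:, 2] > z_min) & (reflectance > r_min)
        pts = xyz[keep]
        labels = DBSCAN(eps=0.08, min_samples=25).fit_predict(pts)
        return [(np.ptp(pts[labels == k][:, 2]),                 # length
                 np.ptp(pts[labels == k][:, :2], axis=0).max())  # width
                for k in set(labels) if k != -1]
    ```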

  4. Source counting in MEG neuroimaging

    NASA Astrophysics Data System (ADS)

    Lei, Tianhu; Dell, John; Magee, Ralphy; Roberts, Timothy P. L.

    2009-02-01

    Magnetoencephalography (MEG) is a multi-channel, functional imaging technique. It measures the magnetic field produced by the primary electric currents inside the brain via a sensor array composed of a large number of superconducting quantum interference devices. The measurements are then used to estimate the locations, strengths, and orientations of these electric currents. This magnetic source imaging technique encompasses a great variety of signal processing and modeling techniques, which include inverse-problem, MUltiple SIgnal Classification (MUSIC), beamforming (BF), and independent component analysis (ICA) methods. A key problem with the inverse-problem, MUSIC, and ICA methods is that the number of sources must be known a priori. Although the BF method scans the source space on a point-to-point basis, the selection of peaks as sources is finally made by subjective thresholding. In practice, expert data analysts often select results based on physiological plausibility. This paper presents an eigenstructure approach for source number detection in MEG neuroimaging. By sorting the eigenvalues of the estimated covariance matrix of the acquired MEG data, the measured data space is partitioned into signal and noise subspaces. The partition is implemented by utilizing information theoretic criteria. The order of the signal subspace gives an estimate of the number of sources. The approach does not refer to any model or hypothesis and is hence an entirely data-led operation. It possesses a clear physical interpretation and an efficient computation procedure. The theoretical derivation of this method and the results obtained using real MEG data are included to demonstrate their agreement and the promise of the proposed approach.
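
    The eigenstructure criterion outlined above is commonly implemented with the Wax-Kailath MDL rule: for each candidate order k, compare the geometric and arithmetic means of the smallest p - k covariance eigenvalues and add a complexity penalty; the minimizing k estimates the source count. A sketch (p channels, N snapshots; the criterion itself is standard, its use here is illustrative):

    ```python
    import numpy as np

    def mdl_source_count(X):
        """X: (p, N) sensor data matrix; returns the estimated source number."""
        p, N = X.shape
        lam = np.sort(np.linalg.eigvalsh(X @ X.conj().T / N))[::-1]
        crit = []
        for k in range(p):
            tail = lam[k:]                     # the p-k smallest eigenvalues
            geo = np.exp(np.mean(np.log(tail)))
            crit.append(-N * (p - k) * np.log(geo / tail.mean())
                        + 0.5 * k * (2 * p - k) * np.log(N))
        return int(np.argmin(crit))
    ```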

  5. A pollutant load hierarchical allocation method integrated in an environmental capacity management system for Zhushan Bay, Taihu Lake.

    PubMed

    Liang, Shidong; Jia, Haifeng; Yang, Cong; Melching, Charles; Yuan, Yongping

    2015-11-15

    An environmental capacity management (ECM) system was developed to help practically implement a Total Maximum Daily Load (TMDL) for a key bay in a highly eutrophic lake in China. The ECM system consists of a simulation platform for pollutant load calculation and a pollutant load hierarchical allocation (PLHA) system. The simulation platform was developed by linking the Environmental Fluid Dynamics Code (EFDC) and the Water Quality Analysis Simulation Program (WASP). In the PLHA, pollutant loads are allocated top-down in several levels based on characteristics of the pollutant sources. Different allocation methods can be used for the different levels, with the advantages of each method combined over the entire allocation. Zhushan Bay of Taihu Lake, one of the most eutrophic lakes in China, was selected as a case study. The allowable loads of total nitrogen, total phosphorus, ammonia, and chemical oxygen demand were found to be 2122.2, 94.9, 1230.4, and 5260.0 t·yr^-1, respectively. The PLHA for the case study consists of 5 levels. At level 0, loads are allocated to those from lakeshore direct drainage, atmospheric deposition, internal release, and tributary inflows. At level 1, the loads allocated to tributary inflows are allocated to the 3 tributaries. At level 2, the loads allocated to one inflow tributary are allocated to upstream areas and local sources along the tributary. At level 3, the loads allocated to local sources are allocated to the point and non-point sources from different towns. At level 4, the loads allocated to non-point sources in each town are allocated to different villages. Compared with traditional forms of pollutant load allocation, the PLHA can combine the advantages of different methods that put different priority weights on equity and efficiency, and the PLHA is easy for stakeholders to understand and more flexible to adjust when applied in practical cases.

  6. Sampling Singular and Aggregate Point Sources of Carbon Dioxide from Space Using OCO-2

    NASA Astrophysics Data System (ADS)

    Schwandner, F. M.; Gunson, M. R.; Eldering, A.; Miller, C. E.; Nguyen, H.; Osterman, G. B.; Taylor, T.; O'Dell, C.; Carn, S. A.; Kahn, B. H.; Verhulst, K. R.; Crisp, D.; Pieri, D. C.; Linick, J.; Yuen, K.; Sanchez, R. M.; Ashok, M.

    2016-12-01

    Anthropogenic carbon dioxide (CO2) sources increasingly tip the balance between natural carbon sources and sinks. Space-borne measurements offer opportunities to detect and analyze point source emission signals anywhere on Earth. Singular continuous point source plumes from power plants or volcanoes turbulently mix into their proximal background fields. In contrast, plumes of aggregate point sources such as cities, and transportation or fossil fuel distribution networks, mix into each other and may therefore result in broader and more persistent excess signals of total column averaged CO2 (XCO2). NASA's first satellite dedicated to atmospheric CO2 observation, the Orbiting Carbon Observatory-2 (OCO-2), launched in July 2014 and now leads the afternoon constellation of satellites (A-Train). While continuously collecting measurements in eight footprints across a narrow (<10 km) swath, it occasionally cross-cuts coincident emission plumes. For singular point sources like volcanoes and coal-fired power plants, we have developed OCO-2 data discovery tools and a proxy detection method for plumes using SO2-sensitive TIR imaging data (ASTER). This approach offers a path toward automating plume detections with subsequent matching and mining of OCO-2 data. We found several distinct singular source CO2 signals. For aggregate point sources, we investigated whether OCO-2's multi-sounding swath observing geometry can reveal intra-urban spatial emission structures in the observed variability of XCO2 data. OCO-2 data demonstrate that we can detect localized excess XCO2 signals of 2 to 6 ppm against suburban and rural backgrounds. Compared to single-shot GOSAT soundings, which detected urban/rural XCO2 differences in megacities (Kort et al., 2012), the OCO-2 swath geometry opens up the path to future capabilities enabling urban characterization of greenhouse gases using hundreds of soundings over a city at each satellite overpass.

  7. A submerged singularity method for calculating potential flow velocities at arbitrary near-field points

    NASA Technical Reports Server (NTRS)

    Maskew, B.

    1976-01-01

    A discrete singularity method has been developed for calculating the potential flow around two-dimensional airfoils. The objective was to calculate velocities at any arbitrary point in the flow field, including points that approach the airfoil surface. That objective was achieved and is demonstrated here on a Joukowski airfoil. The method used combined vortices and sources "submerged" a small distance below the airfoil surface and incorporated a near-field subvortex technique developed earlier. When a velocity calculation point approached the airfoil surface, the number of discrete singularities effectively increased (but only locally) to keep the point just outside the error region of the submerged singularity discretization. The method could be extended to three dimensions, and should improve nonlinear methods, which calculate interference effects between multiple wings, and which include the effects of force-free trailing vortex sheets. The capability demonstrated here would extend the scope of such calculations to allow the close approach of wings and vortex sheets (or vortices).
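
    The building block of any such method is the induced velocity of each submerged singularity; for 2D potential flow the standard kernels for a point source of strength Q and a point vortex of circulation Γ at z0 are summed over all singularities. The sketch below shows only these kernels, not the paper's panel layout or subvortex refinement.

    ```python
    import numpy as np

    def induced_velocity(z, z_src, Q, z_vtx, G):
        """Complex velocity u - i*v at points z (complex coordinates) from
        2D point sources (strengths Q) and vortices (circulations G)."""
        w = np.zeros_like(z, dtype=complex)
        for z0, q in zip(z_src, Q):
            w += q / (2 * np.pi * (z - z0))        # source kernel
        for z0, g in zip(z_vtx, G):
            w += -1j * g / (2 * np.pi * (z - z0))  # vortex kernel
        return w
    ```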

  8. Novel fusion for hybrid optical/microcomputed tomography imaging based on natural light surface reconstruction and iterated closest point

    NASA Astrophysics Data System (ADS)

    Ning, Nannan; Tian, Jie; Liu, Xia; Deng, Kexin; Wu, Ping; Wang, Bo; Wang, Kun; Ma, Xibo

    2014-02-01

    In mathematical terms, optical molecular imaging techniques, including bioluminescence tomography (BLT), fluorescence tomography (FMT) and Cerenkov luminescence tomography (CLT), are concerned with a similar inverse source problem. They all involve the reconstruction of the 3D location of single or multiple internal luminescent/fluorescent sources based on the 3D surface flux distribution. To achieve that, an accurate fusion between 2D luminescent/fluorescent images and 3D structural images, which may be acquired from micro-CT, MRI or beam scanning, is extremely critical. However, the absence of a universal method that can effectively convert 2D optical information into 3D makes accurate fusion challenging. In this study, to improve the fusion accuracy, a new fusion method for dual-modality tomography (luminescence/fluorescence and micro-CT) based on natural light surface reconstruction (NLSR) and iterated closest point (ICP) is presented. It consists of an octree structure, an exact visual hull from marching cubes, and ICP. Different from conventional limited-projection methods, it performs 360° free-space registration and utilizes more luminescence/fluorescence distribution information from unlimited multi-orientation 2D optical images. A mouse-mimicking phantom (one XPM-2 Phantom Light Source, XENOGEN Corporation) and an in-vivo BALB/C mouse with one implanted luminescent light source were used to evaluate the performance of the new fusion method. Compared with conventional fusion methods, the average error at preset markers was improved by 0.3 and 0.2 pixels with the new method, respectively. After running the same 3D internal light source reconstruction algorithm on the BALB/C mouse data, the distance error between the actual and reconstructed internal source was decreased by 0.19 mm.

  9. Computer-Generated Microwave Holograms.

    ERIC Educational Resources Information Center

    Leming, Charles W.; Hastings, Orestes Patterson, III

    1980-01-01

    Described is the phasor method of superposition of waves. The intensity pattern from a system of microwave sources is calculated point by point on a plane corresponding to a film emulsion, and then printed and directly converted to a hologram for 3-cm microwaves. Calculations, construction, and viewing of holograms are included. (Author/DS)
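
    The phasor method amounts to summing a complex exponential from each source at every film-plane point and recording the squared magnitude; a sketch for 3-cm microwaves with two illustrative sources:

    ```python
    import numpy as np

    wavelength = 0.03                       # 3-cm microwaves, in meters
    k = 2 * np.pi / wavelength

    sources = np.array([[0.0, 0.05, 0.3],   # (x, y, z) positions, meters
                        [0.0, -0.05, 0.3]])

    # "film" plane z = 0, sampled on a grid
    x, y = np.meshgrid(np.linspace(-0.2, 0.2, 200),
                       np.linspace(-0.2, 0.2, 200))
    field = np.zeros_like(x, dtype=complex)
    for sx, sy, sz in sources:
        r = np.sqrt((x - sx) ** 2 + (y - sy) ** 2 + sz ** 2)
        field += np.exp(1j * k * r) / r     # spherical-wave phasor

    intensity = np.abs(field) ** 2          # printed point by point
    ```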

  10. Transient pressure analysis of fractured well in bi-zonal gas reservoirs

    NASA Astrophysics Data System (ADS)

    Zhao, Yu-Long; Zhang, Lie-Hui; Liu, Yong-hui; Hu, Shu-Yong; Liu, Qi-Guo

    2015-05-01

    For a hydraulically fractured well, evaluating the properties of the fracture and the formation is a difficult task, and conventional methods are complex to apply, especially for partially penetrating fractured wells. Although source functions are powerful tools for analyzing the transient pressure of wells with complex structures, corresponding reports for gas reservoirs are rare. In this paper, the continuous point source functions in anisotropic reservoirs are derived on the basis of source function theory, the Laplace transform method and the Duhamel principle. By applying the construction method, the continuous point source functions in a bi-zonal gas reservoir with closed upper and lower boundaries are obtained. Subsequently, the physical models and transient pressure solutions are developed for fully and partially penetrating fractured vertical wells in this reservoir. Type curves of dimensionless pseudo-pressure and its derivative as functions of dimensionless time are plotted using a numerical inversion algorithm, and the flow periods and sensitivity factors are analyzed. The source functions and fractured-well solutions have both theoretical and practical applications in well test interpretation for such gas reservoirs, especially for wells with a stimulated reservoir volume created by massive hydraulic fracturing in unconventional gas reservoirs, which can often be described with a composite model.
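
    The type curves above come from inverting Laplace-space solutions numerically. The Gaver-Stehfest algorithm sketched below is the customary choice in well-test analysis, though the paper's exact inversion algorithm is not specified here; the helper names are assumptions.

        import numpy as np
        from math import factorial

        def stehfest_weights(N=12):
            """Stehfest coefficients V_i for an even N."""
            V = np.zeros(N)
            for i in range(1, N + 1):
                s = 0.0
                for k in range((i + 1) // 2, min(i, N // 2) + 1):
                    s += (k ** (N // 2) * factorial(2 * k)
                          / (factorial(N // 2 - k) * factorial(k)
                             * factorial(k - 1) * factorial(i - k)
                             * factorial(2 * k - i)))
                V[i - 1] = (-1) ** (i + N // 2) * s
            return V

        def stehfest_invert(F, t, N=12):
            """Approximate f(t) given a Laplace-space function F(s)."""
            V = stehfest_weights(N)
            a = np.log(2.0) / t
            return a * sum(V[i] * F((i + 1) * a) for i in range(N))

        # sanity check: F(s) = 1/(s+1) inverts to f(t) = exp(-t)
        print(stehfest_invert(lambda s: 1.0 / (s + 1.0), 1.0))   # ~0.3679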

  11. Sources of spurious force oscillations from an immersed boundary method for moving-body problems

    NASA Astrophysics Data System (ADS)

    Lee, Jongho; Kim, Jungwoo; Choi, Haecheon; Yang, Kyung-Soo

    2011-04-01

    When a discrete-forcing immersed boundary method is applied to moving-body problems, it produces spurious force oscillations on a solid body. In the present study, we identify two sources of these force oscillations. One source is the spatial discontinuity in the pressure across the immersed boundary when a grid point located inside a solid body becomes a fluid point as the body moves. The addition of a mass source/sink together with momentum forcing, proposed by Kim et al. [J. Kim, D. Kim, H. Choi, An immersed-boundary finite volume method for simulations of flow in complex geometries, Journal of Computational Physics 171 (2001) 132-150], reduces the spurious force oscillations by alleviating this pressure discontinuity. The other source is the temporal discontinuity in the velocity at grid points where fluid becomes solid as the body moves. The magnitude of this velocity discontinuity decreases as the grid spacing near the immersed boundary is reduced. Four moving-body problems are simulated by varying the grid spacing at a fixed computational time step and at a constant CFL number, respectively. It is found that the spurious force oscillations decrease with decreasing grid spacing and increasing computational time step size, but that they depend more on the grid spacing than on the time step size.

  12. Analysing seismic-source mechanisms by linear-programming methods.

    USGS Publications Warehouse

    Julian, B.R.

    1986-01-01

    Linear-programming methods are powerful and efficient tools for objectively analysing seismic focal mechanisms and are applicable to a wide range of problems, including tsunami warning and nuclear explosion identification. The source mechanism is represented as a point in the 6-D space of moment-tensor components. The present method can easily be extended to fit observed seismic-wave amplitudes (either signed or absolute) subject to polarity constraints, and to assess the range of mechanisms consistent with a set of measured amplitudes. -from Author
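
    The linear-programming formulation can be sketched as an L1 amplitude fit: minimize the misfit between observed amplitudes d and predicted amplitudes G m over the six moment-tensor components m, optionally subject to first-motion polarity constraints P m >= 0. The reduction below is a generic one, and the matrix names are assumptions rather than the paper's exact program.

        import numpy as np
        from scipy.optimize import linprog

        def fit_moment_tensor(G, d, P=None):
            """L1-fit of moment tensor m to amplitudes d = G m.
            Variables x = [m, r]; residual bounds -r <= G m - d <= r."""
            n, m6 = G.shape
            c = np.concatenate([np.zeros(m6), np.ones(n)])   # minimize sum r_i
            A_ub = np.block([[G, -np.eye(n)],
                             [-G, -np.eye(n)]])
            b_ub = np.concatenate([d, -d])
            if P is not None:                # polarity constraints: -P m <= 0
                A_ub = np.vstack([A_ub,
                                  np.hstack([-P, np.zeros((P.shape[0], n))])])
                b_ub = np.concatenate([b_ub, np.zeros(P.shape[0])])
            res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                          bounds=[(None, None)] * m6 + [(0, None)] * n)
            return res.x[:m6]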

  13. Opendf - An Implementation of the Dual Fermion Method for Strongly Correlated Systems

    NASA Astrophysics Data System (ADS)

    Antipov, Andrey E.; LeBlanc, James P. F.; Gull, Emanuel

    The dual fermion method is a multiscale approach for solving lattice problems of interacting strongly correlated systems. In this paper, we present the opendf code, an open-source implementation of the dual fermion method applicable to fermionic single-orbital lattice models in dimensions D = 1, 2, 3 and 4. The method is built on a dynamical mean field starting point, which neglects all non-local correlations, and perturbatively adds spatial correlations. Our code is distributed as an open-source package under the GNU General Public License version 2.

  14. Deriving the Contribution of Blazars to the Fermi-LAT Extragalactic γ-ray Background at E > 10 GeV with Efficiency Corrections and Photon Statistics

    DOE PAGES

    Di Mauro, M.; Manconi, S.; Zechlin, H. -S.; ...

    2018-03-29

    Here, the Fermi Large Area Telescope (LAT) Collaboration has recently released the Third Catalog of Hard Fermi-LAT Sources (3FHL), which contains 1556 sources detected above 10 GeV with seven years of Pass 8 data. Building upon the 3FHL results, we investigate the flux distribution of sources at high Galactic latitudes (|b| > 20°), which are mostly blazars. We use two complementary techniques: (1) a source-detection efficiency correction method and (2) an analysis of pixel photon count statistics with the one-point probability distribution function (1pPDF). With the first method, using realistic Monte Carlo simulations of the γ-ray sky, we calculate the efficiency of the LAT to detect point sources. This enables us to find the intrinsic source-count distribution at photon fluxes down to 7.5 × 10⁻¹² ph cm⁻² s⁻¹. With this method, we detect a flux break at (3.5 ± 0.4) × 10⁻¹¹ ph cm⁻² s⁻¹ with a significance of at least 5.4σ. The power-law indices of the source-count distribution above and below the break are 2.09 ± 0.04 and 1.07 ± 0.27, respectively. This result is confirmed with the 1pPDF method, which has a sensitivity reach of ~10⁻¹¹ ph cm⁻² s⁻¹. Integrating the derived source-count distribution above the sensitivity of our analysis, we find that (42 ± 8)% of the extragalactic γ-ray background originates from blazars.

  16. Multi-channel Analysis of Passive Surface Waves (MAPS)

    NASA Astrophysics Data System (ADS)

    Xia, J.; Cheng, F. Mr; Xu, Z.; Wang, L.; Shen, C.; Liu, R.; Pan, Y.; Mi, B.; Hu, Y.

    2017-12-01

    Urbanization is an inevitable trend in the modernization of human society. At the end of 2013 the Chinese Central Government launched a national urbanization plan—"Three 100 Million People", which aggressively and steadily pushes forward urbanization. Under the plan, by 2020, approximately 100 million people from rural areas will permanently settle in towns, dwelling conditions of about 100 million people in towns and villages will be improved, and about 100 million people in central and western China will permanently settle in towns. China's urbanization process will run at the highest speed in its history. Environmentally friendly, non-destructive and non-invasive geophysical assessment methods have played an important role in the urbanization process in China. Because of the human noise and electromagnetic fields produced by industrial life, geophysical methods already used in urban environments (gravity, magnetic, electrical, seismic) face great challenges. But human activity also provides an effective source for passive seismic methods. Claerbout pointed out that the wavefield that would be received at one point with excitation at another point can be reconstructed by calculating the cross-correlation of noise records at the two surface points. Based on this idea (cross-correlation of two noise records) and the virtual source method, we proposed Multi-channel Analysis of Passive Surface Waves (MAPS). MAPS mainly uses traffic noise recorded with a linear receiver array. Because Multi-channel Analysis of Surface Waves produces a shear (S) wave velocity model with high resolution in the shallow part of the model, MAPS combines acquisition and processing of active-source and passive-source data in the same flow, which does not require distinguishing them. MAPS is also capable of real-time quality control of noise recording, which is important for near-surface applications in urban environments. Numerical and real-world examples demonstrate that MAPS can be used for accurate and fast imaging of high-frequency surface wave energy, and some examples also show that high-quality images similar to those obtained with active sources can be generated with only a few minutes of noise. Using cultural noise in towns, MAPS can image the S-wave velocity structure from the ground surface to hundreds of meters depth.
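
    Claerbout's idea, as used by MAPS, can be sketched as stacking windowed cross-correlations of two noise records to approximate the inter-receiver response; the window length, normalization, and names below are illustrative assumptions.

        import numpy as np

        def virtual_source_trace(rec1, rec2, win, nwin):
            """Stack cross-correlations of successive noise windows recorded
            at two surface points; the stack approximates the wavefield that
            a virtual source at one receiver would excite at the other."""
            stack = np.zeros(2 * win - 1)
            for i in range(nwin):
                a = rec1[i * win:(i + 1) * win]
                b = rec2[i * win:(i + 1) * win]
                a = (a - a.mean()) / (a.std() + 1e-12)   # crude normalization
                b = (b - b.mean()) / (b.std() + 1e-12)
                stack += np.correlate(a, b, mode="full")
            return stack / nwin                          # zero lag at win - 1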

  17. 2011 Radioactive Materials Usage Survey for Unmonitored Point Sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sturgeon, Richard W.

    This report provides the results of the 2011 Radioactive Materials Usage Survey for Unmonitored Point Sources (RMUS), which was updated by the Environmental Protection (ENV) Division's Environmental Stewardship (ES) at Los Alamos National Laboratory (LANL). ES classifies LANL emission sources into one of four Tiers, based on the potential effective dose equivalent (PEDE) calculated for each point source. Detailed descriptions of these tiers are provided in Section 3. The usage survey is conducted annually; in odd-numbered years the survey addresses all monitored and unmonitored point sources and in even-numbered years it addresses all Tier III and various selected other sources. This graded approach was designed to ensure that the appropriate emphasis is placed on point sources that have higher potential emissions to the environment. For calendar year (CY) 2011, ES has divided the usage survey into two distinct reports, one covering the monitored point sources (to be completed later this year) and this report covering all unmonitored point sources. This usage survey includes the following release points: (1) all unmonitored sources identified in the 2010 usage survey, (2) any new release points identified through the new project review (NPR) process, and (3) other release points as designated by the Rad-NESHAP Team Leader. Data for all unmonitored point sources at LANL are stored in the survey files at ES. LANL uses this survey data to help demonstrate compliance with Clean Air Act radioactive air emissions regulations (40 CFR 61, Subpart H). The remainder of this introduction provides a brief description of the information contained in each section. Section 2 of this report describes the methods that were employed for gathering usage survey data and for calculating usage, emissions, and dose for these point sources. It also references the appropriate ES procedures for further information. Section 3 describes the RMUS and explains how the survey results are organized. The RMUS Interview Form with the attached RMUS Process Form(s) provides the radioactive materials survey data by technical area (TA) and building number. The survey data for each release point includes information such as: exhaust stack identification number, room number, radioactive material source type (i.e., potential source or future potential source of air emissions), radionuclide, usage (in curies) and usage basis, physical state (gas, liquid, particulate, solid, or custom), release fraction (from Appendix D to 40 CFR 61, Subpart H), and process descriptions. In addition, the interview form also calculates emissions (in curies), lists mrem/Ci factors, calculates PEDEs, and states the location of the critical receptor for that release point. [The critical receptor is the maximally exposed off-site member of the public, specific to each individual facility.] Each of these data fields is described in this section. The Tier classification of release points, which was first introduced with the 1999 usage survey, is also described in detail in this section. Section 4 includes a brief discussion of the dose estimate methodology, and includes a discussion of several release points of particular interest in the CY 2011 usage survey report. It also includes a table of the calculated PEDEs for each release point at its critical receptor. Section 5 describes ES's approach to Quality Assurance (QA) for the usage survey. Satisfactory completion of the survey requires that team members responsible for Rad-NESHAP (National Emissions Standard for Hazardous Air Pollutants) compliance accurately collect and process several types of information, including radioactive materials usage data, process information, and supporting information. They must also perform and document the QA reviews outlined in Section 5.2.6 (Process Verification and Peer Review) of ES-RN, 'Quality Assurance Project Plan for the Rad-NESHAP Compliance Project' to verify that all information is complete and correct.

  18. METHOD OF PREPARING RADIOACTIVE CESIUM SOURCES

    DOEpatents

    Quinby, T.C.

    1963-12-17

    A method of preparing a cesium-containing radiation source with physical and chemical properties suitable for high-level use is presented. Finely divided silica is suspended in a solution containing cesium, normally the fission-product isotope cesium 137. Sodium tetraphenyl boron is then added to quantitatively precipitate the cesium. The cesium-containing precipitate is converted to borosilicate glass by heating to the melting point and cooling. Up to 60 weight percent cesium, with a resulting source activity of up to 21 curies per gram, is incorporated in the glass. (AEC)

  19. Störmer method for a problem of point injection of charged particles into a magnetic dipole field

    NASA Astrophysics Data System (ADS)

    Kolesnikov, E. K.

    2017-03-01

    The problem of point injection of charged particles into a magnetic dipole field was considered. Analytical expressions were obtained by the Störmer method for the regions of allowed momenta of charged particles at arbitrary points of a dipole field, for a given position of the point source of particles. It was found that, for a fixed location of the studied point, the coordinate space has a specific structure in the form of a set of seven regions, where the injector location in each region corresponds to a definite form of the allowed momentum region at the studied point. It was shown that in four of the mentioned regions the boundaries of the allowed regions are surfaces of revolution of conic sections.

  20. A spatial model to aggregate point-source and nonpoint-source water-quality data for large areas

    USGS Publications Warehouse

    White, D.A.; Smith, R.A.; Price, C.V.; Alexander, R.B.; Robinson, K.W.

    1992-01-01

    More objective and consistent methods are needed to assess water quality for large areas. A spatial model, one that capitalizes on the topologic relationships among spatial entities, is described for aggregating pollution sources from upstream drainage areas; it can be implemented on land surfaces having heterogeneous water-pollution effects. An infrastructure of stream networks and drainage basins, derived from 1:250,000-scale digital elevation models, defines the hydrologic system in this spatial model. The spatial relationships between point and nonpoint pollution sources and measurement locations are referenced to the hydrologic infrastructure with the aid of a geographic information system. A maximum-branching algorithm has been developed to simulate the effects of distance from a pollutant source to an arbitrary downstream location, a function traditionally employed in deterministic water-quality models. © 1992.
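
    The aggregation step can be illustrated by walking each reach's flow path to the outlet while attenuating its local load with distance, in the spirit of the distance-decay function described above; the dictionary-based network and the first-order decay factors below are illustrative assumptions, not the model's calibrated form.

        def accumulate_loads(downstream, local_load, decay, outlet):
            """Deliver each reach's local load to the outlet through the
            stream network, applying per-reach attenuation on the way.

            downstream : dict reach -> next reach toward the outlet
            local_load : dict reach -> load generated locally
            decay      : dict reach -> fraction surviving transport"""
            delivered = {}
            for reach, load in local_load.items():
                node, surviving = reach, load
                while node != outlet:            # walk the flow path down
                    surviving *= decay[node]     # first-order attenuation
                    node = downstream[node]
                delivered[reach] = surviving
            return delivered

        # toy network: reaches 1 and 3 drain into 2, which drains to the outlet
        print(accumulate_loads({1: 2, 3: 2, 2: "out"},
                               {1: 10.0, 2: 5.0, 3: 8.0},
                               {1: 0.9, 2: 0.8, 3: 0.7}, "out"))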

  1. Apportioning riverine DIN load to export coefficients of land uses in an urbanized watershed.

    PubMed

    Shih, Yu-Ting; Lee, Tsung-Yu; Huang, Jr-Chuan; Kao, Shuh-Ji; Chang

    2016-08-01

    The apportionment of riverine dissolved inorganic nitrogen (DIN) load to individual land uses on a watershed scale demands accurate DIN load estimation and differentiation of point and non-point sources, but both are rarely quantitatively determined in small montane watersheds. We used the Danshui River watershed of Taiwan, a mountainous urbanized watershed, to determine the export coefficients from the riverine DIN load via a reverse Monte Carlo approach. The results showed that the dynamics of N fluctuation determine the choice of load estimation method and sampling frequency. On a monthly sampling frequency basis, the average load estimate of the methods (GM, FW, and LI) outperformed that of any individual method. Export coefficient analysis showed that the forest DIN yield of 521.5 kg-N km⁻² yr⁻¹ was ~2.7-fold higher than the global riverine DIN yield (derived mainly from temperate large rivers with various land use compositions). Such a high yield was attributable to high rainfall and atmospheric N deposition. The export coefficient of agriculture was disproportionately larger than that of forest, suggesting that even a small conversion of forest to agriculture could lead to a considerable change in DIN load. The differentiation between point and non-point sources showed that untreated wastewater (a non-point source), accounting for ~93% of the total human-associated wastewater, resulted in a high export coefficient for urban land. Including both treated and untreated wastewater completes the N budget of wastewater. The export coefficient approach serves well to assess the riverine DIN load and to improve the understanding of the N cascade. Copyright © 2016 Elsevier B.V. All rights reserved.
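
    The reverse Monte Carlo step can be sketched as rejection sampling: draw export coefficients from prior ranges and keep the draws whose predicted load matches the observed riverine DIN load within a tolerance. All names, ranges, and numbers below are hypothetical, not the paper's calibrated values.

        import numpy as np

        rng = np.random.default_rng(0)

        def reverse_mc(areas, observed_load, bounds, n=100_000, tol=0.05):
            """areas: (k,) land-use areas; bounds: (k, 2) prior [low, high]
            for each export coefficient. Returns the accepted draws."""
            lo, hi = bounds[:, 0], bounds[:, 1]
            draws = rng.uniform(lo, hi, size=(n, len(areas)))
            pred = draws @ areas                 # predicted riverine load
            keep = np.abs(pred - observed_load) <= tol * observed_load
            return draws[keep]

        # hypothetical forest / agriculture / urban areas and priors
        accepted = reverse_mc(np.array([100.0, 20.0, 10.0]), 9e4,
                              np.array([[100, 1000], [500, 5000], [500, 8000]]))
        print(accepted.mean(axis=0))             # typical accepted coefficients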

  2. Method for high specific bioproductivity of α,ω-alkanedicarboxylic acids

    DOEpatents

    Mobley, David Paul; Shank, Gary Keith

    2000-01-01

    This invention provides a low-cost method of producing α,ω-alkanedicarboxylic acids. Particular bioconversion conditions result in highly efficient conversion of fatty acid, fatty acid ester, or alkane substrates to diacids. Candida tropicalis AR40 or similar yeast strains are grown in a medium containing a carbon source and a nitrogen source at a temperature of 31 °C to 38 °C, while additional carbon source is continuously added, until maximum cell growth is attained. Within 0-3 hours of this point, substrate is added to the culture to initiate conversion. An α,ω-alkanedicarboxylic acid made according to this method is also provided.

  3. Characterizing the size distribution of particles in urban stormwater by use of fixed-point sample-collection methods

    USGS Publications Warehouse

    Selbig, William R.; Bannerman, Roger T.

    2011-01-01

    The U.S. Geological Survey, in cooperation with the Wisconsin Department of Natural Resources (WDNR) and in collaboration with the Root River Municipal Stormwater Permit Group, monitored eight urban source areas representing six types of source areas in or near Madison, Wis., in an effort to improve characterization of particle-size distributions in urban stormwater by use of fixed-point sample collection methods. The types of source areas were parking lot, feeder street, collector street, arterial street, rooftop, and mixed use. This information can then be used by environmental managers and engineers when selecting the most appropriate control devices for the removal of solids from urban stormwater. Mixed-use and parking-lot study areas had the lowest median particle sizes (42 and 54 µm, respectively), followed by the collector street study area (70 µm). Both arterial street and institutional roof study areas had similar median particle sizes of approximately 95 µm. Finally, the feeder street study area showed the largest median particle size of nearly 200 µm. Median particle sizes measured as part of this study were somewhat comparable to those reported in previous studies from similar source areas. The majority of particle mass in four out of six source areas was silt and clay particles that are less than 32 µm in size. Distributions of particles up to 500 µm in size were highly variable both within and between source areas. Results of this study suggest substantial variability in data can inhibit the development of a single particle-size distribution that is representative of stormwater runoff generated from a single source area or land use. Continued development of improved sample collection methods, such as the depth-integrated sample arm, may reduce variability in particle-size distributions by mitigating the effect of sediment bias inherent with a fixed-point sampler.

  4. MOLECULAR EVALUATION OF CHANGES IN PLANKTONIC BACTERIAL POPULATIONS RESULTING FROM EQUINE FECAL CONTAMINATION IN A SUB-WATERSHED

    EPA Science Inventory

    Considerable emphasis has been placed on developing watershed-based strategies with the potential to reduce non-point-source fecal contamination. Molecular methods based on 16S ribosomal DNA (rDNA) were applied to try to determine the sources of fecal contamination. Objectiv...

  5. A Method for Harmonic Sources Detection based on Harmonic Distortion Power Rate

    NASA Astrophysics Data System (ADS)

    Lin, Ruixing; Xu, Lin; Zheng, Xian

    2018-03-01

    Harmonic source detection at the point of common coupling is an essential step for harmonic contribution determination and harmonic mitigation. In this paper, a harmonic distortion power rate index based on IEEE Std 1459-2010 is proposed for harmonic source location. A method based only on harmonic distortion power is not suitable when the background harmonic is large. To solve this problem, a threshold is determined from prior information: when the harmonic distortion power is larger than the threshold, the customer side is considered the main harmonic source; otherwise, the utility side is. A simple model of a public power system was built in MATLAB/Simulink, and field test results for typical harmonic loads verified the effectiveness of the proposed method.
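
    A loose sketch of the decision logic follows; it is not the exact IEEE Std 1459-2010 formulation, and the names, simplified index, and threshold are assumptions: compute a harmonic distortion power from the harmonic voltage/current components at the point of common coupling and compare it with a threshold derived from prior knowledge of the background harmonic level.

        import numpy as np

        def harmonic_distortion_power(v_h, i_h, phi_h):
            """Sum of harmonic active powers from rms harmonic voltages,
            currents, and phase angles (simplified stand-in index)."""
            return np.sum(v_h * i_h * np.cos(phi_h))

        def main_harmonic_source(v_h, i_h, phi_h, threshold):
            """Customer side dominates if the index exceeds the threshold."""
            d_h = harmonic_distortion_power(v_h, i_h, phi_h)
            return "customer side" if d_h > threshold else "utility side"

        # 5th and 7th harmonics with hypothetical values
        print(main_harmonic_source(np.array([2.0, 1.2]), np.array([8.0, 5.0]),
                                   np.array([0.4, 2.8]), threshold=10.0))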

  6. Evaluation of Rock Surface Characterization by Means of Temperature Distribution

    NASA Astrophysics Data System (ADS)

    Seker, D. Z.; Incekara, A. H.; Acar, A.; Kaya, S.; Bayram, B.; Sivri, N.

    2017-12-01

    Rocks occur in many different types formed over many years. Close-range photogrammetry is widely used and often preferred over conventional methods. In this method, overlapping photographs are the basic data source for the point cloud, which is in turn the main data source for a 3D model and gives analysts the possibility of automation. Because of the irregular and complex structure of rocks, representing their surfaces with a large number of points is more effective. Color differences on the rock surfaces, whether caused by weathering or naturally occurring, make it possible to produce a sufficient number of points from the photographs. Objects such as small trees, shrubs and weeds on and around the surface also contribute to this. These differences and properties are important for the efficient operation of the pixel-matching algorithms that generate an adequate point cloud from photographs. In this study, the possibility of using temperature distribution to interpret the roughness of a rock surface, one of the parameters representing the surface, was investigated. A small rock measuring 3 m × 1 m, located at the ITU Ayazaga Campus, was selected as the study object. Two different methods were used. The first is the production of a choropleth map by interpolation, using the temperature values of control points that were marked on the object and also used in the 3D model. The 3D object model was created with the help of terrestrial photographs and 12 control points marked on the object and coordinated. The temperature of each control point was measured with an infrared thermometer and used as the basic data source for creating the choropleth map by interpolation. Temperature values range from 32 to 37.2 degrees. In the second method, a 3D object model was produced from terrestrial thermal photographs. For this purpose, several terrestrial photographs were taken with a thermal camera and a 3D object model showing the temperature distribution was created. The temperature distributions in the two applications are almost identical in position. The areas on the rock surface where roughness values are higher than their surroundings can be clearly identified. When the temperature distributions produced by both methods are evaluated, it is observed that as the roughness of the surface increases, the temperature increases.

  7. The Herschel-SPIRE Point Source Catalog Version 2

    NASA Astrophysics Data System (ADS)

    Schulz, Bernhard; Marton, Gábor; Valtchanov, Ivan; María Pérez García, Ana; Pintér, Sándor; Appleton, Phil; Kiss, Csaba; Lim, Tanya; Lu, Nanyao; Papageorgiou, Andreas; Pearson, Chris; Rector, John; Sánchez Portal, Miguel; Shupe, David; Tóth, Viktor L.; Van Dyk, Schuyler; Varga-Verebélyi, Erika; Xu, Kevin

    2018-01-01

    The Herschel-SPIRE instrument mapped about 8% of the sky in submillimeter broad-band filters centered at 250, 350, and 500 microns (1199, 857, 600 GHz) with spatial resolutions of 17.9”, 24.2”, and 35.4”, respectively. We present here the second version of the SPIRE Point Source Catalog (SPSC). Stacking on WISE 22 micron catalog sources led to the identification of 108 maps, out of 6878, that had astrometry offsets of greater than 5”. After fixing these deviations and re-deriving all affected map mosaics, we repeated the systematic and homogeneous source extraction performed on all maps, using an improved version of the four different photometry extraction methods already employed in the generation of the first version of the catalog. Only regions affected by strong Galactic emission, mostly in the Galactic Plane, were excluded, as they exceed the limits of the available source extraction methods. Aimed primarily at point sources, which allow for the best photometric accuracy, the catalog also contains a significant fraction of slightly extended sources. With most SPIRE maps being confusion limited, uncertainties in flux densities were established as a function of structure noise and flux density, based on the results of artificial-source insertion experiments into real data over a range of celestial backgrounds. Many sources that do not pass the imposed SNR threshold have been rejected, especially at flux densities approaching the extragalactic confusion limit. A range of additional flags provides information on the reliability of the flux information, as well as the spatial extent and orientation of a source. The catalog should be particularly helpful for determining the cold dust content of extragalactic and Galactic sources with low to moderate background confusion. We present an overview of catalog construction, detailed content, and validation results, with a focus on the improvements achieved in the second version, which is soon to be released.

  8. Self-Similar Spin Images for Point Cloud Matching

    NASA Astrophysics Data System (ADS)

    Pulido, Daniel

    The rapid growth of Light Detection And Ranging (Lidar) technologies that collect, process, and disseminate 3D point clouds has allowed for increasingly accurate spatial modeling and analysis of the real world. Lidar sensors can generate massive 3D point clouds of a collection area that provide highly detailed spatial and radiometric information. However, a Lidar collection can be expensive and time consuming. Simultaneously, the growth of crowdsourced Web 2.0 data (e.g., Flickr, OpenStreetMap) has provided researchers with a wealth of freely available data sources that cover a variety of geographic areas. Crowdsourced data can be of varying quality and density. In addition, since it is typically not collected as part of a dedicated experiment but rather volunteered, when and where the data is collected is arbitrary. The integration of these two sources of geoinformation can provide researchers the ability to generate products and derive intelligence that mitigate their respective disadvantages and combine their advantages. Therefore, this research will address the problem of fusing two point clouds from potentially different sources. Specifically, we will consider two problems: scale matching and feature matching. Scale matching consists of computing feature metrics of each point cloud and analyzing their distributions to determine scale differences. Feature matching consists of defining local descriptors that are invariant to common dataset distortions (e.g., rotation and translation). Additionally, after matching the point clouds they can be registered and processed further (e.g., change detection). The objective of this research is to develop novel methods to fuse and enhance two point clouds from potentially disparate sources (e.g., Lidar and crowdsourced Web 2.0 datasets). The scope of this research is to investigate both scale and feature matching between two point clouds. The specific focus of this research will be in developing a novel local descriptor based on the concept of self-similarity to aid in the scale and feature matching steps. An open problem in fusion is how best to extract features from two point clouds and then perform feature-based matching. The proposed approach for this matching step is the use of local self-similarity as an invariant measure to match features. In particular, the proposed approach is to combine the concept of local self-similarity with a well-known feature descriptor, Spin Images, and thereby define "Self-Similar Spin Images". This approach is then extended to the case of matching two point clouds in very different coordinate systems (e.g., a geo-referenced Lidar point cloud and a stereo-image-derived point cloud without geo-referencing). The use of Self-Similar Spin Images is again applied to address this problem by introducing a "Self-Similar Keyscale" that matches the spatial scales of two point clouds. Another open problem is how best to detect changes in content between two point clouds. A method is proposed to find changes between two point clouds by analyzing the order statistics of the nearest neighbors between the two clouds, and thereby define the "Nearest Neighbor Order Statistic" method. Note that the well-known Hausdorff distance is a special case, being just the maximum order statistic. Therefore, by studying the entire histogram of these nearest neighbors it is expected to yield a more robust method to detect points that are present in one cloud but not the other. This approach is applied at multiple resolutions: changes detected at the coarsest level will yield large missing targets, and finer levels will yield smaller targets.
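
    A minimal sketch of the "Nearest Neighbor Order Statistic" idea, using a k-d tree for the nearest-neighbor queries; the directed Hausdorff distance falls out as the maximum order statistic, and the synthetic clouds are only for illustration.

        import numpy as np
        from scipy.spatial import cKDTree

        def nn_order_statistics(cloud_a, cloud_b):
            """Sorted nearest-neighbor distances from cloud_a to cloud_b.
            The last entry is the directed Hausdorff distance; the whole
            histogram supports change detection between the clouds."""
            d, _ = cKDTree(cloud_b).query(cloud_a)
            return np.sort(d)

        a = np.random.rand(1000, 3)
        b = np.vstack([a + 0.001, [[5.0, 5.0, 5.0]]])   # near-copy + new point
        stats = nn_order_statistics(b, a)
        print(stats[-1])    # large value exposes the point absent from a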

  9. A power comparison of generalized additive models and the spatial scan statistic in a case-control setting.

    PubMed

    Young, Robin L; Weinberg, Janice; Vieira, Verónica; Ozonoff, Al; Webster, Thomas F

    2010-07-19

    A common, important problem in spatial epidemiology is measuring and identifying variation in disease risk across a study region. In application of statistical methods, the problem has two parts. First, spatial variation in risk must be detected across the study region and, second, areas of increased or decreased risk must be correctly identified. The location of such areas may give clues to environmental sources of exposure and disease etiology. One statistical method applicable in spatial epidemiologic settings is a generalized additive model (GAM) which can be applied with a bivariate LOESS smoother to account for geographic location as a possible predictor of disease status. A natural hypothesis when applying this method is whether residential location of subjects is associated with the outcome, i.e. is the smoothing term necessary? Permutation tests are a reasonable hypothesis testing method and provide adequate power under a simple alternative hypothesis. These tests have yet to be compared to other spatial statistics. This research uses simulated point data generated under three alternative hypotheses to evaluate the properties of the permutation methods and compare them to the popular spatial scan statistic in a case-control setting. Case 1 was a single circular cluster centered in a circular study region. The spatial scan statistic had the highest power, though the GAM method estimates did not fall far behind. Case 2 was a single point source located at the center of a circular cluster and Case 3 was a line source at the center of the horizontal axis of a square study region. Each had linearly decreasing log-odds with distance from the source. The GAM methods outperformed the scan statistic in Cases 2 and 3. Comparing sensitivity, measured as the proportion of the exposure source correctly identified as high or low risk, the GAM methods outperformed the scan statistic in all three cases. The GAM permutation testing methods provide a regression-based alternative to the spatial scan statistic. Across all hypotheses examined in this research, the GAM methods had competing or greater power estimates and sensitivities exceeding those of the spatial scan statistic.

  11. Discretized energy minimization in a wave guide with point sources

    NASA Technical Reports Server (NTRS)

    Propst, G.

    1994-01-01

    An anti-noise problem on a finite time interval is solved by minimization of a quadratic functional on the Hilbert space of square integrable controls. To this end, the one-dimensional wave equation with point sources and pointwise reflecting boundary conditions is decomposed into a system for the two propagating components of waves. Well-posedness of this system is proved for a class of data that includes piecewise linear initial conditions and piecewise constant forcing functions. It is shown that for such data the optimal piecewise constant control is the solution of a sparse linear system. Methods for its computational treatment are presented as well as examples of their applicability. The convergence of discrete approximations to the general optimization problem is demonstrated by finite element methods.

  12. Probabilities for gravitational lensing by point masses in a locally inhomogeneous universe

    NASA Technical Reports Server (NTRS)

    Isaacson, Jeffrey A.; Canizares, Claude R.

    1989-01-01

    Probability functions for gravitational lensing by point masses that incorporate Poisson statistics and flux conservation are formulated in the Dyer-Roeder construction. Optical depths to lensing for distant sources are calculated using both the method of Press and Gunn (1973) which counts lenses in an otherwise empty cone, and the method of Ehlers and Schneider (1986) which projects lensing cross sections onto the source sphere. These are then used as parameters of the probability density for lensing in the case of a critical (q0 = 1/2) Friedmann universe. A comparison of the probability functions indicates that the effects of angle-averaging can be well approximated by adjusting the average magnification along a random line of sight so as to conserve flux.

  13. MEG (Magnetoencephalography) multipolar modeling of distributed sources using RAP-MUSIC (Recursively Applied and Projected Multiple Signal Characterization)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mosher, J. C.; Baillet, S.; Jerbi, K.

    2001-01-01

    We describe the use of truncated multipolar expansions for producing dynamic images of cortical neural activation from measurements of the magnetoencephalogram. We use a signal-subspace method to find the locations of a set of multipolar sources, each of which represents a region of activity in the cerebral cortex. Our method builds up an estimate of the sources in a recursive manner, i.e. we first search for point current dipoles, then magnetic dipoles, and finally first-order multipoles. The dynamic behavior of these sources is then computed using a linear fit to the spatiotemporal data. The final step in the procedure is to map each of the multipolar sources into an equivalent distributed source on the cortical surface. The method is illustrated through an application to epileptic interictal MEG data.

  14. Continuous wavelet transform analysis and modal location analysis acoustic emission source location for nuclear piping crack growth monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohd, Shukri; Holford, Karen M.; Pullin, Rhys

    2014-02-12

    Source location is an important feature of acoustic emission (AE) damage monitoring in nuclear piping. The ability to accurately locate sources can assist in source characterisation and early warning of failure. This paper describes the development of a novel AE source location technique termed 'Wavelet Transform analysis and Modal Location (WTML)', based on Lamb wave theory and time-frequency analysis, that can be used for global monitoring of plate-like steel structures. Source location was performed on a steel pipe 1500 mm long and 220 mm in outer diameter with a nominal thickness of 5 mm, under a planar location test setup using H-N sources. The accuracy of the new technique was compared with that of other AE source location methods such as the time of arrival (TOA) technique and deltaT location. The results of the study show that the WTML method produces more accurate location results compared with the TOA and triple point filtering location methods. The accuracy of the WTML approach is comparable with that of the deltaT location method but requires no initial acoustic calibration of the structure.

  15. Tailoring Retention Theories to Meet the Needs of Rural Appalachian Community College Students

    ERIC Educational Resources Information Center

    Hlinka, Karen R.

    2017-01-01

    Objective: Traditional-age students attending a rural community college in Kentucky's Appalachian region were interviewed, along with faculty members and administrators, to identify phenomena serving as sources of encouragement or as barriers to retention from the point of entry to the point of transfer. Method: Students' perspectives were…

  16. Application of the matrix exponential kernel

    NASA Technical Reports Server (NTRS)

    Rohach, A. F.

    1972-01-01

    A point matrix kernel for radiation transport, developed by the transmission matrix method, has been used to develop buildup factors and energy spectra through slab layers of different materials for a point isotropic source. Combinations of lead-water slabs were chosen for examples because of the extreme differences in shielding properties of these two materials.

  17. A comparison of cover calculation techniques for relating point-intercept vegetation sampling to remote sensing imagery

    USDA-ARS?s Scientific Manuscript database

    Accurate and timely spatial predictions of vegetation cover from remote imagery are an important data source for natural resource management. High-quality in situ data are needed to develop and validate these products. Point-intercept sampling techniques are a common method for obtaining quantitativ...

  18. Assessment of ambient background concentrations of elements in soil using combined survey and open-source data.

    PubMed

    Mikkonen, Hannah G; Clarke, Bradley O; Dasika, Raghava; Wallis, Christian J; Reichman, Suzie M

    2017-02-15

    Understanding ambient background concentrations in soil, at a local scale, is an essential part of environmental risk assessment. Where high resolution geochemical soil surveys have not been undertaken, soil data from alternative sources, such as environmental site assessment reports, can be used to support an understanding of ambient background conditions. Concentrations of metals/metalloids (As, Mn, Ni, Pb and Zn) were extracted from open-source environmental site assessment reports, for soils derived from the Newer Volcanics basalt, of Melbourne, Victoria, Australia. A manual screening method was applied to remove samples that were indicated to be contaminated by point sources and hence not representative of ambient background conditions. The manual screening approach was validated by comparison to data from a targeted background soil survey. Statistical methods for exclusion of contaminated samples from background soil datasets were compared to the manual screening method. The statistical methods tested included the Median plus Two Median Absolute Deviations, the upper whisker of a normal and log transformed Tukey boxplot, the point of inflection on a cumulative frequency plot and the 95th percentile. We have demonstrated that where anomalous sample results cannot be screened using site information, the Median plus Two Median Absolute Deviations is a conservative method for derivation of ambient background upper concentration limits (i.e. expected maximums). The upper whisker of a boxplot and the point of inflection on a cumulative frequency plot, were also considered adequate methods for deriving ambient background upper concentration limits, where the percentage of contaminated samples is <25%. Median ambient background concentrations of metals/metalloids in the Newer Volcanic soils of Melbourne were comparable to ambient background concentrations in Europe and the United States, except for Ni, which was naturally enriched in the basalt-derived soils of Melbourne. Copyright © 2016 Elsevier B.V. All rights reserved.
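
    The "median plus two median absolute deviations" screen reduces to a few lines; the concentrations below are hypothetical, illustrating why the rule is conservative in the presence of a contaminated sample.

        import numpy as np

        def mad_upper_limit(conc):
            """Ambient background upper concentration limit via the
            median plus two median absolute deviations rule."""
            med = np.median(conc)
            mad = np.median(np.abs(conc - med))
            return med + 2 * mad

        # hypothetical As concentrations (mg/kg) from assessment reports
        as_soil = np.array([3.1, 4.0, 2.7, 5.2, 3.8, 4.4, 19.0, 3.5])
        print(mad_upper_limit(as_soil))   # 5.2: the 19.0 outlier barely moves it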

  19. Time-integrated passive sampling as a complement to conventional point-in-time sampling for investigating drinking-water quality, McKenzie River Basin, Oregon, 2007 and 2010-11

    USGS Publications Warehouse

    McCarthy, Kathleen A.; Alvarez, David A.

    2014-01-01

    The Eugene Water & Electric Board (EWEB) supplies drinking water to approximately 200,000 people in Eugene, Oregon. The sole source of this water is the McKenzie River, which has consistently excellent water quality relative to established drinking-water standards. To ensure that this quality is maintained as land use in the source basin changes and water demands increase, EWEB has developed a proactive management strategy that includes a combination of conventional point-in-time discrete water sampling and time‑integrated passive sampling with a combination of chemical analyses and bioassays to explore water quality and identify where vulnerabilities may lie. In this report, we present the results from six passive‑sampling deployments at six sites in the basin, including the intake and outflow from the EWEB drinking‑water treatment plant (DWTP). This is the first known use of passive samplers to investigate both the source and finished water of a municipal DWTP. Results indicate that low concentrations of several polycyclic aromatic hydrocarbons and organohalogen compounds are consistently present in source waters, and that many of these compounds are also present in finished drinking water. The nature and patterns of compounds detected suggest that land-surface runoff and atmospheric deposition act as ongoing sources of polycyclic aromatic hydrocarbons, some currently used pesticides, and several legacy organochlorine pesticides. Comparison of results from point-in-time and time-integrated sampling indicate that these two methods are complementary and, when used together, provide a clearer understanding of contaminant sources than either method alone.

  20. A stereotaxic method of recording from single neurons in the intact in vivo eye of the cat.

    PubMed

    Molenaar, J; Van de Grind, W A

    1980-04-01

    A method is described for recording stereotaxically from single retinal neurons in the optically intact in vivo eye of the cat. The method is implemented with the help of a new type of stereotaxic instrument and a specially developed stereotaxic atlas of the cat's eye and retina. The instrument is extremely stable and facilitates intracellular recording from retinal neurons. The microelectrode can be rotated about two mutually perpendicular axes, which intersect in the freely positionable pivot point of the electrode manipulation system. When the pivot point is made to coincide with a small electrode-entrance hole in the sclera of the eye, a large retinal region can be reached through this fixed hole in the immobilized eye. The stereotaxic method makes it possible to choose a target point on the presented eye atlas and predict the settings of the instrument necessary to reach this target. This method also includes the prediction of the corresponding light stimulus position on a tangent screen and the calculation of the projection of the recording electrode on this screen. The sources of error in the method were studied experimentally and a numerical perturbation analysis was carried out to study the influence of each of the sources of error on the final result. The overall accuracy of the method is of the order of 5 degrees of visual angle, which will be sufficient for most purposes.

  1. Locating single-point sources from arrival times containing large picking errors (LPEs): the virtual field optimization method (VFOM)

    NASA Astrophysics Data System (ADS)

    Li, Xi-Bing; Wang, Ze-Wei; Dong, Long-Jun

    2016-01-01

    Microseismic monitoring systems using local location techniques tend to be timely, automatic and stable. One basic requirement of these systems is the automatic picking of arrival times. However, arrival times generated by automated techniques always contain large picking errors (LPEs), which may make the location solution unreliable and cause the integrated system to be unstable. To overcome the LPE issue, we propose the virtual field optimization method (VFOM) for locating single-point sources. In contrast to existing approaches, the VFOM optimizes a continuous, virtually established objective function to search the space for the common intersection of the hyperboloids determined by sensor pairs, rather than minimizing the residual between the model-calculated and measured arrivals. The results of numerical examples and on-site blasts show that the VFOM can obtain more precise and stable solutions than traditional methods when the input data contain LPEs. Furthermore, we discuss the impact of LPEs on objective functions to determine the LPE-tolerant mechanism, velocity sensitivity and stopping criteria of the VFOM. The proposed method is also capable of locating acoustic sources using passive techniques such as passive sonar detection and acoustic emission.
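
    To make the geometry concrete, here is a hedged sketch of a VFOM-style objective field: each sensor pair's arrival-time difference defines a hyperboloid, and the field sums squared deviations from all pair hyperboloids, so its minimum lies near their common intersection. The exact form of the paper's virtual field is not reproduced here, and the function and variable names are assumptions.

        import numpy as np

        def tdoa_field(x, sensors, t, v):
            """Virtual-field value at trial location x given sensor
            positions, picked arrival times t, and wave speed v."""
            f = 0.0
            for i in range(len(sensors)):
                for j in range(i + 1, len(sensors)):
                    di = np.linalg.norm(x - sensors[i])
                    dj = np.linalg.norm(x - sensors[j])
                    # deviation from the (i, j) pair's hyperboloid
                    f += (di - dj - v * (t[i] - t[j])) ** 2
            return f

    Minimizing this field over x with any global optimizer yields the source estimate; because every sensor pair contributes to the field, this is the intuition behind the LPE tolerance discussed above.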

  3. A Point Kinetics Model for Estimating Neutron Multiplication of Bare Uranium Metal in Tagged Neutron Measurements

    DOE PAGES

    Tweardy, Matthew C.; McConchie, Seth; Hayward, Jason P.

    2017-06-13

    An extension of the point kinetics model is developed in this paper to describe the neutron multiplicity response of a bare uranium object under interrogation by an associated particle imaging deuterium-tritium (D-T) measurement system. This extended model is used to estimate the total neutron multiplication of the uranium. Both MCNPX-PoliMi simulations and data from active interrogation measurements of highly enriched and depleted uranium geometries are used to evaluate the potential of this method and to identify the sources of systematic error. The detection efficiency correction for the measured coincidence response is identified as a large source of systematic error. If the detection process is not considered, results suggest that the method can estimate total multiplication to within 13% of the simulated value. Values for multiplicity constants in the point kinetics equations are sensitive to enrichment due to (n, xn) interactions by D-T neutrons and can introduce another significant source of systematic bias. This can theoretically be corrected if the isotopic composition is known a priori. Finally, the spatial dependence of multiplication is also suspected of introducing further systematic bias for high-multiplication uranium objects.

  4. Optimization of the transition path of the head hardening with using the genetic algorithms

    NASA Astrophysics Data System (ADS)

    Wróbel, Joanna; Kulawik, Adam

    2016-06-01

    An automated method for choosing the transition path of the hardening head in the heat treatment process of a plane steel element is proposed in this communication. The method determines the points on the path of the moving heat source using genetic algorithms. The fitness function of the algorithm is determined on the basis of effective stresses and the yield point, which depends on the phase composition. The path of the hardening tool, and hence the area of the heat affected zone, is determined from the obtained points. A numerical model of thermal phenomena, phase transformations in the solid state and mechanical phenomena in the hardening process is implemented in order to verify the presented method. The finite element method (FEM) is used to solve the heat transfer equation and obtain the required temperature fields. The moving heat source is modeled with a Gaussian distribution, and water cooling is also included. A macroscopic model based on analysis of the CCT and CHT diagrams of a medium-carbon steel is used to determine the phase transformations in the solid state. The finite element method is also used to solve the equilibrium equations, giving the stress field. Thermal and structural strains are taken into account in the constitutive relations.

  5. Using vadose zone data and spatial statistics to assess the impact of cultivated land and dairy waste lagoons on groundwater contamination

    NASA Astrophysics Data System (ADS)

    Baram, S.; Ronen, Z.; Kurtzman, D.; Peeters, A.; Dahan, O.

    2013-12-01

    Land cultivation and dairy waste lagoons are considered to be nonpoint and point sources of groundwater contamination by chloride (Cl-) and nitrate (NO3-). The objective of this work is to introduce a methodology to assess the past and future impacts of such agricultural activities on regional groundwater quality. The method is based on mass balances and on spatial statistical analysis of Cl- and NO3- concentration distributions in the saturated and unsaturated zones. The method enables quantitative analysis of the relation between the locations of pollution point sources and the spatial variability in Cl- and NO3- concentrations in groundwater. The method was applied to the Beer-Tuvia region, Israel, where intensive dairy farming along with land cultivation has been practiced for over 50 years above the local phreatic aquifer. Mass balance calculations accounted for the various groundwater recharge and abstraction sources and sinks in the entire region. The mass balances showed that leachates from lagoons and the cultivated land have contributed 6.0% and 89.4% of the total mass of Cl- added to the aquifer and 12.6% and 77.4% of the total mass of NO3-. The chemical composition of the aquifer and vadose zone water suggested that irrigated agricultural activity in the region is the main contributor of Cl- and NO3- to the groundwater. A low spatial correlation between the Cl- and NO3- concentrations in the groundwater and the on-land location of the dairy farms strengthened this assumption, despite the dairy waste lagoon being a point source for groundwater contamination by Cl- and NO3-. Results demonstrate that analyzing vadose zone and groundwater data by spatial statistical analysis methods can significantly contribute to the understanding of the relations between groundwater contaminating sources, and to assessing appropriate remediation steps.

  6. DEEP ATTRACTOR NETWORK FOR SINGLE-MICROPHONE SPEAKER SEPARATION.

    PubMed

    Chen, Zhuo; Luo, Yi; Mesgarani, Nima

    2017-03-01

    Despite the overwhelming success of deep learning in various speech processing tasks, the problem of separating simultaneous speakers in a mixture remains challenging. Two major difficulties in such systems are the arbitrary source permutation and the unknown number of sources in the mixture. We propose a novel deep learning framework for single-channel speech separation that creates attractor points in a high-dimensional embedding space of the acoustic signals, which pull together the time-frequency bins corresponding to each source. Attractor points in this study are created by finding the centroids of the sources in the embedding space, which are subsequently used to determine the similarity of each bin in the mixture to each source. The network is then trained to minimize the reconstruction error of each source by optimizing the embeddings. The proposed model differs from prior works in that it implements end-to-end training and does not depend on the number of sources in the mixture. Two strategies are explored at test time, K-means clustering and fixed attractor points, where the latter requires no post-processing and can be implemented in real time. We evaluated our system on the Wall Street Journal dataset and show a 5.49% improvement over the previous state-of-the-art methods.
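
    A minimal numpy sketch of the attractor idea described above: during training, the attractor of each source is the centroid of the embeddings of the time-frequency bins that source dominates, and soft masks follow from the similarity of every bin embedding to each attractor. The array shapes, the dot-product similarity, and the softmax normalization are illustrative assumptions, not the paper's exact architecture.

        import numpy as np

        def attractor_masks(V, Y):
            # V: (TF, D) embeddings of all time-frequency bins.
            # Y: (TF, C) one-hot ideal membership of each bin (training supervision).
            # Attractor = centroid of each source's bins in embedding space.
            A = (Y.T @ V) / (Y.sum(axis=0)[:, None] + 1e-8)          # (C, D)
            # Soft masks from the similarity of every bin to every attractor.
            logits = V @ A.T                                          # (TF, C)
            e = np.exp(logits - logits.max(axis=1, keepdims=True))
            return e / e.sum(axis=1, keepdims=True)

        # Toy usage: 2 sources, 20-dim embeddings, 1000 bins.
        rng = np.random.default_rng(0)
        V = rng.normal(size=(1000, 20))
        Y = np.eye(2)[rng.integers(0, 2, size=1000)]
        masks = attractor_masks(V, Y)                                 # (1000, 2)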

  7. RADIOISOTOPES USED IN PHARMACY. 5. IONIZING RADIATION IN PHARMACEUTICAL ANALYSIS (in Danish)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kristensen, K.

    1962-09-01

    The use of radioisotope methods for analyzing drugs is reviewed. It is pointed out that heretofore most methods have been based on isotope dilution principles, whereas in the future radioactivation analysis, especially with neutron sources, offers great possibilities. (BBB)

  8. Automated Mounting Bias Calibration for Airborne LIDAR System

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Jiang, W.; Jiang, S.

    2012-07-01

    Mounting bias is the major error source of an airborne LIDAR system. In this paper, an automated calibration method for estimating LIDAR system mounting parameters is introduced. The LIDAR direct geo-referencing model is used to calculate the systematic errors. Because LIDAR footprints are discretely sampled, truly corresponding laser points hardly exist between different strips, so the traditional corresponding-point methodology does not apply to LIDAR strip registration. We propose a Virtual Corresponding Point Model (VCPM) to resolve the correspondence problem among discrete laser points. Each VCPM contains a corresponding point and three real laser footprints, and two rules are defined to calculate the tie point coordinate from the real laser footprints. The Scale Invariant Feature Transform (SIFT) is used to extract corresponding points in LIDAR strips, and the automatic workflow of LIDAR system calibration based on the VCPM is described in detail. Practical examples illustrate the feasibility and effectiveness of the proposed calibration method.

  9. Role-based access control permissions

    DOEpatents

    Staggs, Kevin P.; Markham, Thomas R.; Hull Roskos, Julie J.; Chernoguzov, Alexander

    2017-04-25

    Devices, systems, and methods for role-based access control permissions are disclosed. One method includes a policy decision point that receives up-to-date security context information from one or more outside sources to determine whether to grant access for a data client to a portion of the system and creates an access vector including the determination; receiving, via a policy agent, a request by the data client for access to the portion of the computing system by the data client, wherein the policy agent checks to ensure there is a session established with communications and user/application enforcement points; receiving, via communications policy enforcement point, the request from the policy agent, wherein the communications policy enforcement point determines whether the data client is an authorized node, based upon the access vector received from the policy decision point; and receiving, via the user/application policy enforcement point, the request from the communications policy enforcement point.

  10. High spatial resolution detection of low-energy electrons using an event-counting method, application to point projection microscopy

    NASA Astrophysics Data System (ADS)

    Salançon, Evelyne; Degiovanni, Alain; Lapena, Laurent; Morin, Roger

    2018-04-01

    An event-counting method using a two-microchannel-plate stack in a low-energy electron point projection microscope is implemented. A detector spatial resolution of 15 μm, i.e., the distance between first-neighbor microchannels, is demonstrated. This leads to a 7 times better microscope resolution. Compared to previous work with neutrons [Tremsin et al., Nucl. Instrum. Methods Phys. Res., Sect. A 592, 374 (2008)], the large number of detection events achieved with electrons shows that the local response of the detector is mainly governed by the angle between the hexagonal structures of the two microchannel plates. Using this method in point projection microscopy offers the prospect of working with a greater source-object distance (350 nm instead of 50 nm), advancing toward atomic resolution.

  11. Effect of an overhead shield on gamma-ray skyshine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stedry, M.H.; Shultis, J.K.; Faw, R.E.

    1996-06-01

    A hybrid Monte Carlo and integral line-beam method is used to determine the effect of a horizontal slab shield above a gamma-ray source on the resulting skyshine doses. A simplified Monte Carlo procedure is used to determine the energy and angular distribution of photons escaping the source shield into the atmosphere. The escaping photons are then treated as a bare, point, skyshine source, and the integral line-beam method is used to estimate the skyshine dose at various distances from the source. From results for arbitrarily collimated and shielded sources, the skyshine dose is found to depend primarily on the mean-free-path thickness of the shield and only very weakly on the shield material.

  12. A Registration Method Based on Contour Point Cloud for 3D Whole-Body PET and CT Images

    PubMed Central

    Yang, Qiyao; Wang, Zhiguo; Zhang, Guoxu

    2017-01-01

    The PET and CT fusion image, combining anatomical and functional information, has important clinical meaning. An effective registration of PET and CT images is the basis of image fusion. This paper presents a multithreaded registration method based on contour point clouds for 3D whole-body PET and CT images. First, a geometric feature-based segmentation (GFS) method and a dynamic threshold denoising (DTD) method are proposed to preprocess the CT and PET images, respectively. Next, a new automated trunk-slice extraction method is presented for extracting the feature point clouds. Finally, the multithreaded Iterative Closest Point algorithm is adopted to drive an affine transform. We compare our method with a multiresolution registration method based on Mattes Mutual Information on 13 pairs (246~286 slices per pair) of 3D whole-body PET and CT data. Experimental results demonstrate the registration effectiveness of our method, with lower negative normalization correlation (NC = −0.933) on feature images and smaller Euclidean distance error (ED = 2.826) on landmark points, outperforming the source data (NC = −0.496, ED = 25.847) and the compared method (NC = −0.614, ED = 16.085). Moreover, our method is about ten times faster than the compared one. PMID:28316979
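
    For context, the sketch below shows one plain rigid-ICP loop on point clouds (closest-point correspondences from a KD-tree, Kabsch rotation update). The paper's variant is multithreaded and estimates an affine transform, so treat this simplified rigid version as an assumption-laden illustration rather than the authors' algorithm.

        import numpy as np
        from scipy.spatial import cKDTree

        def icp(src, dst, iters=30):
            # Align src (N, 3) to dst (M, 3) with a rigid transform x -> R x + t.
            R, t = np.eye(3), np.zeros(3)
            tree = cKDTree(dst)
            for _ in range(iters):
                moved = src @ R.T + t
                _, idx = tree.query(moved)            # closest-point correspondences
                P, Q = moved, dst[idx]
                mp, mq = P.mean(0), Q.mean(0)
                U, _, Vt = np.linalg.svd((P - mp).T @ (Q - mq))
                R_step = Vt.T @ U.T                   # Kabsch rotation estimate
                if np.linalg.det(R_step) < 0:         # guard against reflections
                    Vt[-1] *= -1
                    R_step = Vt.T @ U.T
                R, t = R_step @ R, R_step @ (t - mp) + mq   # compose the update
            return R, t

        # Toy usage: recover a known rotation and translation.
        rng = np.random.default_rng(1)
        dst = rng.normal(size=(400, 3))
        c, s = np.cos(0.1), np.sin(0.1)
        Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        src = (dst - np.array([0.2, 0.0, 0.1])) @ Rz
        R, t = icp(src, dst)          # R close to Rz, t close to (0.2, 0, 0.1)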

  13. Isolating intrinsic noise sources in a stochastic genetic switch.

    PubMed

    Newby, Jay M

    2012-01-01

    The stochastic mutual repressor model is analysed using perturbation methods. This simple model of a gene circuit consists of two genes and three promotor states. Either of the two protein products can dimerize, forming a repressor molecule that binds to the promotor of the other gene. When the repressor is bound to a promotor, the corresponding gene is not transcribed and no protein is produced. Either one of the promotors can be repressed at any given time or both can be unrepressed, leaving three possible promotor states. This model is analysed in its bistable regime in which the deterministic limit exhibits two stable fixed points and an unstable saddle, and the case of small noise is considered. On small timescales, the stochastic process fluctuates near one of the stable fixed points, and on large timescales, a metastable transition can occur, where fluctuations drive the system past the unstable saddle to the other stable fixed point. To explore how different intrinsic noise sources affect these transitions, fluctuations in protein production and degradation are eliminated, leaving fluctuations in the promotor state as the only source of noise in the system. The process without protein noise is then compared to the process with weak protein noise using perturbation methods and Monte Carlo simulations. It is found that some significant differences in the random process emerge when the intrinsic noise source is removed.

  14. Program VSAERO theory document: A computer program for calculating nonlinear aerodynamic characteristics of arbitrary configurations

    NASA Technical Reports Server (NTRS)

    Maskew, Brian

    1987-01-01

    The VSAERO low order panel method formulation is described for the calculation of subsonic aerodynamic characteristics of general configurations. The method is based on piecewise constant doublet and source singularities. Two forms of the internal Dirichlet boundary condition are discussed and the source distribution is determined by the external Neumann boundary condition. A number of basic test cases are examined. Calculations are compared with higher order solutions for a number of cases. It is demonstrated that for comparable density of control points where the boundary conditions are satisfied, the low order method gives comparable accuracy to the higher order solutions. It is also shown that problems associated with some earlier low order panel methods, e.g., leakage in internal flows and junctions and also poor trailing edge solutions, do not appear for the present method. Further, the application of the Kutta conditions is extremely simple; no extra equation or trailing edge velocity point is required. The method has very low computing costs and this has made it practical for application to nonlinear problems requiring iterative solutions for wake shape and surface boundary layer effects.

  15. Exact solutions for sound radiation from a moving monopole above an impedance plane.

    PubMed

    Ochmann, Martin

    2013-04-01

    The acoustic field of a monopole source moving with constant velocity at constant height above an infinite locally reacting plane can be expressed in analytical form by combining the Lorentz transformation with the method of superimposing complex or real point sources. For a plane with masslike response, the solution in Lorentz space consists of a superposition of monopoles only and therefore, does not differ in principle from the solution for the corresponding stationary boundary value problem. However, by considering a frequency independent surface impedance, e.g., with pure absorbing behavior, the half-space Green's function is now comprised of not only a line of monopoles but also of dipoles. For certain field points at a special line g, this solution can be written explicitly by using an exponential integral. For arbitrary field points, the method of stationary phase leads to an asymptotic solution for the reflection coefficient which agrees with prior results from the literature.

  16. Dynamic analysis of ultrasonically levitated droplet with moving particle semi-implicit and distributed point source method

    NASA Astrophysics Data System (ADS)

    Wada, Yuji; Yuge, Kohei; Nakamura, Ryohei; Tanaka, Hiroki; Nakamura, Kentaro

    2015-07-01

    Numerical analysis of an ultrasonically levitated droplet with a free-surface boundary is discussed. The droplet is known to change its shape from a sphere to a spheroid when it is suspended in a standing wave, owing to the acoustic radiation force. However, few numerical studies of this phenomenon, including the fluid dynamics inside the droplet, have been reported. In this paper, coupled analysis using the distributed point source method (DPSM) and the moving particle semi-implicit (MPS) method, neither of which requires grids or meshes and both of which therefore handle the moving boundary with ease, is suggested. A droplet levitated in a plane standing-wave field between a piston-vibrating ultrasonic transducer and a reflector is simulated with the DPSM-MPS coupled method. The dynamic change in the spheroidal shape of the droplet is successfully reproduced numerically, and the position of the center of gravity and the change in the spheroidal aspect ratio are discussed and compared with the previous literature.

  17. Distributed least-squares estimation of a remote chemical source via convex combination in wireless sensor networks.

    PubMed

    Cao, Meng-Li; Meng, Qing-Hao; Zeng, Ming; Sun, Biao; Li, Wei; Ding, Cheng-Jun

    2014-06-27

    This paper investigates the problem of locating a continuous chemical source using the concentration measurements provided by a wireless sensor network (WSN). Such a problem exists in various applications: eliminating explosives or drugs, detecting the leakage of noxious chemicals, etc. The limited power and bandwidth of WSNs have motivated collaborative in-network processing which is the focus of this paper. We propose a novel distributed least-squares estimation (DLSE) method to solve the chemical source localization (CSL) problem using a WSN. The DLSE method is realized by iteratively conducting convex combination of the locally estimated chemical source locations in a distributed manner. Performance assessments of our method are conducted using both simulations and real experiments. In the experiments, we propose a fitting method to identify both the release rate and the eddy diffusivity. The results show that the proposed DLSE method can overcome the negative interference of local minima and saddle points of the objective function, which would hinder the convergence of local search methods, especially in the case of locating a remote chemical source.
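
    A caricature of the distributed scheme, under assumed models: each node forms a crude local estimate of the source location from its own neighborhood of measurements, and iterated convex combination then drives the network toward a consensus estimate. The 1/r^2 dilution model, the fully connected averaging, and all constants are stand-ins, not the paper's DLSE equations.

        import numpy as np

        rng = np.random.default_rng(0)
        src = np.array([4.0, 3.0])                        # true source (toy)
        nodes = rng.uniform(0, 10, size=(12, 2))          # sensor positions
        conc = 1.0 / (np.sum((nodes - src) ** 2, axis=1) + 1.0)
        conc *= 1.0 + 0.05 * rng.normal(size=len(nodes))  # measurement noise

        grid = np.stack(np.meshgrid(np.linspace(0, 10, 41),
                                    np.linspace(0, 10, 41)), -1).reshape(-1, 2)

        def local_estimate(i, k=4):
            # Node i grid-searches the source location that best fits (in the
            # least-squares sense) its own and its k nearest neighbors' data.
            idx = np.argsort(np.sum((nodes - nodes[i]) ** 2, axis=1))[:k + 1]
            P = 1.0 / (np.sum((nodes[idx, None, :] - grid[None]) ** 2, -1) + 1.0)
            q = (P * conc[idx, None]).sum(0) / (P * P).sum(0)   # LS release rate
            err = ((conc[idx, None] - q * P) ** 2).sum(0)
            return grid[np.argmin(err)]

        est = np.array([local_estimate(i) for i in range(len(nodes))])

        # Iterative convex combination drives the node estimates to consensus
        # (fully connected, equal weights here; real WSNs combine radio neighbors).
        for _ in range(20):
            est = 0.5 * est + 0.5 * est.mean(axis=0)
        print(est[0])                                     # near the true source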

  18. Improvement of correlation-based centroiding methods for point source Shack-Hartmann wavefront sensor

    NASA Astrophysics Data System (ADS)

    Li, Xuxu; Li, Xinyang; Wang, Caixia

    2018-03-01

    This paper proposes an efficient approach to decrease the computational costs of correlation-based centroiding methods used for point source Shack-Hartmann wavefront sensors. Four typical similarity functions have been compared, i.e. the absolute difference function (ADF), ADF square (ADF2), square difference function (SDF), and cross-correlation function (CCF) using the Gaussian spot model. By combining them with fast search algorithms, such as three-step search (TSS), two-dimensional logarithmic search (TDL), cross search (CS), and orthogonal search (OS), computational costs can be reduced drastically without affecting the accuracy of centroid detection. Specifically, OS reduces calculation consumption by 90%. A comprehensive simulation indicates that CCF exhibits a better performance than other functions under various light-level conditions. Besides, the effectiveness of fast search algorithms has been verified.
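
    The sketch below evaluates three of the similarity functions named in the abstract (ADF, SDF, CCF) over all integer shifts of a Gaussian spot; the fast search algorithms (TSS, TDL, CS, OS) cut the cost by probing only a subset of these shifts. The spot size, the noise level, and the exhaustive search are illustrative assumptions.

        import numpy as np

        def similarity_maps(img, tpl):
            # Slide template tpl over img and return the ADF, SDF and CCF maps.
            # Exhaustive search is shown; fast search probes a subset of shifts.
            H, W = img.shape
            h, w = tpl.shape
            adf = np.empty((H - h + 1, W - w + 1))
            sdf, ccf = np.empty_like(adf), np.empty_like(adf)
            for y in range(adf.shape[0]):
                for x in range(adf.shape[1]):
                    win = img[y:y + h, x:x + w]
                    adf[y, x] = np.abs(win - tpl).sum()
                    sdf[y, x] = ((win - tpl) ** 2).sum()
                    ccf[y, x] = (win * tpl).sum()
            return adf, sdf, ccf

        # Gaussian spot model: centroid = shift minimizing ADF/SDF (or maximizing CCF).
        rng = np.random.default_rng(0)
        yy, xx = np.mgrid[0:9, 0:9]
        tpl = np.exp(-((yy - 4) ** 2 + (xx - 4) ** 2) / 4.0)
        img = np.zeros((32, 32))
        img[10:19, 13:22] += tpl
        img += 0.01 * rng.normal(size=img.shape)
        adf, sdf, ccf = similarity_maps(img, tpl)
        print(np.unravel_index(ccf.argmax(), ccf.shape))   # (10, 13)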

  19. Apparatus for and method of performing spectroscopic analysis on an article

    DOEpatents

    Powell, George Louis; Hallman, Jr., Russell Louis

    1999-01-01

    An apparatus for and method of analyzing an article having an entrance and an exit in communication with the entrance. The apparatus comprises: a spectrometer having an emission source with a focal point; a plurality of mirrors; and a detector connected to the spectroscope. The emission source is positioned so that its focal point is substantially coextensive with the entrance of the article. The mirrors comprise: a first mirror positionable adjacent the exit of the article and a second mirror positioned relative to the other of said plurality of mirrors. The first mirror receives scattered emissions exiting the article and substantially collimates the scattered emissions. The second mirror substantially focuses the collimated emissions into a focused emission. The detector receives the focused emission from the mirrors.

  20. Apparatus for and method of performing spectroscopic analysis on an article

    DOEpatents

    Powell, G.L.; Hallman, R.L. Jr.

    1999-04-20

    An apparatus and method are disclosed for analyzing an article having an entrance and an exit in communication with the entrance. The apparatus comprises: a spectrometer having an emission source with a focal point; a plurality of mirrors; and a detector connected to the spectroscope. The emission source is positioned so that its focal point is substantially coextensive with the entrance of the article. The mirrors comprise: a first mirror positionable adjacent the exit of the article and a second mirror positioned relative to the other of said plurality of mirrors. The first mirror receives scattered emissions exiting the article and substantially collimates the scattered emissions. The second mirror substantially focuses the collimated emissions into a focused emission. The detector receives the focused emission from the mirrors. 6 figs.

  1. Determination of Jet Noise Radiation Source Locations using a Dual Sideline Cross-Correlation/Spectrum Technique

    NASA Technical Reports Server (NTRS)

    Allen, C. S.; Jaeger, S. M.

    1999-01-01

    The goal of our efforts is to extrapolate near-field jet noise measurements to the geometric far field, where the jet noise sources appear to radiate from a single point. To accomplish this, information about the location of the noise sources in the jet plume, the radiation patterns of the noise sources, and the sound pressure level distribution of the radiated field must be obtained. Since source locations and radiation patterns cannot be found with simple single-microphone measurements, a more complicated method must be used.

  2. Combination of ray-tracing and the method of moments for electromagnetic radiation analysis using reduced meshes

    NASA Astrophysics Data System (ADS)

    Delgado, Carlos; Cátedra, Manuel Felipe

    2018-05-01

    This work presents a technique that allows a very noticeable relaxation of the computational requirements for full-wave electromagnetic simulations based on the Method of Moments. A ray-tracing analysis of the geometry is performed in order to extract the critical points with significant contributions. These points are then used to generate a reduced mesh, considering the regions of the geometry that surround each critical point and taking into account the electrical path followed from the source. The electromagnetic analysis of the reduced mesh produces very accurate results, requiring a fraction of the resources that the conventional analysis would utilize.

  3. The effect of barriers on wave propagation phenomena: With application for aircraft noise shielding

    NASA Technical Reports Server (NTRS)

    Mgana, C. V. M.; Chang, I. D.

    1982-01-01

    The frequency spectrum was divided into high and low frequency regimes, and two separate methods were developed and applied to account for physical factors associated with flight conditions. For long-wave propagation, the acoustic field due to a point source near a solid obstacle was treated in terms of an inner region, where the fluid motion is essentially incompressible, and an outer region, which is a linear acoustic field generated by hydrodynamic disturbances in the inner region. This method was applied to the case of a finite slotted plate modelled to represent a wing with an extended flap, for both stationary and moving media. Ray acoustics, the Kirchhoff integral formulation, and the stationary phase approximation were combined to study short-wavelength propagation in many limiting cases, as well as in the case of a semi-infinite plate in a uniform flow with a point source above the plate embedded in a different flow velocity to simulate an engine exhaust jet stream surrounding the source.

  4. STARBLADE: STar and Artefact Removal with a Bayesian Lightweight Algorithm from Diffuse Emission

    NASA Astrophysics Data System (ADS)

    Knollmüller, Jakob; Frank, Philipp; Ensslin, Torsten A.

    2018-05-01

    STARBLADE (STar and Artefact Removal with a Bayesian Lightweight Algorithm from Diffuse Emission) separates superimposed point-like sources from a diffuse background by imposing physically motivated models as prior knowledge. The algorithm can also be used on noisy and convolved data, though performing a proper reconstruction including a deconvolution prior to the application of the algorithm is advised; the algorithm could also be used within a denoising imaging method. STARBLADE learns the correlation structure of the diffuse emission and takes it into account to determine the occurrence and strength of a superimposed point source.

  5. A study on the evaporation process with multiple point-sources

    NASA Astrophysics Data System (ADS)

    Jun, Sunghoon; Kim, Minseok; Kim, Suk Han; Lee, Moon Yong; Lee, Eung Ki

    2013-10-01

    In Organic Light Emitting Display (OLED) manufacturing, there is a need to enlarge the mother glass substrate to raise productivity and enable OLED TV. The larger the glass substrate, the more difficult it is to establish a uniform thickness profile of the organic thin-film layer in the vacuum evaporation process. In this paper, a multiple point-source evaporation process is proposed to deposit the organic layer uniformly. Using this method, a thickness uniformity of 3.75% was achieved along a 1,300 mm length of a Gen. 5.5 glass substrate (1300 × 1500 mm2).

  6. Parameter estimation for slit-type scanning sensors

    NASA Technical Reports Server (NTRS)

    Fowler, J. W.; Rolfe, E. G.

    1981-01-01

    The Infrared Astronomical Satellite, scheduled for launch into a 900 km near-polar orbit in August 1982, will perform an infrared point source survey by scanning the sky with slit-type sensors. The description of position information is shown to require the use of a non-Gaussian random variable. Methods are described for deciding whether separate detections stem from a single common source, and a formalism is developed for the scan-to-scan problems of identifying multiple sightings of inertially fixed point sources and of combining their individual measurements into a refined estimate. Several cases are given where the general theory yields results which are quite different from the corresponding Gaussian applications, showing that argument by Gaussian analogy would lead to error.

  7. Inverse identification of unknown finite-duration air pollutant release from a point source in urban environment

    NASA Astrophysics Data System (ADS)

    Kovalets, Ivan V.; Efthimiou, George C.; Andronopoulos, Spyros; Venetsanos, Alexander G.; Argyropoulos, Christos D.; Kakosimos, Konstantinos E.

    2018-05-01

    In this work, we present an inverse computational method for the identification of the location, start time, duration and quantity of emitted substance of an unknown air pollution source of finite time duration in an urban environment. We considered a problem of transient pollutant dispersion under stationary meteorological fields, which is a reasonable assumption for the assimilation of available concentration measurements within 1 h from the start of an incident. We optimized the calculation of the source-receptor function by developing a method which requires integrating as many backward adjoint equations as the available measurement stations. This resulted in high numerical efficiency of the method. The source parameters are computed by maximizing the correlation function of the simulated and observed concentrations. The method has been integrated into the CFD code ADREA-HF and it has been tested successfully by performing a series of source inversion runs using the data of 200 individual realizations of puff releases, previously generated in a wind tunnel experiment.
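
    A toy version of the inversion step, with a Gaussian-puff source-receptor function standing in for the backward adjoint output of the CFD model: a finite-duration release is a superposition of puffs, and candidate source parameters are scored by the correlation between simulated and observed concentrations (the release rate scales out of the correlation and can be recovered afterwards by regression). Everything numeric here is assumed.

        import numpy as np

        def correlation(a, b):
            a, b = a - a.mean(), b - b.mean()
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

        stations = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
        t = np.arange(0.0, 40.0, 0.5)

        def srf(g, station, times):
            # Toy Gaussian-puff response of one station to a unit puff released
            # at g at time 0; a stand-in for one backward adjoint run per station.
            d = np.linalg.norm(g - station)
            return np.exp(-((times - d) ** 2) / 4.0) / (d + 1.0)

        def simulate(g, t0, dur):
            # A finite-duration release is a superposition of puffs on [t0, t0+dur].
            taus = np.arange(t0, t0 + dur, 0.5)
            return np.array([sum(srf(g, s, t - tau) for tau in taus)
                             for s in stations])

        obs = simulate(np.array([4.0, 4.0]), 5.0, 6.0)     # synthetic 'measurements'

        # Score candidate (location, start time, duration) triplets by correlation.
        candidates = [(np.array([x, y], float), s0, d0)
                      for x in range(0, 11, 2) for y in range(0, 9, 2)
                      for s0 in (3.0, 5.0, 7.0) for d0 in (4.0, 6.0, 8.0)]
        best = max(candidates, key=lambda p: correlation(obs.ravel(),
                                                         simulate(*p).ravel()))
        print(best)                                        # ([4., 4.], 5.0, 6.0)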

  8. Acoustic radiation from the submerged circular cylindrical shell treated with active constrained layer damping

    NASA Astrophysics Data System (ADS)

    Yuan, Li-Yun; Xiang, Yu; Lu, Jing; Jiang, Hong-Hua

    2015-12-01

    Based on the transfer matrix method for a circular cylindrical shell treated with active constrained layer damping (ACLD), combined with the analytical solution of the Helmholtz equation for a point source, a multi-point multipole virtual source simulation method is proposed for the first time for solving the acoustic radiation problem of a submerged ACLD shell. In this approach, virtual point sources are assumed to be evenly distributed on the axial line of the cylindrical shell, and the sound pressure is written as a sum of wave-function series with undetermined coefficients. The approach is demonstrated to be accurate for the radiated acoustic pressure of pulsating and oscillating spheres, and is also shown to be accurate for a stiffened cylindrical shell. The number of virtual distributed point sources and the truncation of the wave-function series are then discussed for approximating the radiated acoustic pressure of an ACLD cylindrical shell. Applying this method, the radiated acoustic pressure of a submerged ACLD cylindrical shell is examined for different boundary conditions, thicknesses of the viscoelastic and piezoelectric layers, feedback gains for the piezoelectric layer, and ACLD coverage. Results show that, in general, a thicker piezoelectric layer, a larger velocity feedback gain, and larger ACLD coverage give a better damping effect for the whole structure, whereas a thicker viscoelastic layer does not always yield better acoustic characteristics. Project supported by the National Natural Science Foundation of China (Grant Nos. 11162001, 11502056, and 51105083), the Natural Science Foundation of Guangxi Zhuang Autonomous Region, China (Grant No. 2012GXNSFAA053207), the Doctor Foundation of Guangxi University of Science and Technology, China (Grant No. 12Z09), and the Development Project of the Key Laboratory of Guangxi Zhuang Autonomous Region, China (Grant No. 1404544).

  9. Pilot points method for conditioning multiple-point statistical facies simulation on flow data

    NASA Astrophysics Data System (ADS)

    Ma, Wei; Jafarpour, Behnam

    2018-05-01

    We propose a new pilot points method for conditioning discrete multiple-point statistical (MPS) facies simulation on dynamic flow data. While conditioning MPS simulation on static hard data is straightforward, calibration against nonlinear flow data is nontrivial. The proposed method generates conditional models from a conceptual model of geologic connectivity, known as a training image (TI), by strategically placing and estimating pilot points. To place pilot points, a score map is generated based on three sources of information: (i) the uncertainty in the facies distribution, (ii) the model response sensitivity information, and (iii) the observed flow data. Once the pilot points are placed, the facies values at these points are inferred from production data and then are used, along with available hard data at well locations, to simulate a new set of conditional facies realizations. While facies estimation at the pilot points can be performed using different inversion algorithms, in this study the ensemble smoother (ES) is adopted to update permeability maps from production data, which are then used to statistically infer facies types at the pilot point locations. The developed method combines the information in the flow data and the TI by using the former to infer facies values at selected locations away from the wells and the latter to ensure consistent facies structure and connectivity away from measurement locations. Several numerical experiments are used to evaluate the performance of the developed method and to discuss its important properties.
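
    A schematic of the placement step only, under assumptions: three normalized maps stand in for the uncertainty, sensitivity, and data-mismatch information named in the abstract, and a greedy rule with a minimum spacing picks the pilot-point cells. The equal weights and the spacing constraint are illustrative choices, not the paper's.

        import numpy as np

        def score_map(uncertainty, sensitivity, mismatch, w=(1/3, 1/3, 1/3)):
            # Combine the three information sources; each input is a 2-D map,
            # normalized to [0, 1] before weighting.
            norm = lambda m: (m - m.min()) / (np.ptp(m) + 1e-12)
            return (w[0] * norm(uncertainty) + w[1] * norm(sensitivity)
                    + w[2] * norm(mismatch))

        def place_pilot_points(score, k, min_dist=5):
            # Greedy selection of k high-score cells with a spacing constraint,
            # so pilot points do not cluster in one high-score patch.
            s, pts = score.astype(float).copy(), []
            ii, jj = np.ogrid[:s.shape[0], :s.shape[1]]
            while len(pts) < k:
                i, j = np.unravel_index(np.argmax(s), s.shape)
                pts.append((i, j))
                s[(ii - i) ** 2 + (jj - j) ** 2 < min_dist ** 2] = -np.inf
            return pts

        rng = np.random.default_rng(0)
        maps = [rng.random((50, 50)) for _ in range(3)]   # stand-in maps
        print(place_pilot_points(score_map(*maps), k=10))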

  10. Assimilating Flow Data into Complex Multiple-Point Statistical Facies Models Using Pilot Points Method

    NASA Astrophysics Data System (ADS)

    Ma, W.; Jafarpour, B.

    2017-12-01

    We develop a new pilot points method for conditioning discrete multiple-point statistical (MPS) facies simulation on dynamic flow data. While conditioning MPS simulation on static hard data is straightforward, calibration against nonlinear flow data is nontrivial. The proposed method generates conditional models from a conceptual model of geologic connectivity, known as a training image (TI), by strategically placing and estimating pilot points. To place pilot points, a score map is generated based on three sources of information: (i) the uncertainty in the facies distribution, (ii) the model response sensitivity information, and (iii) the observed flow data. Once the pilot points are placed, the facies values at these points are inferred from production data and are used, along with available hard data at well locations, to simulate a new set of conditional facies realizations. While facies estimation at the pilot points can be performed using different inversion algorithms, in this study the ensemble smoother (ES) and its multiple data assimilation variant (ES-MDA) are adopted to update permeability maps from production data, which are then used to statistically infer facies types at the pilot point locations. The developed method combines the information in the flow data and the TI by using the former to infer facies values at selected locations away from the wells and the latter to ensure consistent facies structure and connectivity away from measurement locations. Several numerical experiments are used to evaluate the performance of the developed method and to discuss its important properties.

  11. Lidar Based Emissions Measurement at the Whole Facility Scale: Method and Error Analysis

    USDA-ARS?s Scientific Manuscript database

    Particulate emissions from agricultural sources vary from dust created by operations and animal movement to the fine secondary particulates generated from ammonia and other emitted gases. The development of reliable facility emission data using point sampling methods designed to characterize regiona...

  12. 40 CFR 408.11 - Specialized definitions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... STANDARDS CANNED AND PRESERVED SEAFOOD PROCESSING POINT SOURCE CATEGORY Farm-Raised Catfish Processing... apply to this subpart. (b) The term oil and grease shall mean those components of a waste water amenable to measurement by the method described in Methods for Chemical Analysis of Water and Wastes, 1971...

  13. The Relationship between Sources and Functions of Social Support and Dimensions of Child- and Parent-Related Stress

    ERIC Educational Resources Information Center

    Guralnick, M. J.; Hammond, M. A.; Neville, B.; Connor, R. T.

    2008-01-01

    Background: In this longitudinal study, we examined the relationship between the sources and functions of social support and dimensions of child- and parent-related stress for mothers of young children with mild developmental delays. Methods: Sixty-three mothers completed assessments of stress and support at two time points. Results: Multiple…

  14. Issues and Methods Concerning the Evaluation of Hypersingular and Near-Hypersingular Integrals in BEM Formulations

    NASA Technical Reports Server (NTRS)

    Fink, P. W.; Khayat, M. A.; Wilton, D. R.

    2005-01-01

    It is known that higher order modeling of the sources and the geometry in Boundary Element Modeling (BEM) formulations is essential to highly efficient computational electromagnetics. However, in order to achieve the benefits of higher order basis and geometry modeling, the singular and near-singular terms arising in BEM formulations must be integrated accurately. In particular, the accurate integration of near-singular terms, which occur when observation points are near but not on source regions of the scattering object, has been considered one of the remaining limitations on the computational efficiency of integral equation methods. The method of singularity subtraction has been used extensively for the evaluation of singular and near-singular terms. Piecewise integration of the source terms in this manner, while manageable for bases of constant and linear orders, becomes unwieldy and prone to error for bases of higher order. Furthermore, we find that the singularity subtraction method is not conducive to object-oriented programming practices, particularly in the context of multiple operators. To extend the capabilities, accuracy, and maintainability of general-purpose codes, the subtraction method is being replaced in favor of purely numerical quadrature schemes. These schemes employ singularity cancellation methods in which a change of variables is chosen such that the Jacobian of the transformation cancels the singularity. An example of the singularity cancellation approach is the Duffy method, which has two major drawbacks: 1) in the resulting integrand, it produces an angular variation about the singular point that becomes nearly singular for observation points close to an edge of the parent element, and 2) it appears not to work well when applied to nearly singular integrals. Recently, the authors have introduced the transformation $u(x') = \sinh^{-1}\bigl(x'/\sqrt{y'^2 + z^2}\bigr)$ for integrating functions of the form $I = \int_D \Lambda(\mathbf{r}')\,\frac{e^{-jkR}}{4\pi R}\,dD$, where $\Lambda(\mathbf{r}')$ is a vector or scalar basis function and $R = \sqrt{x'^2 + y'^2 + z^2}$ is the distance between source and observation points. This scheme has all of the advantages of the Duffy method while avoiding the disadvantages listed above. In this presentation we survey similar approaches for handling singular and near-singular terms for kernels with $1/R^2$-type behavior, addressing potential pitfalls and offering techniques to efficiently handle special cases.

  15. Investigation on the pinch point position in heat exchangers

    NASA Astrophysics Data System (ADS)

    Pan, Lisheng; Shi, Weixiu

    2016-06-01

    The pinch point is important for analyzing heat transfer in thermodynamic cycles. With the aim of revealing the importance of determining the pinch point accurately, research on the pinch point position is carried out by a theoretical method. The results show that the pinch point position depends on the parameters of the heat transfer fluids and on the major fluid properties. In most cases, the pinch point is located at the bubble point for the evaporator and at the dew point for the condenser. However, the pinch point shifts to the supercooled liquid state under near-critical conditions for the evaporator. Similarly, it shifts to the superheated vapor state as the condensing temperature approaches the critical temperature for the condenser. It can even shift to the working-fluid entrance of the evaporator or the supercritical heater when the heat source temperature is very high compared with the heat-absorption temperature. A wrong pinch point position may lead to serious errors. In brief, the pinch point should be found by an iterative method in all conditions rather than taken for granted.
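
    The practical content of the last sentence can be shown with a toy constant-property counter-flow evaporator: build the hot- and cold-stream temperature profiles as functions of cumulative duty and locate the minimum temperature difference by scanning, instead of assuming it sits at the bubble point. All property values below are invented for illustration.

        import numpy as np

        # Toy counter-flow evaporator: hot water heats a working fluid that
        # preheats, boils at T_sat, then superheats. Constant properties assumed.
        cp_hot, m_hot, T_hot_in = 4.2, 1.0, 130.0       # kJ/(kg K), kg/s, deg C
        cp_liq, cp_vap, h_fg, T_sat = 1.5, 1.1, 180.0, 95.0
        m_wf, T_wf_in, T_wf_out = 0.5, 30.0, 110.0

        Q_pre = m_wf * cp_liq * (T_sat - T_wf_in)       # preheating duty, kW
        Q_evap = m_wf * h_fg                            # evaporation duty
        Q_sup = m_wf * cp_vap * (T_wf_out - T_sat)      # superheating duty
        Q = np.linspace(0.0, Q_pre + Q_evap + Q_sup, 2001)

        def T_cold(q):
            t = np.where(q <= Q_pre, T_wf_in + q / (m_wf * cp_liq), T_sat)
            return np.where(q > Q_pre + Q_evap,
                            T_sat + (q - Q_pre - Q_evap) / (m_wf * cp_vap), t)

        # Counter-flow: duty is counted from the cold-fluid inlet end.
        T_hot = T_hot_in - (Q[-1] - Q) / (m_hot * cp_hot)
        dT = T_hot - T_cold(Q)
        i = int(np.argmin(dT))
        print(f"pinch at Q = {Q[i]:.1f} kW of {Q[-1]:.1f} kW, dT = {dT[i]:.1f} K")
        # Here the minimum lands at the bubble point (Q = Q_pre); with other
        # parameter choices (near-critical pressures, very hot sources) it
        # shifts elsewhere, which is why the profile is scanned, not assumed.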

  16. 40 CFR 435.11 - Specialized definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Extraction Point Source Category,” EPA-821-R-11-004. See paragraph (uu) of this section. (e) Biodegradation... Bottle Biodegradation Test System: Modified ISO 11734:1995,” EPA Method 1647, supplemented with...

  17. 40 CFR 435.11 - Specialized definitions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Extraction Point Source Category,” EPA-821-R-11-004. See paragraph (uu) of this section. (e) Biodegradation... Bottle Biodegradation Test System: Modified ISO 11734:1995,” EPA Method 1647, supplemented with...

  18. 40 CFR 435.11 - Specialized definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Extraction Point Source Category,” EPA-821-R-11-004. See paragraph (uu) of this section. (e) Biodegradation... Bottle Biodegradation Test System: Modified ISO 11734:1995,” EPA Method 1647, supplemented with...

  19. Evaluating Google, Twitter, and Wikipedia as Tools for Influenza Surveillance Using Bayesian Change Point Analysis: A Comparative Analysis

    PubMed Central

    Hopkins, Richard S; Cook, Robert L; Striley, Catherine W

    2016-01-01

    Background: Traditional influenza surveillance relies on the influenza-like illness (ILI) syndrome reported by health care providers. It primarily captures individuals who seek medical care and misses those who do not. Recently, Web-based data sources have been studied for application to public health surveillance, as a growing number of people search, post, and tweet about their illnesses before seeking medical care. Existing research has shown some promise of using data from Google, Twitter, and Wikipedia to complement traditional surveillance for ILI. However, past studies have evaluated these Web-based sources individually or in pairs without comparing all 3 of them, and it would be beneficial to know which of the Web-based sources performs best before considering it as a complement to traditional methods. Objective: The objective of this study is to comparatively analyze Google, Twitter, and Wikipedia by examining which best corresponds with Centers for Disease Control and Prevention (CDC) ILI data. It was hypothesized that Wikipedia would best correspond with CDC ILI data, as previous research found it to be least influenced by high media coverage in comparison with Google and Twitter. Methods: Publicly available, deidentified data were collected from the CDC, Google Flu Trends, HealthTweets, and Wikipedia for the 2012-2015 influenza seasons. Bayesian change point analysis was used to detect seasonal changes, or change points, in each of the data sources. Change points in Google, Twitter, and Wikipedia that occurred during the exact week, 1 preceding week, or 1 week after the CDC's change points were compared with the CDC data as the gold standard. All analyses were conducted using the R package "bcp" version 4.0.0 in RStudio version 0.99.484 (RStudio Inc). In addition, sensitivity and positive predictive values (PPV) were calculated for Google, Twitter, and Wikipedia. Results: During the 2012-2015 influenza seasons, Google had a high sensitivity of 92% and a PPV of 85%. Twitter had a low sensitivity of 50% and a low PPV of 43%. Wikipedia had the lowest sensitivity, 33%, and the lowest PPV, 40%. Conclusions: Of the 3 Web-based sources, Google had the best combination of sensitivity and PPV in detecting Bayesian change points in influenza-related data streams. The findings demonstrate that change points in Google, Twitter, and Wikipedia data occasionally aligned well with change points captured in CDC ILI data, yet these sources did not detect all changes in CDC data and should be further studied and developed. PMID:27765731

  20. A framework for fast probabilistic centroid-moment-tensor determination—inversion of regional static displacement measurements

    NASA Astrophysics Data System (ADS)

    Käufl, Paul; Valentine, Andrew P.; O'Toole, Thomas B.; Trampert, Jeannot

    2014-03-01

    The determination of earthquake source parameters is an important task in seismology. For many applications, it is also valuable to understand the uncertainties associated with these determinations, and this is particularly true in the context of earthquake early warning (EEW) and hazard mitigation. In this paper, we develop a framework for probabilistic moment tensor point source inversions in near real time. Our methodology allows us to find an approximation to p(m|d), the conditional probability of source models (m) given observations (d). This is obtained by smoothly interpolating a set of random prior samples, using Mixture Density Networks (MDNs), a class of neural networks whose outputs are the parameters of a Gaussian mixture model. By combining multiple networks as "committees", we are able to obtain a significant improvement in performance over that of a single MDN. Once a committee has been constructed, new observations can be inverted within milliseconds on a standard desktop computer. The method is therefore well suited for use in situations such as EEW, where inversions must be performed routinely and rapidly for a fixed station geometry. To demonstrate the method, we invert regional static GPS displacement data for the 2010 MW 7.2 El Mayor Cucapah earthquake in Baja California to obtain estimates of magnitude, centroid location and depth and focal mechanism. We investigate the extent to which we can constrain moment tensor point sources with static displacement observations under realistic conditions. Our inversion results agree well with published point source solutions for this event, once the uncertainty bounds of each are taken into account.
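
    A minimal 1-D illustration of the MDN-committee idea for a single source parameter (say, depth): raw network outputs are mapped to mixture weights, means, and sigmas, and the committee prediction is the average of the members' mixture densities. The random "raw" vectors stand in for trained network outputs conditioned on the observations; the sizes and averaging rule follow the description above, the rest is assumed.

        import numpy as np

        def mdn_params(raw, n_comp):
            # Map raw network outputs to mixture weights, means and sigmas.
            w, mu, s = np.split(raw, [n_comp, 2 * n_comp])
            w = np.exp(w - w.max())
            return w / w.sum(), mu, np.exp(s)      # softmax weights, positive sigmas

        def mixture_pdf(m, params):
            w, mu, s = params
            return np.sum(w * np.exp(-0.5 * ((m - mu) / s) ** 2)
                          / (np.sqrt(2 * np.pi) * s))

        def committee_pdf(m, members):
            # Committee = average of the members' predictive densities.
            return np.mean([mixture_pdf(m, p) for p in members])

        # Toy: three 'trained' members, each a 2-component 1-D mixture.
        rng = np.random.default_rng(0)
        members = [mdn_params(rng.normal(size=6), 2) for _ in range(3)]
        depths = np.linspace(-3, 3, 7)
        print([round(committee_pdf(z, members), 3) for z in depths])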

  1. Photometric Calibration of Consumer Video Cameras

    NASA Technical Reports Server (NTRS)

    Suggs, Robert; Swift, Wesley, Jr.

    2007-01-01

    Equipment and techniques have been developed to implement a method of photometric calibration of consumer video cameras for imaging objects that are sufficiently narrow or sufficiently distant to be optically equivalent to point or line sources. Heretofore, it has been difficult to calibrate consumer video cameras, especially in cases of image saturation, because they exhibit nonlinear responses with dynamic ranges much smaller than those of scientific-grade video cameras. The present method not only takes this difficulty in stride but also makes it possible to extend effective dynamic ranges to several powers of ten beyond saturation levels. The method will likely be primarily useful in astronomical photometry. There are also potential commercial applications in medical and industrial imaging of point or line sources in the presence of saturation. This development was prompted by the need to measure brightnesses of debris in amateur video images of the breakup of the Space Shuttle Columbia; the purpose of these measurements is to use the brightness values to estimate the relative masses of the debris objects. In most of the images, the brightness of the main body of Columbia was found to exceed the dynamic ranges of the cameras. A similar problem arose a few years earlier in the analysis of video images of Leonid meteors, and the present method is a refined version of the calibration method developed to solve that problem. In this method, one performs an end-to-end calibration of the entire imaging system, including not only the imaging optics and imaging photodetector array but also analog tape recording and playback equipment (if used) and any frame grabber or other analog-to-digital converter (if used). To automatically incorporate the effects of nonlinearity and any other distortions into the calibration, the calibration images are processed in precisely the same manner as the images of meteors, space-shuttle debris, or other objects to be analyzed. The light source used to generate the calibration images is an artificial variable star comprising a Newtonian collimator illuminated by a light source modulated by a rotating variable neutral-density filter. This source acts as a point source whose brightness varies at a known rate. A video camera to be calibrated is aimed at this source. Fixed neutral-density filters are inserted in or removed from the light path as needed to make the video image of the source fluctuate between dark and saturated bright. The resulting video-image data are analyzed by custom software that determines the integrated signal in each video frame and the system response curve (measured output signal versus input brightness). These determinations constitute the calibration, which is thereafter used in automatic, frame-by-frame processing of the data from the video images to be analyzed.
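
    The inversion at the heart of the method can be sketched as follows: the calibration produces a monotonic response curve (integrated counts versus known input brightness), which is then inverted by interpolation to turn measured counts into brightnesses. The numbers below are invented, and the real method's extension beyond saturation is not reproduced here.

        import numpy as np

        # Calibration data: integrated signal vs. known input brightness from
        # the rotating neutral-density 'artificial star' (illustrative values;
        # the response saturates at the bright end).
        brightness = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])  # relative
        signal = np.array([2.0, 6.0, 19.0, 50.0, 110.0, 180.0, 215.0])  # counts

        def counts_to_brightness(c):
            # Invert the measured response curve by interpolation; working in
            # log brightness keeps the interpolation well behaved over decades.
            return np.exp(np.interp(c, signal, np.log(brightness)))

        print(counts_to_brightness(np.array([19.0, 150.0])))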

  2. [Runoff Pollution Experiments of Paddy Fields Under Different Irrigation Patterns].

    PubMed

    Zhou, Jing-wen; Su, Bao-lin; Huang, Ning-bo; Guan, Yu-tang; Zhao, Kun

    2016-03-15

    To study runoff and non-point source pollution from paddy fields and to provide a scientific basis for agricultural water management, paddy plots in Jintan City and Liyang City were chosen for experiments on non-point source pollution, with flood irrigation and intermittent irrigation patterns adopted in this research. The surface water level and rainfall were observed during the paddy growing season, and the runoff amount from the paddy plots and the loads of total nitrogen (TN) and total phosphorus (TP) were calculated by different methods. The results showed that only five of 27 rainfall events, plus one artificial drainage, produced non-point source pollution from the flood-irrigated paddy plot, resulting in a TN export coefficient of 49.4 kg·hm⁻² and a TP export coefficient of 1.0 kg·hm⁻². No runoff occurred from the paddy plot under intermittent irrigation, even for the maximum rainfall of 95.1 mm. Runoff from the paddy fields was affected by the water demand of the paddies and by irrigation and drainage management, and was directly related to the surface water level, the rainfall amount, and the lowest ridge height of the outlets. Compared with flood irrigation, intermittent irrigation could significantly reduce non-point source pollution caused by rainfall or artificial drainage.

  3. Efficiency transfer using the GEANT4 code of CERN for HPGe gamma spectrometry.

    PubMed

    Chagren, S; Tekaya, M Ben; Reguigui, N; Gharbi, F

    2016-01-01

    In this work we apply the GEANT4 code of CERN to calculate the peak efficiency in high-purity germanium (HPGe) gamma spectrometry using three different procedures. The first is a direct calculation. The second corresponds to the usual case of efficiency transfer between two different configurations at constant emission energy, assuming a reference point-detection configuration. The third, a new procedure, consists of transferring the peak efficiency between two detection configurations in which the gamma ray is emitted at different energies, assuming a "virtual" reference point-detection configuration. No pre-optimization of the detector's geometrical characteristics was performed before the transfer, in order to test the ability of the efficiency transfer to reduce the effect of ignorance of their real magnitudes on the quality of the transferred efficiency. The calculated and measured efficiencies were found to be in good agreement for the two investigated methods of efficiency transfer. The agreement obtained shows that the Monte Carlo method, and especially the GEANT4 code, constitutes an efficient tool for obtaining accurate detection efficiency values. The second investigated efficiency transfer procedure is useful for calibrating an HPGe gamma detector at any emission energy for a voluminous source, using the detection efficiency of one point source emitting at a different energy as the reference efficiency. The calculations performed in this work were applied to the measurement exercise of the EUROMET428 project, in which full-energy-peak efficiencies in the energy range 60-2000 keV were evaluated for a typical coaxial p-type HPGe detector and several source configurations: point sources located at various distances from the detector and a cylindrical box containing three matrices.
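
    The transfer itself reduces to a ratio correction, sketched below with invented numbers: the measured reference efficiency is scaled by the Monte Carlo ratio of the target to reference efficiencies. In the third ("virtual reference") procedure, the reference and target may even be at different emission energies, because the GEANT4 ratio carries the energy dependence.

        # Efficiency transfer in one line: scale the measured reference
        # efficiency by the Monte Carlo target-to-reference ratio.
        eff_ref_measured = 0.0123   # point source, reference geometry (invented)
        eff_ref_mc = 0.0119         # GEANT4 value, same reference configuration
        eff_target_mc = 0.0047      # GEANT4 value, voluminous source, other energy

        eff_target = eff_ref_measured * (eff_target_mc / eff_ref_mc)
        print(f"transferred full-energy-peak efficiency: {eff_target:.4f}")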

  4. Improvement and performance evaluation of the perturbation source method for an exact Monte Carlo perturbation calculation in fixed source problems

    NASA Astrophysics Data System (ADS)

    Sakamoto, Hiroki; Yamamoto, Toshihiro

    2017-09-01

    This paper presents an improvement and performance evaluation of the "perturbation source method", one of the Monte Carlo perturbation techniques. The formerly proposed perturbation source method was first-order accurate, although it is known that the method can easily be extended to an exact perturbation method. A transport equation for calculating the exact flux difference caused by a perturbation is solved. A perturbation particle representing the flux difference is explicitly transported in the perturbed system, instead of in the unperturbed system. The source term of the transport equation is defined by the unperturbed flux and the cross section (or optical parameter) changes. The unperturbed flux is provided by an "on-the-fly" technique during the course of the ordinary fixed source calculation for the unperturbed system. A set of perturbation particles is started at the collision points in the perturbed region and tracked until death. For a perturbation in a smaller portion of the whole domain, the efficiency of the perturbation source method can be improved by using a virtual scattering coefficient or cross section in the perturbed region, forcing collisions. Performance is evaluated by comparing the proposed method to other Monte Carlo perturbation methods. Numerical tests performed for particle transport in a two-dimensional geometry reveal that the perturbation source method is less effective than the correlated sampling method for a perturbation in a larger portion of the whole domain. However, for a perturbation in a smaller portion, the perturbation source method outperforms the correlated sampling method. The efficiency depends strongly on the adjustment of the new virtual scattering coefficient or cross section.

  5. Method of Making Large Area Nanostructures

    NASA Technical Reports Server (NTRS)

    Marks, Alvin M.

    1995-01-01

    A method which enables the high speed formation of nanostructures on large area surfaces is described. The method uses a super sub-micron beam writer (Supersebter). The Supersebter uses a large area multi-electrode (Spindt type emitter source) to produce multiple electron beams simultaneously scanned to form a pattern on a surface in an electron beam writer. A 100,000 x 100,000 array of electron point sources, demagnified in a long electron beam writer to simultaneously produce 10 billion nano-patterns on a 1 meter squared surface by multi-electron beam impact on a 1 cm squared surface of an insulating material is proposed.

  6. Analysis of ultrasonically rotating droplet using moving particle semi-implicit and distributed point source methods

    NASA Astrophysics Data System (ADS)

    Wada, Yuji; Yuge, Kohei; Tanaka, Hiroki; Nakamura, Kentaro

    2016-07-01

    Numerical analysis of the rotation of an ultrasonically levitated droplet with a free surface boundary is discussed. The ultrasonically levitated droplet is often reported to rotate owing to the surface tangential component of acoustic radiation force. To observe the torque from an acoustic wave and clarify the mechanism underlying the phenomena, it is effective to take advantage of numerical simulation using the distributed point source method (DPSM) and moving particle semi-implicit (MPS) method, both of which do not require a calculation grid or mesh. In this paper, the numerical treatment of the viscoacoustic torque, which emerges from the viscous boundary layer and governs the acoustical droplet rotation, is discussed. The Reynolds stress traction force is calculated from the DPSM result using the idea of effective normal particle velocity through the boundary layer and input to the MPS surface particles. A droplet levitated in an acoustic chamber is simulated using the proposed calculation method. The droplet is vertically supported by a plane standing wave from an ultrasonic driver and subjected to a rotating sound field excited by two acoustic sources on the side wall with different phases. The rotation of the droplet is successfully reproduced numerically and its acceleration is discussed and compared with those in the literature.

  7. Activity measurements of 55Fe by two different methods

    NASA Astrophysics Data System (ADS)

    da Cruz, Paulo A. L.; Iwahara, Akira; da Silva, Carlos J.; Poledna, Roberto; Loureiro, Jamir S.; da Silva, Monica A. L.; Ruzzarin, Anelise

    2018-03-01

    A calibrated germanium detector and the CIEMAT/NIST liquid scintillation method were used in the standardization of a 55Fe solution from a BIPM key comparison. Commercial cocktails were used in source preparation for activity measurements with the CIEMAT/NIST method, and the measurements were performed in a liquid scintillation counter. In the germanium counting method, standard point sources were prepared to obtain the efficiency calibration curve of the detector, from which the efficiency for the 5.9 keV K X-rays of 55Fe was obtained by interpolation. The activity concentrations obtained were 508.17 ± 3.56 kBq/g and 509.95 ± 16.20 kBq/g for the CIEMAT/NIST and germanium methods, respectively.

  8. Optimal Matched Filter in the Low-number Count Poisson Noise Regime and Implications for X-Ray Source Detection

    NASA Astrophysics Data System (ADS)

    Ofek, Eran O.; Zackay, Barak

    2018-04-01

    Detection of templates (e.g., sources) embedded in low-number count Poisson noise is a common problem in astrophysics. Examples include source detection in X-ray images, γ-rays, UV, neutrinos, and search for clusters of galaxies and stellar streams. However, the solutions in the X-ray-related literature are sub-optimal in some cases by considerable factors. Using the lemma of Neyman–Pearson, we derive the optimal statistics for template detection in the presence of Poisson noise. We demonstrate that, for known template shape (e.g., point sources), this method provides higher completeness, for a fixed false-alarm probability value, compared with filtering the image with the point-spread function (PSF). In turn, we find that filtering by the PSF is better than filtering the image using the Mexican-hat wavelet (used by wavdetect). For some background levels, our method improves the sensitivity of source detection by more than a factor of two over the popular Mexican-hat wavelet filtering. This filtering technique can also be used for fast PSF photometry and flare detection; it is efficient and straightforward to implement. We provide an implementation in MATLAB. The development of a complete code that works on real data, including the complexities of background subtraction and PSF variations, is deferred for future publication.
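
    For a known template and known flux, the Neyman-Pearson statistic for Poisson data reduces to cross-correlating the counts image with log(1 + s/b), which the sketch below evaluates at every pixel via FFTs. This is an illustration of the statistic's form under a flat background and known flux, not the paper's code; the paper also treats unknown flux.

        import numpy as np

        def poisson_detection_map(counts, psf, bkg, flux):
            # Neyman-Pearson statistic for a known template in Poisson noise:
            # sum_i n_i * ln(1 + flux * p_i / bkg) - flux, evaluated at every
            # pixel position via FFT cross-correlation.
            kernel = np.log1p(flux * psf / bkg)
            stat = np.real(np.fft.ifft2(np.fft.fft2(counts) *
                                        np.conj(np.fft.fft2(kernel, counts.shape))))
            return stat - flux        # constant offset from the -sum(s_i) term

        # Toy image: flat background plus one injected point source.
        rng = np.random.default_rng(0)
        yy, xx = np.mgrid[-7:8, -7:8]
        psf = np.exp(-(xx ** 2 + yy ** 2) / 8.0)
        psf /= psf.sum()
        lam = np.full((64, 64), 5.0)
        lam[20:35, 30:45] += 200.0 * psf          # source centered at (27, 37)
        counts = rng.poisson(lam)
        stat = poisson_detection_map(counts, psf, bkg=5.0, flux=200.0)
        # Peak index = source position minus the PSF-center offset (7, 7).
        print(np.unravel_index(stat.argmax(), stat.shape))   # (20, 30)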

  9. Accuracy of a simplified method for shielded gamma-ray skyshine sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bassett, M.S.; Shultis, J.K.

    1989-11-01

    Rigorous transport or Monte Carlo methods for estimating far-field gamma-ray skyshine doses generally are computationally intensive. Consequently, several simplified techniques such as point-kernel methods and methods based on beam response functions have been proposed. For unshielded skyshine sources, these simplified methods have been shown to be quite accurate from comparisons to benchmark problems and to benchmark experimental results. For shielded sources, the simplified methods typically use exponential attenuation and photon buildup factors to describe the effect of the shield. However, the energy and directional redistribution of photons scattered in the shield is usually ignored, i.e., scattered photons are assumed to emerge from the shield with the same energy and direction as the uncollided photons. The accuracy of this shield treatment is largely unknown due to the paucity of benchmark results for shielded sources. In this paper, the validity of such a shield treatment is assessed by comparison to a composite method, which accurately calculates the energy and angular distribution of photons penetrating the shield.

  10. 40 CFR 52.1120 - Identification of plan.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... a new regulation 310 CMR 7.19 “Interim Sulfur-in-Fuel Limitations for Fossil Fuel Utilization... May 22, 1985, including Method 27, record form, potential leak points, major tank truck leak sources...

    (Substantially identical records of this plan identification appear in the 2010, 2011, 2012, and 2014 editions of the CFR.)

  15. Distribution and sources of polyfluoroalkyl substances (PFAS) in the River Rhine watershed.

    PubMed

    Möller, Axel; Ahrens, Lutz; Surm, Renate; Westerveld, Joke; van der Wielen, Frans; Ebinghaus, Ralf; de Voogt, Pim

    2010-10-01

    The concentration profile of 40 polyfluoroalkyl substances (PFAS) in surface water along the River Rhine watershed from Lake Constance to the North Sea was investigated. The aim of the study was to investigate the influence of point as well as diffuse sources, to estimate fluxes of PFAS into the North Sea, and to identify replacement compounds for perfluorooctane sulfonate (PFOS) and perfluorooctanoic acid (PFOA). In addition, an interlaboratory comparison of the method performance was conducted. The PFAS pattern was dominated by perfluorobutane sulfonate (PFBS) and perfluorobutanoic acid (PFBA), with concentrations up to 181 ng/L and 335 ng/L, respectively, which originated from industrial point sources. Fluxes of ΣPFAS were estimated to be approximately 6 tonnes/year, which is much higher than previous estimates. Both the River Rhine and the River Scheldt seem to act as important sources of PFAS into the North Sea.

  16. On singular and highly oscillatory properties of the Green function for ship motions

    NASA Astrophysics Data System (ADS)

    Chen, Xiao-Bo; Xiong Wu, Guo

    2001-10-01

    The Green function used for analysing ship motions in waves is the velocity potential due to a point source pulsating and advancing at a uniform forward speed. The behaviour of this function is investigated, in particular for the case when the source is located at or close to the free surface. In the far field, the Green function is represented by a single integral along one closed dispersion curve and two open dispersion curves. The single integral along the open dispersion curves is analysed based on the asymptotic expansion of a complex error function. The singular and highly oscillatory behaviour of the Green function is captured, which shows that the Green function oscillates with indefinitely increasing amplitude and indefinitely decreasing wavelength, when a field point approaches the track of the source point at the free surface. This sheds some light on the nature of the difficulties in the numerical methods used for predicting the motion of a ship advancing in waves.

  17. Experimental evaluation of the ring focus test for X-ray telescopes using AXAF's technology mirror assembly, MSFC CDDF Project No. H20

    NASA Technical Reports Server (NTRS)

    Zissa, D. E.; Korsch, D.

    1986-01-01

    A test method particularly suited for X-ray telescopes was evaluated experimentally. The method makes use of a focused ring formed by an annular aperture when using a point source at a finite distance. This would supplement measurements of the best-focus image, which is blurred when the test source is at a finite distance. The telescope used was the Technology Mirror Assembly of the Advanced X-ray Astrophysics Facility (AXAF) program. Observed ring image defects could be related to the azimuthal location of their sources in the telescope, even though in this case the predicted sharp ring was obscured by scattering, finite source size, and residual figure errors.

  18. Locating arbitrarily time-dependent sound sources in three dimensional space in real time.

    PubMed

    Wu, Sean F; Zhu, Na

    2010-08-01

    This paper presents a method for locating arbitrarily time-dependent acoustic sources in a free field in real time by using only four microphones. This method is capable of handling a wide variety of acoustic signals, including broadband, narrowband, impulsive, and continuous sound over the entire audible frequency range, produced by multiple sources in three-dimensional (3D) space. Locations of acoustic sources are indicated by their Cartesian coordinates. The underlying principle of this method is a hybrid approach that consists of modeling acoustic radiation from a point source in a free field, triangulation, and de-noising to enhance the signal-to-noise ratio (SNR). Numerical simulations are conducted to study the impacts of SNR, microphone spacing, source distance, and frequency on the spatial resolution and accuracy of source localization. Based on these results, a simple device is fabricated that consists of four microphones mounted on three mutually orthogonal axes at an optimal distance, a four-channel signal conditioner, and a camera. Experiments are conducted in different environments to assess its effectiveness in locating sources that produce arbitrarily time-dependent acoustic signals, regardless of whether a sound source is stationary or moves in space, even when it moves behind the measurement microphones. Practical limitations of this method are discussed.
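
    The triangulation step of such a hybrid scheme can be sketched as a small nonlinear least-squares problem. The geometry and processing below are illustrative assumptions (a reference microphone at the origin and three more on orthogonal axes), not the paper's exact device or algorithm; the TDOAs would come from de-noised cross-correlations.

        import numpy as np
        from scipy.optimize import least_squares

        C = 343.0  # speed of sound in air, m/s

        # Hypothetical geometry: reference mic at the origin, three more on
        # mutually orthogonal axes at spacing d.
        d = 0.3
        mics = np.array([[0, 0, 0], [d, 0, 0], [0, d, 0], [0, 0, d]], float)

        def tdoa_residuals(x, taus):
            # Measured TDOAs (relative to mic 0) minus those predicted for
            # the candidate source position x.
            dists = np.linalg.norm(mics - x, axis=1)
            return (dists[1:] - dists[0]) / C - taus

        def locate(taus, x0=(1.0, 1.0, 1.0)):
            # Triangulation as nonlinear least squares on the TDOA model.
            return least_squares(tdoa_residuals, x0, args=(taus,)).x

        # Usage with noise-free synthetic TDOAs:
        true = np.array([2.0, -1.0, 0.5])
        dists = np.linalg.norm(mics - true, axis=1)
        print(locate((dists[1:] - dists[0]) / C))  # ~ [2, -1, 0.5]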

  19. How to COAAD Images. I. Optimal Source Detection and Photometry of Point Sources Using Ensembles of Images

    NASA Astrophysics Data System (ADS)

    Zackay, Barak; Ofek, Eran O.

    2017-02-01

    Stacks of digital astronomical images are combined in order to increase image depth. The variable seeing conditions, sky background, and transparency of ground-based observations make the coaddition process nontrivial. We present image coaddition methods that maximize the signal-to-noise ratio (S/N) and are optimized for source detection and flux measurement. We show that, for these purposes, the best way to combine images is to apply a matched filter to each image using its own point-spread function (PSF) and only then to sum the images with the appropriate weights. Methods that either match the filter after coaddition or perform PSF homogenization prior to coaddition will result in a loss of sensitivity. We argue that our method provides an increase of between a few percent and 25% in the survey speed of deep ground-based imaging surveys compared with weighted coaddition techniques. We demonstrate this claim using simulated data as well as data from the Palomar Transient Factory data release 2. We present a variant of this coaddition method that is optimal for PSF or aperture photometry. We also provide an analytic formula for calculating the S/N for PSF photometry on single or multiple observations. In the next paper in this series, we present a method for image coaddition in the limit of background-dominated noise, which is optimal for any statistical test or measurement on the constant-in-time image (e.g., source detection, shape or flux measurement, or star-galaxy separation), making the original data redundant. We provide an implementation of these algorithms in MATLAB.
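
    The prescription "matched-filter each epoch with its own PSF, then sum with the appropriate weights" can be sketched in a few lines. The weighting below (transparency over pixel variance) follows the logic stated in the abstract, but the exact normalization of the published estimator is not reproduced, and all names are ours.

        import numpy as np
        from scipy.signal import fftconvolve

        def matched_filter_coadd(images, psfs, sigmas, zps):
            # Filter each epoch with its own PSF (a matched filter), then
            # sum with weights zp_j / sigma_j**2, where zp_j is the flux
            # zero point (transparency) and sigma_j the background noise
            # of epoch j.
            score = np.zeros_like(images[0], dtype=float)
            for img, psf, sig, zp in zip(images, psfs, sigmas, zps):
                filtered = fftconvolve(img, psf[::-1, ::-1], mode="same")
                score += (zp / sig**2) * filtered
            return score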

  20. Registration algorithm of point clouds based on multiscale normal features

    NASA Astrophysics Data System (ADS)

    Lu, Jun; Peng, Zhongtao; Su, Hang; Xia, GuiHua

    2015-01-01

    The point cloud registration technology for obtaining a three-dimensional digital model is widely applied in many areas. To improve the accuracy and speed of point cloud registration, a registration method based on multiscale normal vectors is proposed. The proposed registration method mainly includes three parts: the selection of key points, the calculation of feature descriptors, and the determination and optimization of correspondences. First, key points are selected from the point cloud based on the changes in magnitude of multiscale curvatures obtained by using principal components analysis. Then a feature descriptor for each key point is proposed, which consists of 21 elements based on multiscale normal vectors and curvatures. The correspondences between a pair of point clouds are determined according to the descriptor similarity of key points in the source point cloud and the target point cloud. Correspondences are optimized by using a random sample consensus (RANSAC) algorithm and clustering. Finally, singular value decomposition is applied to the optimized correspondences so that the rigid transformation matrix between the two point clouds is obtained. Experimental results show that the proposed point cloud registration algorithm has a faster calculation speed, higher registration accuracy, and better anti-noise performance.
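
    The final SVD step is standard and worth making concrete. A minimal numpy sketch of the least-squares rigid transform between matched point pairs (the Kabsch construction; a generic implementation, not the paper's code):

        import numpy as np

        def rigid_transform_svd(src, dst):
            # Least-squares rigid transform (R, t) with R @ src_i + t ~ dst_i
            # for matched point pairs; src, dst are (N, 3) arrays.
            mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
            H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflection
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            t = mu_d - R @ mu_s
            return R, t

        # Usage: feed the RANSAC-pruned inlier correspondences to this step.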

  1. 1SXPS: A Deep Swift X-Ray Telescope Point Source Catalog with Light Curves and Spectra

    NASA Technical Reports Server (NTRS)

    Evans, P. A.; Osborne, J. P.; Beardmore, A. P.; Page, K. L.; Willingale, R.; Mountford, C. J.; Pagani, C.; Burrows, D. N.; Kennea, J. A.; Perri, M.; et al.

    2013-01-01

    We present the 1SXPS (Swift-XRT point source) catalog of 151,524 X-ray point sources detected by the Swift-XRT in 8 yr of operation. The catalog covers 1905 sq deg distributed approximately uniformly on the sky. We analyze the data in two ways. First we consider all observations individually, for which we have a typical sensitivity of approximately 3 × 10^-13 erg cm^-2 s^-1 (0.3-10 keV). Then we co-add all data covering the same location on the sky: these images have a typical sensitivity of approximately 9 × 10^-14 erg cm^-2 s^-1 (0.3-10 keV). Our sky coverage is nearly 2.5 times that of 3XMM-DR4, although the catalog is a factor of approximately 1.5 less sensitive. The median position error is 5.5 arcsec (90% confidence), including systematics. Our source detection method improves on that used in previous X-ray Telescope (XRT) catalogs, and we report more than 68,000 new X-ray sources. The goals and observing strategy of the Swift satellite allow us to probe source variability on multiple timescales, and we find approximately 30,000 variable objects in our catalog. For every source we give positions, fluxes, time series (in four energy bands and two hardness ratios), estimates of the spectral properties, spectra and spectral fits for the brightest sources, and variability probabilities in multiple energy bands and timescales.

  2. Modeling the 16 September 2015 Chile tsunami source with the inversion of deep-ocean tsunami records by means of the r-solution method

    NASA Astrophysics Data System (ADS)

    Voronina, Tatyana; Romanenko, Alexey; Loskutov, Artem

    2017-04-01

    The key point in state-of-the-art tsunami forecasting is constructing a reliable tsunami source. In this study, we present an application of an original numerical inversion technique to modeling the source of the 16 September 2015 Chile tsunami. The problem of recovering a tsunami source from remote measurements of the incoming wave by deep-water tsunameters is considered as an inverse problem of mathematical physics in the class of ill-posed problems. The approach is based on least squares and the truncated singular value decomposition. Tsunami wave propagation is treated within the linear shallow-water theory. As in the inverse seismic problem, numerical solutions become unstable due to the presence of noise in real data. The method of r-solutions makes it possible to avoid instability in the solution of the ill-posed problem under study. This method is attractive from the computational point of view, since the main effort is required only once, for calculating the matrix whose columns consist of computed waveforms for each harmonic used as a source (the unknown tsunami source is represented as a truncated series of spatial harmonics over the source area). Furthermore, by analyzing the singular spectrum of the matrix obtained in the course of the numerical calculations, one can estimate in advance the inversion achievable with a given observational system, which allows a more effective disposition of the tsunameters to be proposed through precomputation. In other words, the results obtained allow improving the inversion by selecting the most informative set of available recording stations. The case study of the 6 February 2013 Solomon Islands tsunami highlights the critical role of the arrangement of deep-water tsunameters in obtaining the inversion results. Application of the proposed methodology to the 16 September 2015 Chile tsunami has successfully produced a tsunami source model. The source function recovered by the proposed method can find practical applications both as an initial condition for various optimization approaches and for computer calculation of tsunami wave propagation.
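
    The r-solution is, in essence, a truncated-SVD least-squares inverse. A minimal numpy sketch under that reading (generic; the shallow-water forward modeling that fills the matrix A is not shown, and names are ours):

        import numpy as np

        def r_solution(A, b, r):
            # Truncated-SVD least squares: keep only the r largest singular
            # values of A to stabilize the ill-posed inversion.
            # A: precomputed waveforms, one column per spatial harmonic;
            # b: observed tsunameter records stacked into one vector.
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            k = min(r, int(np.sum(s > 0)))
            return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

        # Inspecting the singular spectrum s shows how well a given station
        # layout constrains the harmonics, as the abstract notes.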

  3. Parametrization of semiempirical models against ab initio crystal data: evaluation of lattice energies of nitrate salts.

    PubMed

    Beaucamp, Sylvain; Mathieu, Didier; Agafonov, Viatcheslav

    2005-09-01

    A method to estimate the lattice energies E(latt) of nitrate salts is put forward. First, E(latt) is approximated by its electrostatic component E(elec). Then, E(elec) is correlated with Mulliken atomic charges calculated on the species that make up the crystal, using a simple equation involving two empirical parameters. The latter are fitted against point-charge estimates of E(elec) computed on available X-ray structures of nitrate crystals. The correlation thus obtained yields lattice energies within 0.5 kJ/g of the point-charge values. A further assessment of the method against experimental data suggests that the main source of error arises from the point-charge approximation.
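
    The calibration step is a plain two-parameter least-squares fit. A toy sketch with synthetic data; the descriptor x and the linear form are stand-ins of ours, since the paper's exact charge-based expression is not reproduced here:

        import numpy as np

        # Synthetic stand-ins: one charge-based descriptor value and one
        # point-charge E_elec estimate per crystal structure (hypothetical).
        rng = np.random.default_rng(1)
        x = rng.uniform(0.5, 2.0, 20)
        E_pc = 3.1 * x + 0.4 + rng.normal(0.0, 0.05, 20)

        a, b = np.polyfit(x, E_pc, 1)   # the two fitted empirical parameters
        E_pred = a * x + b              # lattice-energy estimate via E_elec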

  4. Uncertainty Analyses for Back Projection Methods

    NASA Astrophysics Data System (ADS)

    Zeng, H.; Wei, S.; Wu, W.

    2017-12-01

    So far, few comprehensive error analyses for back projection methods have been conducted, although it is evident that high-frequency seismic waves can be easily affected by earthquake depth, focal mechanisms, and the Earth's 3D structure. Here we perform 1D and 3D synthetic tests for two back projection methods, MUltiple SIgnal Classification (MUSIC) (Meng et al., 2011) and Compressive Sensing (CS) (Yao et al., 2011). We generate synthetics for both point sources and finite rupture sources with different depths and focal mechanisms, as well as 1D and 3D structures in the source region. The 3D synthetics are generated through a hybrid scheme combining the Direct Solution Method and the Spectral Element Method. We then back project the synthetic data using MUSIC and CS. The synthetic tests show that the depth phases can be back projected as artificial sources both in space and time. For instance, for a source depth of 10 km, back projection gives a strong signal 8 km away from the true source. Such bias increases with depth, e.g., the error in horizontal location can exceed 20 km for a depth of 40 km. If the array is located near the nodal direction of the direct P-waves, the teleseismic P-waves are dominated by the depth phases; back projections are then actually imaging the reflection points of the depth phases rather than the rupture front. Besides depth phases, the strong and long-lasting coda waves caused by 3D effects near the trench can introduce additional complexities, which we also test here. The strength contrast of different frequency contents in the rupture models also produces some variation in the back projection results. In the synthetic tests, MUSIC and CS give consistent results. While MUSIC is more computationally efficient, CS works better for sparse arrays. In summary, our analyses indicate that the impact of the various factors mentioned above should be taken into consideration when interpreting back projection images, before they can be used to infer earthquake rupture physics.
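
    For readers unfamiliar with the family of methods being tested, the baseline that MUSIC and CS refine is plain shift-and-stack back projection, sketched below. This is a generic toy of ours, not either paper's implementation:

        import numpy as np

        def back_project(traces, dt, travel_times):
            # Align station traces on predicted P travel times for each
            # trial grid point and stack; peaks mark apparent radiators,
            # including depth-phase artifacts of the kind tested above.
            # traces: (n_sta, n_samp); travel_times: (n_grid, n_sta) in s.
            n_grid = travel_times.shape[0]
            power = np.zeros(n_grid)
            for g in range(n_grid):
                shifts = np.round(travel_times[g] / dt).astype(int)
                stack = sum(np.roll(traces[s], -shifts[s])
                            for s in range(traces.shape[0]))
                power[g] = np.max(stack ** 2)
            return power  # np.roll wraps around; pad real traces first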

  5. Phase 3 experiments of the JAERI/USDOE collaborative program on fusion blanket neutronics. Volume 1: Experiment

    NASA Astrophysics Data System (ADS)

    Oyama, Yukio; Konno, Chikara; Ikeda, Yujiro; Maekawa, Fujio; Kosako, Kazuaki; Nakamura, Tomoo; Maekawa, Hiroshi; Youssef, Mahmoud Z.; Kumar, Anil; Abdou, Mohamed A.

    1994-02-01

    A pseudo-line source has been realized by using an accelerator-based D-T point neutron source. The pseudo-line source is obtained by time averaging of a continuously moving point source or by superposition of finely distributed point sources. The line source is utilized for fusion blanket neutronics experiments with an annular geometry so as to simulate a part of a tokamak reactor. The source neutron characteristics were measured for the two operational modes of the line source, continuous and step-wise, with activation foils and NE213 detectors, respectively. In order to provide a source condition for the subsequent calculational analysis of the annular blanket experiment, the neutron source characteristics were calculated by a Monte Carlo code. The reliability of the Monte Carlo calculation was confirmed by comparison with the measured source characteristics. The annular blanket system was rectangular in shape with an inner cavity. The annular blanket consisted of a 15 mm-thick first wall (SS304) and a 406 mm-thick breeder zone with Li2O on the inside and Li2CO3 on the outside. The line source was produced at the center of the inner cavity by moving the annular blanket system over a span of 2 m. Three annular blanket configurations were examined: the reference blanket, the blanket covered with a 25 mm-thick graphite armor, and the armor-covered blanket with a large opening. The neutronics parameters of tritium production rate, neutron spectrum, and activation reaction rate were measured with specially developed techniques such as a multi-detector data acquisition system, a spectrum weighting function method, and a ramp-controlled high-voltage system. The present experiment provides unique data for a more advanced benchmark to test the reliability of neutronics design calculations for a realistic tokamak reactor.

  6. [A landscape ecological approach for urban non-point source pollution control].

    PubMed

    Guo, Qinghai; Ma, Keming; Zhao, Jingzhu; Yang, Liu; Yin, Chengqing

    2005-05-01

    Urban non-point source pollution is a new problem that has appeared with the rapid development of urbanization. The particular character of urban land use and the increase in impervious surface area make urban non-point source pollution differ from agricultural non-point source pollution and more difficult to control. Best Management Practices (BMPs) are the effective practices commonly applied to control urban non-point source pollution, mainly adopting local remediation practices to control the pollutants in surface runoff. Because of the close relationship between urban land use patterns and non-point source pollution, it is rational to combine landscape ecological planning with local BMPs to control urban non-point source pollution. This requires, first, analyzing and evaluating the influence of landscape structure on water bodies, pollution sources, and pollutant removal processes, in order to define the relationships between landscape spatial pattern and non-point source pollution and to identify the key polluted fields; and second, adjusting existing landscape structures or adding new landscape elements to form a new landscape pattern, and integrating landscape planning and management by applying BMPs within the planning so as to improve urban landscape heterogeneity and control urban non-point source pollution.

  7. Investigating the effects of methodological expertise and data randomness on the robustness of crowd-sourced SfM terrain models

    NASA Astrophysics Data System (ADS)

    Ratner, Jacqueline; Pyle, David; Mather, Tamsin

    2015-04-01

    Structure-from-motion (SfM) techniques are now widely available to quickly and cheaply generate digital terrain models (DTMs) from optical imagery. Topography can change rapidly during disaster scenarios and change the nature of local hazards, making ground-based SfM a particularly useful tool in hazard studies due to its low cost, accessibility, and potential for immediate deployment. Our study is designed to serve as an analogue to potential real-world use of the SfM method if employed for disaster risk reduction purposes. Experiments at a volcanic crater in Santorini, Greece, used crowd-sourced data collection to demonstrate the impact of user expertise and randomization of SfM data on the resultant DTM. Three groups of participants representing variable expertise levels utilized 16 different camera models, including four camera phones, to collect 1001 total photos in one hour of data collection. Datasets collected by each group were processed using the free and open source software VisualSFM. The point densities and overall quality of the resultant SfM point clouds were compared against each other and also against a LiDAR dataset for reference to the industry standard. Our results show that the point clouds are resilient to changes in user expertise and collection method and are comparable or even preferable in data density to LiDAR. We find that 'crowd-sourced' data collected by a moderately informed general public yields topography results comparable to those produced with data collected by experts. This means that in a real-world scenario involving participants with a diverse range of expertise levels, topography models could be produced from crowd-sourced data quite rapidly and to a very high standard. This could be beneficial to disaster risk reduction as a relatively quick, simple, and low-cost method to attain a rapidly updated knowledge of terrain attributes, useful for the prediction and mitigation of many natural hazards.

  8. Application of Monte Carlo Method for Evaluation of Uncertainties of ITS-90 by Standard Platinum Resistance Thermometer

    NASA Astrophysics Data System (ADS)

    Palenčár, Rudolf; Sopkuliak, Peter; Palenčár, Jakub; Ďuriš, Stanislav; Suroviak, Emil; Halaj, Martin

    2017-06-01

    Evaluation of the uncertainties of temperature measurement by a standard platinum resistance thermometer calibrated at the defining fixed points according to ITS-90 is a problem that can be solved in different ways. The paper presents a procedure based on the propagation of distributions using the Monte Carlo method. The procedure employs the generation of pseudo-random numbers for the input variables of the resistances at the defining fixed points, assuming a multivariate Gaussian distribution for the input quantities. This makes it possible to take into account the correlations among the resistances at the defining fixed points. The assumption of a Gaussian probability density function is acceptable with respect to the several sources of uncertainty of the resistances. In the case of uncorrelated resistances at the defining fixed points, the method is applicable to any probability density function. Validation of the law of propagation of uncertainty using the Monte Carlo method is presented on the example of specific data for a 25 Ω standard platinum resistance thermometer in the temperature range from 0 to 660 °C. Using this example, we demonstrate the suitability of the method by validating its results.
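
    The propagation-of-distributions procedure is easy to sketch. Below, correlated Gaussian draws for the fixed-point resistances are pushed through a measurement model; the model, values, and names are illustrative placeholders of ours, not the paper's data:

        import numpy as np

        def mc_propagate(mu, cov, model, n=200_000, seed=0):
            # Draw correlated Gaussian inputs (e.g., SPRT resistances at
            # the defining fixed points) and push them through the
            # measurement model; returns estimate and standard uncertainty.
            rng = np.random.default_rng(seed)
            samples = rng.multivariate_normal(mu, cov, size=n)
            out = np.apply_along_axis(model, 1, samples)
            return out.mean(), out.std(ddof=1)

        # Toy usage with a hypothetical resistance-ratio model W = R / R_tpw:
        mu = np.array([25.000, 99.300])                  # ohm, illustrative
        cov = np.array([[1e-8, 5e-9], [5e-9, 4e-8]])     # illustrative
        W, u_W = mc_propagate(mu, cov, lambda r: r[1] / r[0])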

  9. Semi-Tomographic Gamma Scanning Technique for Non-Destructive Assay of Radioactive Waste Drums

    NASA Astrophysics Data System (ADS)

    Gu, Weiguo; Rao, Kaiyuan; Wang, Dezhong; Xiong, Jiemei

    2016-12-01

    Segmented gamma scanning (SGS) and tomographic gamma scanning (TGS) are two traditional detection techniques for low- and intermediate-level radioactive waste drums. This paper proposes a detection method named semi-tomographic gamma scanning (STGS) to avoid the poor detection accuracy of SGS and to shorten the detection time of TGS. The method and its algorithm synthesize the principles of SGS and TGS: each segment is divided into annular voxels, and tomography is used in the radiation reconstruction. The accuracy of STGS is verified by experiments and simulations simultaneously for 208-liter standard waste drums containing three types of nuclides. Cases of a single point source or multiple point sources, and of uniform or nonuniform matrix materials, are employed for comparison. The results show that STGS exhibits a large improvement in detection performance, with the reconstruction error and statistical bias reduced by one quarter to one third or less for most cases when compared with SGS.

  10. Monitoring trends in bird populations: addressing background levels of annual variability in counts

    Treesearch

    Jared Verner; Kathryn L. Purcell; Jennifer G. Turner

    1996-01-01

    Point counting has been widely accepted as a method for monitoring trends in bird populations. Using a rigorously standardized protocol at 210 counting stations at the San Joaquin Experimental Range, Madera Co., California, we have been studying sources of variability in point counts of birds. Vegetation types in the study area have not changed during the 11 years of...

  11. Real-time Estimation of Fault Rupture Extent for Recent Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Yamada, M.; Mori, J. J.

    2009-12-01

    Current earthquake early warning systems assume point source models for the rupture. However, for large earthquakes, the fault rupture length can be of the order of tens to hundreds of kilometers, and the prediction of ground motion at a site requires approximate knowledge of the rupture geometry. Early warning information based on a point source model may underestimate the ground motion at a site if a station is close to the fault but distant from the epicenter. We developed an empirical function to classify seismic records into near-source (NS) or far-source (FS) records based on past strong motion records (Yamada et al., 2007). Here, we defined the near-source region as the area with a fault rupture distance of less than 10 km. Given ground motion records at a station, the probability that the station is located in the near-source region is P = 1/(1 + exp(-f)), where f = 6.046 log10(Za) + 7.885 log10(Hv) - 27.091, and Za and Hv denote the peak values of the vertical acceleration and horizontal velocity, respectively. Each observation provides the probability that its station lies in the near-source region, so the resolution of the proposed method depends on the station density. The fault rupture location information is therefore a set of points at the station locations; for practical purposes, however, the two-dimensional configuration of the fault is required to compute the ground motion at a site. In this study, we extend the NS/FS classification methodology to characterize two-dimensional fault geometries and apply it to strong motion data observed in recent large earthquakes. We apply a cosine-shaped smoothing function to the probability distribution of near-source stations, converting the pointwise fault locations into two-dimensional fault information. The estimated rupture geometry for the 2007 Niigata-ken Chuetsu-oki earthquake 10 seconds after the origin time is shown in Figure 1. Furthermore, we illustrate our method with strong motion data from the 2007 Noto-hanto earthquake, the 2008 Iwate-Miyagi earthquake, and the 2008 Wenchuan earthquake. The on-going rupture extent can be estimated for all datasets as the rupture propagates. For earthquakes with magnitude around 7.0, the determination of the fault parameters converges to the final geometry within 10 seconds.
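
    The classifier above is a one-line logistic model and can be coded directly from the abstract (units for Za and Hv follow Yamada et al., 2007, and are not restated here):

        import numpy as np

        def near_source_probability(Za, Hv):
            # Logistic NS/FS classifier from the abstract: Za is the peak
            # vertical acceleration and Hv the peak horizontal velocity.
            f = 6.046 * np.log10(Za) + 7.885 * np.log10(Hv) - 27.091
            return 1.0 / (1.0 + np.exp(-f))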

  12. A quantitative evaluation of two methods for preserving hair samples

    USGS Publications Warehouse

    Roon, David A.; Waits, L.P.; Kendall, K.C.

    2003-01-01

    Hair samples are an increasingly important DNA source for wildlife studies, yet optimal storage methods and DNA degradation rates have not been rigorously evaluated. We tested amplification success rates over a one-year storage period for DNA extracted from brown bear (Ursus arctos) hair samples preserved using silica desiccation and freezing at -20 °C. For three nuclear DNA microsatellites, success rates decreased significantly after a six-month time point, regardless of storage method. For a 1000 bp mitochondrial fragment, a similar decrease occurred after a two-week time point. Minimizing delays between collection and DNA extraction will maximize success rates for hair-based noninvasive genetic sampling projects.

  13. Ghost Images in Helioseismic Holography? Toy Models in a Uniform Medium

    NASA Astrophysics Data System (ADS)

    Yang, Dan

    2018-02-01

    Helioseismic holography is a powerful technique used to probe the solar interior based on estimations of the 3D wavefield. Porter-Bojarski holography, a well-established method used in acoustics to recover sources and scatterers in 3D, is also an estimation of the wavefield, and hence it has the potential of being applied to helioseismology. Here we present a proof-of-concept study, in which we compare helioseismic holography and Porter-Bojarski holography under the assumption that the waves propagate in a homogeneous medium. We consider the problem of locating a point source of wave excitation inside a sphere. Under these assumptions, we find that the two imaging methods have the same capability of locating the source, with the exception that helioseismic holography suffers from "ghost images" (i.e., artificial peaks away from the source location). We conclude that Porter-Bojarski holography may improve the method currently used in helioseismology.

  14. Attenuation Tomography of Northern California and the Yellow Sea / Korean Peninsula from Coda-source Normalized and Direct Lg Amplitudes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ford, S R; Dreger, D S; Phillips, W S

    2008-07-16

    Inversions for regional attenuation (1/Q) of Lg are performed in two different regions. The path attenuation component of the Lg spectrum is isolated using the coda-source normalization method, which corrects the Lg spectral amplitude for the source using the stable, coda-derived source spectra. Tomographic images of Northern California agree well with one-dimensional (1-D) Lg Q estimates from five different methods. We note there is some tendency for tomographic smoothing to increase Q relative to targeted 1-D methods; for example, in the San Francisco Bay Area, which has high attenuation relative to the rest of its region, Q is over-estimated by approximately 30. Coda-source normalized attenuation tomography is also carried out for the Yellow Sea/Korean Peninsula (YSKP), where the output parameters (site, source, and path terms) are compared with those from the amplitude tomography method of Phillips et al. (2005) as well as a new method that ties the source term to the MDAC formulation (Walter and Taylor, 2001). The source terms show similar scatter between the coda-source corrected and MDAC source perturbation methods, whereas the amplitude method has the greatest correlation with the estimated true source magnitude. The coda-source spectra better represent the source than the estimated magnitude does, which could be the cause of the scatter. The similarity of the source terms between the coda-source and MDAC-linked methods shows that the latter may approximate the effect of the former, and therefore could be useful in regions without coda-derived sources. The site terms from the MDAC-linked method correlate slightly with global Vs30 measurements. While the coda-source and amplitude ratio methods do not correlate with Vs30 measurements, they do correlate with one another, which provides confidence that the two methods are consistent. The path Q^-1 values are very similar between the coda-source and amplitude ratio methods, except for small differences in the Daxing'anling Mountains in the northern YSKP. However, there is one large difference between the MDAC-linked method and the others in the region near stations TJN and INCN, which points to site effects as the cause of the difference.

  15. Deconvolution for three-dimensional acoustic source identification based on spherical harmonics beamforming

    NASA Astrophysics Data System (ADS)

    Chu, Zhigang; Yang, Yang; He, Yansong

    2015-05-01

    Spherical Harmonics Beamforming (SHB) with solid spherical arrays has become a particularly attractive tool for acoustic source identification in cabin environments. However, it presents some intrinsic limitations, specifically poor spatial resolution and severe sidelobe contamination. This paper focuses on overcoming these limitations by deconvolution. First, a new formulation is proposed that expresses SHB's output as a convolution of the true source strength distribution and the point spread function (PSF), defined as SHB's response to a unit-strength point source. Additionally, the typical deconvolution methods initially suggested for planar arrays, the deconvolution approach for the mapping of acoustic sources (DAMAS), nonnegative least-squares (NNLS), Richardson-Lucy (RL), and CLEAN, are successfully adapted to SHB and are capable of producing highly resolved and deblurred maps. Finally, the merits of the deconvolution methods are validated, and the relationships of the reconstructed source strength and pressure contribution vs. focus distance are explored both with computer simulations and experimentally. Several interesting results have emerged from this study: (1) compared with SHB, DAMAS, NNLS, RL, and CLEAN can not only improve the spatial resolution dramatically but also reduce or even eliminate the sidelobes, allowing clear and unambiguous identification of a single source or incoherent sources. (2) The suitability of RL for coherent sources is highest, followed by DAMAS and NNLS, while that of CLEAN is lowest due to its failure to suppress sidelobes. (3) These two results hold whether or not the real distance from the source to the array center equals the assumed distance, referred to as the focus distance. (4) The true source strength can be recovered by dividing the reconstructed one by a coefficient that is the square of the focus distance divided by the real distance from the source to the array center. (5) The reconstructed pressure contribution is almost unaffected by the focus distance, always approximating the true one. This study is of great significance for the accurate localization and quantification of acoustic sources in cabin environments.
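
    Of the four methods, Richardson-Lucy is the easiest to sketch once the map is written as b ≈ PSF ∗ q. The toy below assumes a shift-invariant PSF (SHB's PSF actually varies with focus direction, which the full DAMAS-type formulations accommodate); names are ours:

        import numpy as np
        from scipy.signal import fftconvolve

        def richardson_lucy(b, psf, n_iter=50):
            # Iteratively update a nonnegative source map q so that
            # psf * q (convolution) reproduces the beamforming map b.
            q = np.full_like(b, b.mean(), dtype=float)
            psf_flip = psf[::-1, ::-1]
            for _ in range(n_iter):
                denom = np.maximum(fftconvolve(q, psf, mode="same"), 1e-12)
                q *= fftconvolve(b / denom, psf_flip, mode="same")
            return q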

  16. Multi-laboratory evaluations of the performance of Catellicoccus marimammalium PCR assays developed to target gull fecal sources

    EPA Science Inventory

    Here we report results from a multi-laboratory (n=11) evaluation of four different PCR methods targeting the 16S rRNA gene of Catellicoccus marimammalium used to detect fecal contamination from birds in coastal environments. The methods included conventional end-point PCR, a SYBR...

  17. Community shift of biofilms developed in a full-scale drinking water distribution system switching from different water sources.

    PubMed

    Li, Weiying; Wang, Feng; Zhang, Junpeng; Qiao, Yu; Xu, Chen; Liu, Yao; Qian, Lin; Li, Wenming; Dong, Bingzhi

    2016-02-15

    The bacterial community of biofilms in drinking water distribution systems (DWDS) fed by various water sources has rarely been reported. In this research, biofilms were sampled at three points (A, B, and C) during the river water source phase (phase I), the interim period (phase II), and the reservoir water source phase (phase III), and the biofilm community was determined using 454 pyrosequencing. Results showed that microbial diversity declined in phase II but increased in phase III. The primary phylum was Proteobacteria during all three phases, while the dominant class at points A and B was Betaproteobacteria (>49%) during all phases; at point C, however, the dominant class changed to Holophagae in phase II (62.7%) and Actinobacteria in phase III (35.6%), which was closely related to its water quality. A more remarkable community shift was found at the genus level. In addition, the analysis showed that water quality parameters could jointly have a significant effect on microbial diversity, while the nutrient composition (e.g., C/N ratio) of the water environment might determine the microbial community. Furthermore, Mycobacterium spp. and Pseudomonas spp. were detected in the biofilm, which warrants attention. This study revealed that switching water sources has a substantial impact on the biofilm community.

  18. Integration of Geodata in Documenting Castle Ruins

    NASA Astrophysics Data System (ADS)

    Delis, P.; Wojtkowska, M.; Nerc, P.; Ewiak, I.; Lada, A.

    2016-06-01

    Textured three-dimensional models are currently one of the standard ways of representing the results of photogrammetric work. A realistic 3D model combines the geometrical relations between the structure's elements with realistic textures of each element. Data used to create 3D models of structures can be derived from many different sources. The most commonly used tools for documentation purposes are digital cameras and, nowadays, terrestrial laser scanning (TLS). Integration of data acquired from different sources allows the modelling and visualization of 3D models of historical structures. An additional benefit of data integration is the possibility of filling in missing points, for example in point clouds. The paper shows the possibility of integrating data from terrestrial laser scanning with digital imagery and presents an analysis of the accuracy of the presented methods. The paper describes results obtained from raw data consisting of a point cloud measured using terrestrial laser scanning with a Leica ScanStation2 and digital imagery taken with a Kodak DCS Pro 14N camera. The studied structure is the ruins of the Iłża castle in Poland.

  19. Cylindrical angular spectrum using Fourier coefficients of point light source and its application to fast hologram calculation.

    PubMed

    Oh, Seungtaik; Jeong, Il Kwon

    2015-11-16

    We introduce a new, simple analytic formula for the Fourier coefficient of the 3D field distribution of a point light source, used to generate a cylindrical angular spectrum that captures the object wave over 360° in the 3D Fourier space. Conceptually, the cylindrical angular spectrum can be understood as a cylindrical version of the omnidirectional spectral approach of Sando et al. Our Fourier coefficient formula is based on the intuitive observation that a point light source radiates uniformly in all directions. The formula is defined over all frequency vectors lying on the entire sphere in the 3D Fourier space and is more natural and computationally more efficient for all-around recording of the object wave than the previous omnidirectional spectral method. A generalized frequency-based occlusion culling method for arbitrarily complex objects is also proposed to enhance the 3D quality of the hologram. As a practical application of the cylindrical angular spectrum, an interactive hologram example is presented together with implementation details.

  20. Improved bioluminescence and fluorescence reconstruction algorithms using diffuse optical tomography, normalized data, and optimized selection of the permissible source region

    PubMed Central

    Naser, Mohamed A.; Patterson, Michael S.

    2011-01-01

    Reconstruction algorithms are presented for two-step solutions of the bioluminescence tomography (BLT) and fluorescence tomography (FT) problems. In the first step, a continuous-wave (cw) diffuse optical tomography (DOT) algorithm is used to reconstruct the tissue optical properties, assuming known anatomical information provided by x-ray computed tomography or other methods. Minimization problems are formed based on L1-norm objective functions, where normalized values of the light fluence rates and the corresponding Green's functions are used. An iterative minimization solution then shrinks the permissible region where sources are allowed by selecting points with a higher probability of contributing to the source distribution; throughout this process, the permissible region shrinks from the entire object to just a few points. The optimum reconstructed bioluminescence and fluorescence distributions are chosen to be the results of the iteration corresponding to the permissible region where the objective function has its global minimum. This provides efficient BLT and FT reconstruction algorithms without the need for a priori information about the bioluminescence sources or the fluorophore concentration. Multiple small sources and large distributed sources can be reconstructed with good accuracy for the location and the total source power for BLT, and the total number of fluorophore molecules for FT. For non-uniform distributed sources, the size and magnitude become degenerate due to the degrees of freedom available for possible solutions. However, increasing the number of data points by increasing the number of excitation sources can improve the accuracy of reconstruction for non-uniform fluorophore distributions. PMID:21326647
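
    The shrinking-permissible-region idea can be sketched generically. The toy below restricts a nonnegative least-squares source reconstruction to a shrinking candidate set and keeps the iterate with the smallest residual; the L1 objective and normalization details of the published algorithm are simplified away, and all names are ours:

        import numpy as np
        from scipy.optimize import nnls

        def shrink_region_reconstruction(G, phi, keep_frac=0.7, min_pts=3):
            # Solve a nonnegative reconstruction restricted to a candidate
            # set, shrink the set to the strongest contributors, and keep
            # the iterate whose residual is globally smallest.
            # G: (n_meas, n_nodes) Green's-function matrix from the DOT step;
            # phi: measured (normalized) fluence data.
            region = np.arange(G.shape[1])
            best_res, best = np.inf, None
            while True:
                q, res = nnls(G[:, region], phi)
                if res < best_res:
                    best_res, best = res, (region.copy(), q.copy())
                new_len = max(min_pts, int(keep_frac * len(region)))
                if new_len == len(region):
                    break
                region = region[np.argsort(q)[::-1][:new_len]]
            return best  # (indices of permissible points, source strengths)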

  1. Improved moving source photometry with TRIPPy

    NASA Astrophysics Data System (ADS)

    Alexandersen, Mike; Fraser, Wesley Cristopher

    2017-10-01

    Photometry of moving sources is more complicated than for stationary sources, because moving sources trail their signal over more pixels than a point source of the same magnitude. Using a circular aperture of the same size as would be appropriate for point sources can cut out a large amount of flux if a moving source moves substantially relative to the size of the aperture during the exposure, resulting in underestimated fluxes. Using a large circular aperture can mitigate this issue, at the cost of a significantly reduced signal-to-noise ratio compared to a point source, as a result of the inclusion of a larger background region within the aperture. Trailed Image Photometry in Python (TRIPPy) solves this problem by using a pill-shaped aperture: the traditional circular aperture is sliced in half perpendicular to the direction of motion, and the halves are separated by a rectangle as long as the total motion of the source during the exposure. TRIPPy can also calculate the appropriate aperture correction (which depends on both the radius and the trail length of the pill-shaped aperture), and has features for selecting good PSF stars, creating a PSF model (convolved Moffat profile plus lookup table), and selecting a custom sky-background area to ensure no other sources contribute to the background estimate. In this poster, we present an overview of TRIPPy's features and demonstrate the improvements over photometry obtained by other methods, with examples from real projects where TRIPPy has been implemented to obtain the best-possible photometric measurements of Solar System objects. While TRIPPy has so far mainly been used for Trans-Neptunian Objects, the improvement from using the pill-shaped aperture increases with source motion, making TRIPPy highly relevant for asteroid and centaur photometry as well.
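
    The pill geometry itself is simple to reproduce: a rectangle of the trail length capped by two half-circles, i.e. every pixel within one radius of the trail segment. A minimal numpy sketch of such a mask (our own construction for illustration; TRIPPy's actual API and aperture-correction machinery differ):

        import numpy as np

        def pill_mask(shape, center, radius, trail_len, angle):
            # Boolean pill aperture: every pixel within `radius` of a trail
            # segment of length `trail_len` through `center` at `angle`
            # (radians). center is (x, y) in pixel coordinates.
            yy, xx = np.indices(shape, dtype=float)
            dx, dy = xx - center[0], yy - center[1]
            u = dx * np.cos(angle) + dy * np.sin(angle)   # along motion
            v = -dx * np.sin(angle) + dy * np.cos(angle)  # across motion
            du = np.maximum(np.abs(u) - trail_len / 2.0, 0.0)
            return du**2 + v**2 <= radius**2

        # Flux ~ counts[pill_mask(counts.shape, (x, y), r, L, theta)].sum(),
        # minus a sky estimate from a source-free background region.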

  2. Using Lunar Observations to Validate Pointing Accuracy and Geolocation, Detector Sensitivity Stability and Static Point Response of the CERES Instruments

    NASA Technical Reports Server (NTRS)

    Daniels, Janet L.; Smith, G. Louis; Priestley, Kory J.; Thomas, Susan

    2014-01-01

    Validation of in-orbit instrument performance is a function of stability in both the instrument and the calibration source. This paper describes a method using lunar observations, scanning near full moon, by the Clouds and the Earth's Radiant Energy System (CERES) instruments. The Moon offers an external source whose signal variance is predictable and non-degrading. From 2006 to the present, these in-orbit observations have been standardized and compiled for Flight Models 1 and 2 aboard the Terra satellite, for Flight Models 3 and 4 aboard the Aqua satellite, and, beginning in 2012, for Flight Model 5 aboard Suomi-NPP. The instrument performance measures studied are detector sensitivity stability, pointing accuracy, and the static detector point response function. This validation method shows trends per CERES data channel of 0.8% per decade or less for Flight Models 1-4. Using instrument gimbal data and the computed lunar position, the pointing error of each detector telescope and the accuracy and consistency of the alignment between the detectors can be determined. The maximum pointing error was 0.2° in azimuth and 0.17° in elevation, which corresponds to a geolocation error near nadir of 2.09 km. With the exception of one detector, all instruments were found to have consistent detector alignment from 2006 to the present. All alignment errors were within 0.1°, with most detector telescopes showing a consistent alignment offset of less than 0.02°.

  3. On the possibility of singularities in the acoustic field of supersonic sources when BEM is applied to a wave equation

    NASA Technical Reports Server (NTRS)

    De Bernardis, E.; Farassat, F.

    1989-01-01

    Using a time domain method based on the Ffowcs Williams-Hawkings equation, a reliable explanation is provided for the origin of singularities observed in the numerical prediction of supersonic propeller noise. In the last few years, Tam and, more recently, Amiet have analyzed the phenomenon from different points of view. The method proposed here offers a clear interpretation of the singularities based on a new description of the sources, related to the behavior of lines along which the propeller blade surface exhibits a slope discontinuity.

  4. Measuring Seebeck Coefficient

    NASA Technical Reports Server (NTRS)

    Snyder, G. Jeffrey (Inventor)

    2015-01-01

    A high-temperature Seebeck coefficient measurement apparatus and method, with various features to minimize typical sources of error, are described. Common sources of temperature and voltage measurement error that may impact measurement accuracy are identified and reduced. Applying the identified principles, a high-temperature Seebeck measurement apparatus and method employing a uniaxial, four-point geometry is described that operates from room temperature up to 1300 K. These techniques for non-destructive Seebeck coefficient measurements are simple to operate and are suitable for bulk samples with a broad range of physical types and shapes.

  5. Fabrication of Fiber Optic Grating Apparatus and Method

    NASA Technical Reports Server (NTRS)

    Wang, Ying (Inventor); Sharma, Anup (Inventor); Grant, Joseph (Inventor)

    2005-01-01

    An apparatus and method for forming a Bragg grating on an optical fiber, using a phase mask to diffract a beam of coherent energy and a lens combined with a pair of mirrors to produce two symmetrical virtual point sources of coherent energy in the plane of the optical fiber. The two virtual light sources produce an interference pattern along the optical fiber. In a further embodiment, the period of the pattern, and therefore the Bragg wavelength of the grating applied to the fiber, is varied with the position of the optical fiber relative to the lens.

  6. SU-G-201-13: Investigation of Dose Variation Induced by HDR Ir-192 Source Global Shift Within the Varian Ring Applicator Using Monte Carlo Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Y; Cai, J; Meltsner, S

    2016-06-15

    Purpose: The Varian tandem and ring applicators are used to deliver HDR Ir-192 brachytherapy for cervical cancer. The source path within the ring is hard to predict due to the larger interior ring lumen. Some studies have shown that the source can be several millimeters from the planned positions, while other studies demonstrated minimal dosimetric impact. A global shift can be applied to limit the effect of positioning offsets. The purpose of this study was to assess the necessity of implementing a global source shift using Monte Carlo (MC) simulations. Methods: The MCNP5 radiation transport code was used for all MC simulations. To accommodate TG-186 guidelines and eliminate inter-source attenuation, a BrachyVision plan with 10 dwell positions (0.5 cm step size) was simulated, for simplicity, as the summation of 10 individual sources with equal dwell times. To simplify the study, the tandem was also excluded from the MC model. Global shifts of ±0.1, ±0.3, and ±0.5 cm were then simulated, distal and proximal from the reference positions. Dose was scored in water for all MC simulations and was normalized to 100% at the normalization point 0.5 cm from the cap in the ring plane. For dose comparison, Point A was 2 cm caudal from the buildup cap and 2 cm lateral on either side of the ring axis. With seventy simulations, 10^8 photon histories gave statistical uncertainties (k=1) < 2% for (0.1 cm)^3 voxels. Results: Compared to no global shift, average Point A doses were 0.0%, 0.4%, and 2.2% higher for the distal global shifts, and 0.4%, 2.8%, and 5.1% higher for the proximal global shifts, respectively. The MC Point A doses differed by < 1% from BrachyVision. Conclusion: Dose variations were not substantial for ±0.3 cm global shifts, which are common in clinical practice.

  7. SU-E-T-46: Application of a Twin-Detector Method for the Determination of the Mean Photon Energy Em at Points of Measurement in a Water Phantom Surrounding a GammaMed HDR 192Ir Brachytherapy Source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chofor, N; Poppe, B; Nebah, F

    Purpose: In a brachytherapy photon field in water, the fluence-averaged mean photon energy Em at the point of measurement correlates with the radiation quality correction factor kQ of a non-water-equivalent detector. To support the experimental assessment of Em, we show that the normalized signal ratio (NSR) of a pair of radiation detectors, an unshielded silicon diode and a diamond detector, can serve to measure Em in a water phantom at an Ir-192 unit. Methods: Photon fluence spectra were computed in EGSnrc based on a detailed model of the GammaMed source. The factor kQ was calculated as the ratio of the detector's spectrum-weighted responses under calibration conditions at a 60Co unit and under brachytherapy conditions at various radial distances from the source. The NSR was investigated for a pair consisting of a p-type unshielded silicon diode 60012 and a synthetic single-crystal diamond detector 60019 (both PTW Freiburg). Each detector was positioned according to its effective point of measurement, with its axis facing the source. Lateral signal profiles were scanned under complete scatter conditions, and the NSR was determined as the quotient of the signal ratio at the application position and that at the reference position r_ref = 1 cm. Results: The radiation quality correction factor kQ shows a close correlation with the mean photon energy Em. The NSR of the diode/diamond pair changes by a factor of two from 0 to 18 cm from the source, while Em drops from 350 to 150 keV. Theoretical and measured NSR profiles agree to within ±2% for points within 5 cm of the source. Conclusion: Given the close correlation between the radiation quality correction factor kQ and the photon mean energy Em, the NSR provides a practical means of assessing Em under clinical conditions. Precise detector positioning is the major challenge.

  8. Detection prospects for high energy neutrino sources from the anisotropic matter distribution in the local Universe

    NASA Astrophysics Data System (ADS)

    Mertsch, Philipp; Rameez, Mohamed; Tamborra, Irene

    2017-03-01

    Constraints on the number and luminosity of the sources of the cosmic neutrinos detected by IceCube have been set by targeted searches for point sources. We set complementary constraints by using the 2MASS Redshift Survey (2MRS) catalogue, which maps the matter distribution of the local Universe. Assuming that the distribution of the neutrino sources follows that of matter, we look for correlations between "warm" spots on the IceCube skymap and the 2MRS matter distribution. Through Monte Carlo simulations of the expected number of neutrino multiplets and careful modelling of the detector performance (including that of IceCube-Gen2), we demonstrate that sources with local density exceeding 10^-6 Mpc^-3 and neutrino luminosity Lν ≲ 10^42 erg s^-1 (10^41 erg s^-1) will be efficiently revealed by our method using IceCube (IceCube-Gen2). At low luminosities, such as will be probed by IceCube-Gen2, the sensitivity of this analysis is superior to requiring a statistically significant direct observation of a point source.

  9. GIS Based Distributed Runoff Predictions in Variable Source Area Watersheds Employing the SCS-Curve Number

    NASA Astrophysics Data System (ADS)

    Steenhuis, T. S.; Mendoza, G.; Lyon, S. W.; Gerard Marchant, P.; Walter, M. T.; Schneiderman, E.

    2003-04-01

    Because the traditional Soil Conservation Service Curve Number (SCS-CN) approach continues to be used ubiquitously in GIS-based water quality models, new application methods are needed that are consistent with variable source area (VSA) hydrological processes in the landscape. We developed, within an integrated GIS modeling environment, a distributed approach for applying the traditional SCS-CN equation to watersheds where VSA hydrology is a dominant process. Spatial representation of hydrologic processes is important for watershed planning, because restricting potentially polluting activities from runoff source areas is fundamental to controlling non-point source pollution. The methodology presented here uses the traditional SCS-CN method to predict runoff volume and the spatial extent of saturated areas, and uses a topographic index to distribute runoff source areas through watersheds. The resulting distributed CN-VSA method was incorporated into an existing GWLF water quality model and applied to sub-watersheds of the Delaware basin in the Catskill Mountains region of New York State. We found that the distributed CN-VSA approach provides a physically based method that gives realistic results for watersheds with VSA hydrology.
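
    For reference, the curve-number runoff equation that the method redistributes over the landscape is short enough to state in code. A sketch in Python (standard textbook form with the common Ia = 0.2S assumption; the topographic-index distribution step is not shown):

        def scs_cn_runoff(P, CN, ia_ratio=0.2):
            # Textbook SCS-CN runoff depth (inches):
            #   S = 1000/CN - 10, Ia = ia_ratio * S,
            #   Q = (P - Ia)**2 / (P - Ia + S) for P > Ia, else 0.
            S = 1000.0 / CN - 10.0
            Ia = ia_ratio * S
            if P <= Ia:
                return 0.0
            return (P - Ia) ** 2 / (P - Ia + S)

        # e.g. a 3-inch storm on CN = 75: scs_cn_runoff(3.0, 75) ~ 0.96 in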

  10. Determination of optical properties in heterogeneous turbid media using a cylindrical diffusing fiber

    NASA Astrophysics Data System (ADS)

    Dimofte, Andreea; Finlay, Jarod C.; Liang, Xing; Zhu, Timothy C.

    2012-10-01

    For interstitial photodynamic therapy (PDT), cylindrical diffusing fibers (CDFs) are often used to deliver light. This study examines the feasibility and accuracy of using CDFs to characterize the absorption (μa) and reduced scattering (μs′) coefficients of heterogeneous turbid media. Measurements were performed in tissue-simulating phantoms with μa between 0.1 and 1 cm^-1 and μs′ between 3 and 10 cm^-1, using CDFs 2 to 4 cm in length. Optical properties were determined by fitting the measured light fluence rate profiles at a fixed distance from the CDF axis using a heterogeneous kernel model in which the cylindrical diffusing fiber is treated as a series of point sources. The resulting optical properties were compared with independent measurements using a point-source method. In a homogeneous medium, we are able to determine the absorption coefficient μa using a value of μs′ determined a priori (uniform fit) or μs′ obtained by fitting (variable fit), with standard (maximum) deviations of 6% (18%) and 18% (44%), respectively. However, the CDF method is found to be insensitive to variations in μs′, thus requiring a complementary method, such as a point-source measurement, for the determination of μs′. The error in determining μa decreases in very heterogeneous turbid media because of the local absorption extremes. The data acquisition time for obtaining the one-dimensional optical-property distribution is less than 8 s. This method can result in dramatically improved accuracy of the light fluence rate calculation for CDFs in prostate PDT in vivo when the same model and geometry are used for forward calculations with the extrapolated tissue optical properties.
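
    The kernel model named above (a CDF treated as a series of point sources) can be sketched with the standard infinite-medium diffusion kernel. The code below is a generic illustration under that assumption; the paper's heterogeneous kernel and fitting details are more involved, and all names are ours:

        import numpy as np
        from scipy.optimize import curve_fit

        def fluence_point_source(r, mua, musp, S=1.0):
            # Steady-state diffusion fluence rate around an isotropic point
            # source in an infinite turbid medium:
            #   phi(r) = S * exp(-mu_eff * r) / (4 * pi * D * r),
            #   D = 1 / (3*(mua + musp)), mu_eff = sqrt(3*mua*(mua + musp)).
            D = 1.0 / (3.0 * (mua + musp))
            mu_eff = np.sqrt(3.0 * mua * (mua + musp))
            return S * np.exp(-mu_eff * r) / (4.0 * np.pi * D * r)

        def cdf_fluence(z, h, mua, musp, n_pts=51, length=3.0):
            # A CDF of the given length modeled as a line of equal-strength
            # point sources, evaluated at lateral distance h (cm).
            zs = np.linspace(-length / 2.0, length / 2.0, n_pts)
            r = np.sqrt(h**2 + (np.asarray(z) - zs[:, None]) ** 2)
            return fluence_point_source(r, mua, musp, S=1.0 / n_pts).sum(axis=0)

        # Fit mua to a measured lateral profile with curve_fit, e.g.:
        # popt, _ = curve_fit(lambda z, mua: cdf_fluence(z, 0.5, mua, 8.0),
        #                     z_data, phi_data, p0=[0.3])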

  11. A 3D tomographic reconstruction method to analyze Jupiter's electron-belt emission observations

    NASA Astrophysics Data System (ADS)

    Santos-Costa, Daniel; Girard, Julien; Tasse, Cyril; Zarka, Philippe; Kita, Hajime; Tsuchiya, Fuminori; Misawa, Hiroaki; Clark, George; Bagenal, Fran; Imai, Masafumi; Becker, Heidi N.; Janssen, Michael A.; Bolton, Scott J.; Levin, Steve M.; Connerney, John E. P.

    2017-04-01

    Multi-dimensional reconstruction techniques for Jupiter's synchrotron radiation from radio-interferometric observations were first developed by Sault et al. [Astron. Astrophys., 324, 1190-1196, 1997]. The tomographic-like technique introduced 20 years ago permitted the first 3-dimensional mapping of the brightness distribution around the planet. This technique has the advantage of being only weakly dependent on planetary field models. It also does not require any knowledge of the energy and spatial distributions of the radiating electrons. On the downside, it assumes that the volume emissivity of any point source around the planet is isotropic. This assumption becomes incorrect when mapping the brightness distribution for non-equatorial point sources or any point sources from Juno's perspective. In this paper, we present our modeling effort to bypass the isotropy issue. Our approach is to use radio-interferometric observations and determine the 3-D brightness distribution in a cylindrical coordinate system. For each set (z, r), we constrain the longitudinal distribution with a Fourier series, and the anisotropy is addressed with a simple periodic function when possible. We develop this new method over a wide range of frequencies using past VLA and LOFAR observations of Jupiter. We plan to test this reconstruction method with observations of Jupiter that are currently being carried out with LOFAR and GMRT in support of the Juno mission. We describe how this new 3D tomographic reconstruction method provides new model constraints on the energy and spatial distributions of Jupiter's ultra-relativistic electrons close to the planet, and how it can be used to interpret Juno MWR observations of Jupiter's electron-belt emission and assist in evaluating the background noise from the radiation environment in the atmospheric measurements.

  12. Inferring Models of Bacterial Dynamics toward Point Sources

    PubMed Central

    Jashnsaz, Hossein; Nguyen, Tyler; Petrache, Horia I.; Pressé, Steve

    2015-01-01

    Experiments have shown that bacteria can be sensitive to small variations in chemoattractant (CA) concentrations. Motivated by these findings, our focus here is on a regime rarely studied in experiments: bacteria tracking point CA sources (such as food patches or even prey). In tracking point sources, the CA detected by bacteria may show very large spatiotemporal fluctuations which vary with distance from the source. We present a general statistical model to describe how bacteria locate point sources of food on the basis of stochastic event detection, rather than CA gradient information. We show how all model parameters can be directly inferred from single cell tracking data even in the limit of high detection noise. Once parameterized, our model recapitulates bacterial behavior around point sources such as the “volcano effect”. In addition, while the search by bacteria for point sources such as prey may appear random, our model identifies key statistical signatures of a targeted search for a point source given any arbitrary source configuration. PMID:26466373

  13. Bulk and surface event identification in p-type germanium detectors

    NASA Astrophysics Data System (ADS)

    Yang, L. T.; Li, H. B.; Wong, H. T.; Agartioglu, M.; Chen, J. H.; Jia, L. P.; Jiang, H.; Li, J.; Lin, F. K.; Lin, S. T.; Liu, S. K.; Ma, J. L.; Sevda, B.; Sharma, V.; Singh, L.; Singh, M. K.; Singh, M. K.; Soma, A. K.; Sonay, A.; Yang, S. W.; Wang, L.; Wang, Q.; Yue, Q.; Zhao, W.

    2018-04-01

    P-type point-contact germanium detectors have been adopted for light WIMP dark matter searches and studies of low-energy neutrino physics. These detectors exhibit anomalous behavior for events located in the surface layer. The previous spectral shape method for identifying these surface events among the bulk signals relies on spectral shape assumptions and the use of external calibration sources. We report an improved method of separating them by taking the ratios among different categories of in situ event samples as calibration sources. Data from the CDEX-1 and TEXONO experiments are re-examined using the ratio method. The results are shown to be consistent with the spectral shape method.

  14. Generic effective source for scalar self-force calculations

    NASA Astrophysics Data System (ADS)

    Wardell, Barry; Vega, Ian; Thornburg, Jonathan; Diener, Peter

    2012-05-01

    A leading approach to the modeling of extreme mass ratio inspirals involves the treatment of the smaller mass as a point particle and the computation of a regularized self-force acting on that particle. In turn, this computation requires knowledge of the regularized retarded field generated by the particle. A direct calculation of this regularized field may be achieved by replacing the point particle with an effective source and solving directly a wave equation for the regularized field. This has the advantage that all quantities are finite and require no further regularization. In this work, we present a method for computing an effective source which is finite and continuous everywhere, and which is valid for a scalar point particle in arbitrary geodesic motion in an arbitrary background spacetime. We explain in detail various technical and practical considerations that underlie its use in several numerical self-force calculations. We consider as examples the cases of a particle in a circular orbit about Schwarzschild and Kerr black holes, and also the case of a particle following a generic timelike geodesic about a highly spinning Kerr black hole. We provide numerical C code for computing an effective source for various orbital configurations about Schwarzschild and Kerr black holes.

  15. Mathematical design of a novel input/instruction device using a moving acoustic emitter

    NASA Astrophysics Data System (ADS)

    Wang, Xianchao; Guo, Yukun; Li, Jingzhi; Liu, Hongyu

    2017-10-01

    This paper is concerned with the mathematical design of a novel input/instruction device using a moving emitter. The emitter acts as a point source and can be installed on a digital pen or worn on the finger of the human being who desires to interact/communicate with the computer. The input/instruction can be recognized by identifying the moving trajectory of the emitter performed by the human being from the collected wave field data. The identification process is modelled as an inverse source problem where one intends to identify the trajectory of a moving point source. There are several salient features of our study which distinguish our result from the existing ones in the literature. First, the point source is moving in an inhomogeneous background medium, which models the human body. Second, the dynamical wave field data are collected in a limited aperture. Third, the reconstruction method is independent of the background medium, and it is totally direct without any matrix inversion. Hence, it is efficient and robust with respect to the measurement noise. Both theoretical justifications and computational experiments are presented to verify our novel findings.

  16. Geospatial Field Methods: An Undergraduate Course Built Around Point Cloud Construction and Analysis to Promote Spatial Learning and Use of Emerging Technology in Geoscience

    NASA Astrophysics Data System (ADS)

    Bunds, M. P.

    2017-12-01

    Point clouds are a powerful data source in the geosciences, and the emergence of structure-from-motion (SfM) photogrammetric techniques has allowed them to be generated quickly and inexpensively. Consequently, applications of them as well as methods to generate, manipulate, and analyze them warrant inclusion in undergraduate curriculum. In a new course called Geospatial Field Methods at Utah Valley University, students in small groups use SfM to generate a point cloud from imagery collected with a small unmanned aerial system (sUAS) and use it as a primary data source for a research project. Before creating their point clouds, students develop needed technical skills in laboratory and class activities. The students then apply the skills to construct the point clouds, and the research projects and point cloud construction serve as a central theme for the class. Intended student outcomes for the class include: technical skills related to acquiring, processing, and analyzing geospatial data; improved ability to carry out a research project; and increased knowledge related to their specific project. To construct the point clouds, students first plan their field work by outlining the field site, identifying locations for ground control points (GCPs), and loading them onto a handheld GPS for use in the field. They also estimate sUAS flight elevation, speed, and the flight path grid spacing required to produce a point cloud with the resolution required for their project goals. In the field, the students place the GCPs using handheld GPS, and survey the GCP locations using post-processed-kinematic (PPK) or real-time-kinematic (RTK) methods. The students pilot the sUAS and operate its camera according to the parameters that they estimated in planning their field work. Data processing includes obtaining accurate locations for the PPK/RTK base station and GCPs, and SfM processing with Agisoft Photoscan. The resulting point clouds are rasterized into digital surface models, assessed for accuracy, and analyzed in Geographic Information System software. Student projects have included mapping and analyzing landslide morphology, fault scarps, and earthquake ground surface rupture. Students have praised the geospatial skills they learn, whereas helping them stay on schedule to finish their projects is a challenge.

  17. NON-POINT SOURCE POLLUTION

    EPA Science Inventory

    Non-point source pollution is a diffuse source that is difficult to measure and is highly variable due to different rain patterns and other climatic conditions. In many areas, however, non-point source pollution is the greatest source of water quality degradation. Presently, stat...

  18. Using Socratic Questioning in the Classroom.

    ERIC Educational Resources Information Center

    Moore, Lori; Rudd, Rick

    2002-01-01

    Describes the Socratic questioning method and discusses its use in the agricultural education classroom. Presents a four-step model: origin and source of point of view; support, reasons, evidence, and assumptions; conflicting views; and implications and consequences. (JOW)

  19. A new false color composite technique for dust enhancement and point source determination in Middle East

    NASA Astrophysics Data System (ADS)

    Karimi, Khadijeh; Taheri Shahraiyni, Hamid; Habibi Nokhandan, Majid; Hafezi Moghaddas, Naser; Sanaeifar, Melika

    2011-11-01

    Dust storms occur in the Middle East with very high frequency, and given their effects it is vital to study them. The first step in such a study is the enhancement of dust storms in imagery and the determination of their point sources. In this paper, a new false color composite (FCC) map for dust storm enhancement and point source determination in the Middle East has been developed. Twenty-eight Terra-MODIS images from 2008 and 2009 were utilized in this study. We sought to replace the Red, Green and Blue bands in RGB maps with bands or maps that enhance dust storms. Hence, well-known indices for dust storm detection (NDDI, D and BTD) were generated from different bands of the MODIS images. These indices, together with some MODIS bands, were used to generate FCC maps with different combinations. Among the different combinations, the four best FCC maps were selected and compared using visual interpretation. The results of the visual interpretation showed that the best FCC map for enhancement of dust storms in the Middle East is a particular combination of the three indices (Red: D, Green: BTD and Blue: NDDI). We therefore utilized this new FCC method for the enhancement of dust storms and the determination of point sources in the Middle East.
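
    Once the index grids are in hand, assembling the selected composite is a simple band-stacking exercise. The sketch below (illustrative only; the percentile stretch is an assumption of this sketch, and the index arrays are random placeholders) builds the Red: D, Green: BTD, Blue: NDDI combination the paper recommends.

```python
import numpy as np

def stretch(band, lo=2, hi=98):
    """Linearly rescale a band to [0, 1] between its 2nd/98th percentiles."""
    p_lo, p_hi = np.percentile(band, [lo, hi])
    return np.clip((band - p_lo) / (p_hi - p_lo), 0.0, 1.0)

def dust_fcc(d_index, btd, nddi):
    """Stack Red=D, Green=BTD, Blue=NDDI into an RGB image array."""
    return np.dstack([stretch(d_index), stretch(btd), stretch(nddi)])

rng = np.random.default_rng(3)
rgb = dust_fcc(rng.random((256, 256)), rng.random((256, 256)),
               rng.random((256, 256)))
print(rgb.shape)  # (256, 256, 3), ready for e.g. matplotlib imshow
```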

  20. Maximum power point tracking for photovoltaic applications by using two-level DC/DC boost converter

    NASA Astrophysics Data System (ADS)

    Moamaei, Parvin

    Recently, photovoltaic (PV) generation has become increasingly popular in industrial applications. As a renewable and alternative source of energy, PV systems feature superior characteristics such as being clean and silent, along with fewer maintenance problems compared to other energy sources. In PV generation, employing a Maximum Power Point Tracking (MPPT) method is essential to obtain the maximum available solar energy. Among the several proposed MPPT techniques, the Perturbation and Observation (P&O) and Model Predictive Control (MPC) methods are adopted in this work. The components of the MPPT control system, namely the P&O and MPC algorithms, the PV module and a high-gain DC-DC boost converter, are simulated in MATLAB Simulink. They are evaluated under rapidly and slowly changing solar irradiation and temperature, their performance is shown by the simulation results, and finally a comprehensive comparison is presented.
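
    For reference, the textbook P&O update rule is only a few lines: perturb the operating voltage, observe the change in power, and keep or reverse the perturbation direction accordingly. The PV curve and step size below are illustrative assumptions, not the converter model simulated in this work.

```python
def perturb_and_observe(v, p, v_prev, p_prev, step=0.5):
    """Return the next reference voltage for the converter.

    If the last perturbation increased power, keep moving in the same
    direction; otherwise reverse. v/p are the present voltage and power,
    v_prev/p_prev the values from the previous control cycle.
    """
    dv, dp = v - v_prev, p - p_prev
    if dp == 0:
        return v
    if (dp > 0) == (dv > 0):     # power rose in the direction we moved
        return v + step
    return v - step

# Toy PV curve p(v) with a single maximum at 20 V, stepped to convergence.
pv_power = lambda v: max(v * (8.0 - 0.2 * v), 0.0)   # hypothetical module
v_prev, p_prev, v = 10.0, pv_power(10.0), 10.5
for _ in range(50):
    p = pv_power(v)
    v_next = perturb_and_observe(v, p, v_prev, p_prev)
    v_prev, p_prev, v = v, p, v_next
print("operating point ~", v, "V")    # oscillates near the true MPP at 20 V
```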

  1. Detector Position Estimation for PET Scanners.

    PubMed

    Pierce, Larry; Miyaoka, Robert; Lewellen, Tom; Alessio, Adam; Kinahan, Paul

    2012-06-11

    Physical positioning of scintillation crystal detector blocks in Positron Emission Tomography (PET) scanners is not always exact. We test a proof-of-concept methodology for determining the six degrees of freedom of detector block positioning errors by utilizing a rotating point source over stepped axial intervals. To test our method, we created computer simulations of seven Micro Crystal Element Scanner (MiCES) PET systems with randomized positioning errors. The computer simulations show that our positioning algorithm can estimate the positions of the block detectors to an average of one-seventh of the crystal pitch tangentially and one-third of the crystal pitch axially. Virtual acquisitions of a point source grid and a distributed phantom show that our algorithm improves both the quantitative and qualitative accuracy of the reconstructed objects. We believe this estimation algorithm is a practical and accurate method for determining the spatial positions of scintillation detector blocks.

  2. Control method for peak power delivery with limited DC-bus voltage

    DOEpatents

    Edwards, John; Xu, Longya; Bhargava, Brij B.

    2006-09-05

    A method for driving a neutral-point-clamped multi-level voltage source inverter supplying a synchronous motor is provided. A DC current is received at the inverter, which has first, second, and third output nodes and a plurality of switches. A desired speed of a synchronous motor connected to the inverter by the first, second, and third nodes is received by the inverter. The synchronous motor has a rotor, and the speed of the motor is defined by the rotational rate of the rotor. A position of the rotor is sensed, current flowing to the motor out of at least two of the first, second, and third output nodes is sensed, and predetermined switches are automatically activated by the inverter responsive to the sensed rotor position, the sensed current, and the desired speed.

  3. Neutron crosstalk between liquid scintillators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Verbeke, J. M.; Prasad, M. K.; Snyderman, N. J.

    2015-05-01

    We propose a method to quantify the fractions of neutrons scattering between liquid scintillators. Using a spontaneous fission source, this method can be utilized to quickly characterize an array of liquid scintillators in terms of crosstalk. The point model theory due to Feynman is corrected to account for these multiple scatterings. Using spectral information measured by the liquid scintillators, fractions of multiple scattering can be estimated, and mass reconstruction of fissile materials under investigation can be improved. Monte Carlo simulations of mono-energetic neutron sources were performed to estimate neutron crosstalk. A californium source in an array of liquid scintillators was modeled to illustrate the improvement of the mass reconstruction.

  4. On the VHF Source Retrieval Errors Associated with Lightning Mapping Arrays (LMAs)

    NASA Technical Reports Server (NTRS)

    Koshak, W.

    2016-01-01

    This presentation examines in detail the standard retrieval method: that of retrieving the (x, y, z, t) parameters of a lightning VHF point source from multiple ground-based Lightning Mapping Array (LMA) time-of-arrival (TOA) observations. The solution is found by minimizing a chi-squared function via the Levenberg-Marquardt algorithm. The associated forward problem is examined to illustrate the importance of signal-to-noise ratio (SNR). Monte Carlo simulated retrievals are used to assess the benefits of changing various LMA network properties. A generalized retrieval method is also introduced that, in addition to TOA data, uses LMA electric field amplitude measurements to retrieve a transient VHF dipole moment source.
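
    The standard retrieval described above reduces to nonlinear least squares on arrival-time residuals. Below is a hedged sketch using SciPy's Levenberg-Marquardt solver; the station layout, timing noise, and initial guess are invented for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

C = 2.998e8                        # speed of light [m/s]
stations = np.array([[0, 0, 0], [20e3, 0, 10], [0, 20e3, 30],
                     [20e3, 20e3, 5], [10e3, -15e3, 50.0]])  # (x,y,z) [m]
SIGMA_T = 50e-9                    # assumed 50 ns timing noise

def residuals(params, t_obs):
    """Normalized arrival-time residuals for source params (x, y, z, t)."""
    xyz, t0 = params[:3], params[3]
    t_pred = t0 + np.linalg.norm(stations - xyz, axis=1) / C
    return (t_obs - t_pred) / SIGMA_T

# Simulate a VHF source, then retrieve it by chi-squared minimization.
true = np.array([8e3, 5e3, 6e3, 1e-3])
rng = np.random.default_rng(4)
t_obs = (true[3] + np.linalg.norm(stations - true[:3], axis=1) / C
         + rng.normal(0.0, SIGMA_T, len(stations)))
fit = least_squares(residuals, x0=[1e3, 1e3, 5e3, 0.0],
                    args=(t_obs,), method="lm")   # Levenberg-Marquardt
print("retrieved (x, y, z, t):", fit.x)
```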

  5. Applicability of the single equivalent point dipole model to represent a spatially distributed bio-electrical source

    NASA Technical Reports Server (NTRS)

    Armoundas, A. A.; Feldman, A. B.; Sherman, D. A.; Cohen, R. J.

    2001-01-01

    Although the single equivalent point dipole model has been used to represent well-localised bio-electrical sources, in realistic situations the source is distributed. Consequently, position estimates of point dipoles determined by inverse algorithms suffer from systematic error due to the non-exact applicability of the inverse model. In realistic situations, this systematic error cannot be avoided, a limitation that is independent of the complexity of the torso model used. This study quantitatively investigates the intrinsic limitations in the assignment of a location to the equivalent dipole due to a distributed electrical source. To simulate arrhythmic activity in the heart, a model of a wave of depolarisation spreading from a focal source over the surface of a spherical shell is used. The activity is represented by a sequence of concentric belt sources (obtained by slicing the shell with a sequence of parallel plane pairs), with constant dipole moment per unit length (circumferentially) directed parallel to the propagation direction. The distributed source is represented by N dipoles at equal arc lengths along the belt. The sum of the dipole potentials is calculated at predefined electrode locations. The inverse problem involves finding a single equivalent point dipole that best reproduces the electrode potentials due to the distributed source, and is implemented by minimising the chi-squared per degree of freedom. It is found that the trajectory traced by the equivalent dipole is sensitive to the location of the spherical shell relative to the fixed electrodes. It is shown that this trajectory does not coincide with the sequence of geometrical centres of the consecutive belt sources. For distributed sources within a bounded spherical medium, displaced from the sphere's centre by 40% of the sphere's radius, it is found that the error in the equivalent dipole location varies from 3 to 20% for sources with sizes between 5 and 50% of the sphere's radius. Finally, a method is devised to obtain the size of the distributed source during the cardiac cycle.

  6. Semi-implicit and fully implicit shock-capturing methods for hyperbolic conservation laws with stiff source terms

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Shinn, J. L.

    1986-01-01

    Some numerical aspects of finite-difference algorithms for nonlinear multidimensional hyperbolic conservation laws with stiff nonhomogeneous (source) terms are discussed. If the stiffness is entirely dominated by the source term, a semi-implicit shock-capturing method is proposed, provided that the Jacobian of the source terms possesses certain properties. The proposed semi-implicit method can be viewed as a variant of the Bussing and Murman point-implicit scheme with a more appropriate numerical dissipation for the computation of strong shock waves. However, if the stiffness is not solely dominated by the source terms, a fully implicit method would be a better choice. The situation is complicated in problems of more than one dimension, and the presence of stiff source terms further complicates the solution procedures for alternating direction implicit (ADI) methods. Several alternatives are discussed. The primary motivation for constructing these schemes was to address thermal and chemical nonequilibrium flows in the hypersonic regime. Due to the unique structure of the eigenvalues and eigenvectors for fluid flows of this type, the computation can be simplified, thus providing a more efficient solution procedure than one might have anticipated.
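
    The point-implicit idea, advancing the stiff source through its Jacobian while the rest of the right-hand side stays explicit, is easiest to see on a scalar model problem. The sketch below is illustrative only and is not the scheme analyzed in the paper.

```python
# Minimal point-implicit update for u' = f(u) + s(u) with a stiff source s:
# linearize s about the current state so each step solves
#   (1 - dt * ds/du) * du = dt * (f(u) + s(u)).
def point_implicit_step(u, dt, f, s, dsdu):
    rhs = dt * (f(u) + s(u))
    return u + rhs / (1.0 - dt * dsdu(u))

# Stiff relaxation toward u = 1 (rate k) plus a mild non-stiff term.
k = 1.0e4
f = lambda u: -0.5 * u           # non-stiff part
s = lambda u: k * (1.0 - u)      # stiff source, Jacobian ds/du = -k
dsdu = lambda u: -k

u, dt = 0.0, 1.0e-2              # dt far above the explicit limit ~1/k
for _ in range(100):
    u = point_implicit_step(u, dt, f, s, dsdu)
print("u ->", u)                 # approaches the balance point near 1

# An explicit update u += dt*(f(u)+s(u)) would blow up at this dt, since
# dt*k = 100 >> 1; the implicit source treatment keeps the step stable.
```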

  7. Continuous wavelet transform and Euler deconvolution method and their application to magnetic field data of Jharia coalfield, India

    NASA Astrophysics Data System (ADS)

    Singh, Arvind; Singh, Upendra Kumar

    2017-02-01

    This paper deals with the application of continuous wavelet transform (CWT) and Euler deconvolution methods to estimate source depth from magnetic anomalies. These methods are utilized mainly to address the fundamental issue of mapping the major coal seam and locating tectonic lineaments. The main aim of the study is to locate and characterize the sources of the magnetic field by transferring the data into an auxiliary space using the CWT. The method has been tested on several synthetic source anomalies and finally applied to magnetic field data from the Jharia coalfield, India. The mean depths of the causative sources obtained from the magnetic data point to differences in lithospheric depth across the study region. It is also inferred that there are two faults, namely the northern boundary fault and the southern boundary fault, oriented in the northeastern and southeastern directions respectively. Moreover, the central part of the region is more faulted and folded than the other parts and has a sediment thickness of about 2.4 km. The methods give the mean depth of the causative sources without any a priori information, which can be used as an initial model in any inversion algorithm.
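
    As a concrete illustration of the Euler deconvolution half of this approach, the sketch below solves Euler's homogeneity equation, (x - x0)*dT/dx + (z - z0)*dT/dz = N*(B - T), in a data window by least squares on a synthetic profile observed at z = 0. The anomaly model, the structural index N = 3, and the window size are assumptions of the sketch.

```python
import numpy as np

def euler_window(x, T, Tx, Tz, N=3.0):
    """Solve one window for (x0, z0, B); z is positive downward, obs at z=0."""
    A = np.column_stack([Tx, Tz, N * np.ones_like(x)])
    b = x * Tx + N * T               # the z*Tz term vanishes at z = 0
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol                       # x0, z0 (source depth), background B

# Synthetic point-dipole-like anomaly at x = 0, depth 2 km: T ~ 1/r^3 (N=3).
x = np.linspace(-10.0, 10.0, 401)
z0_true = 2.0
r2 = x**2 + z0_true**2
T = 1.0e3 / r2**1.5
Tx = np.gradient(T, x)                        # numerical horizontal gradient
Tz = 3.0e3 * z0_true / r2**2.5                # analytic dT/dz at z = 0

w, centre = 30, 200                           # window half-width, centre index
sl = slice(centre - w, centre + w)
x0, z0, B = euler_window(x[sl], T[sl], Tx[sl], Tz[sl])
print("source location ~ %.2f km, depth ~ %.2f km" % (x0, z0))
```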

  8. Analytical Methods to Evaluate the Quality of Edible Fats and Oils: The JOCS Standard Methods for Analysis of Fats, Oils and Related Materials (2013) and Advanced Methods.

    PubMed

    Endo, Yasushi

    2018-01-01

    Edible fats and oils are among the basic components of the human diet, along with carbohydrates and proteins, and they are a source of high energy and of essential fatty acids such as linoleic and linolenic acids. Edible fats and oils are used for pan- and deep-frying, in salad dressings and mayonnaise, and in processed foods such as chocolates and creams. The physical and chemical properties of edible fats and oils can affect the quality of oil-based foods and hence must be evaluated in detail. The physical characteristics of edible fats and oils include color, specific gravity, refractive index, melting point, congeal point, smoke point, flash point, fire point, and viscosity, while the chemical characteristics include acid value, saponification value, iodine value, fatty acid composition, trans isomers, triacylglycerol composition, unsaponifiable matter (sterols, tocopherols) and minor components (phospholipids, chlorophyll pigments, glycidyl fatty acid esters). Peroxide value, p-anisidine value, carbonyl value, polar compounds and polymerized triacylglycerols are indexes of the deterioration of edible fats and oils. This review describes the analytical methods used to evaluate the quality of edible fats and oils, especially the Standard Methods for Analysis of Fats, Oils and Related Materials edited by the Japan Oil Chemists' Society (the JOCS standard methods), as well as advanced methods.

  9. A 3D modeling approach to complex faults with multi-source data

    NASA Astrophysics Data System (ADS)

    Wu, Qiang; Xu, Hua; Zou, Xukai; Lei, Hongzhuan

    2015-04-01

    Fault modeling is a very important step in making an accurate and reliable 3D geological model. Typical existing methods demand enough fault data to construct complex fault models; however, it is well known that the available fault data are generally sparse and undersampled. In this paper, we propose a fault-modeling workflow that can integrate multi-source data to construct fault models. For the faults that are not modeled with these data, especially small-scale faults or faults approximately parallel to the sections, we propose a fault deduction method to infer the hanging wall and footwall lines after displacement calculation. Moreover, a fault cutting algorithm can supplement the available fault points at locations where faults cut each other. Increasing the number of fault points in poorly sampled areas not only allows fault models to be constructed efficiently, but also reduces manual intervention. By using fault-based interpolation and remeshing of the horizons, an accurate 3D geological model can be constructed. The method can naturally simulate geological structures whether or not the available geological data are sufficient. A concrete example of using the method in Tangshan, China, shows that it can be applied to broad and complex geological areas.

  10. Microbial source tracking: a tool for identifying sources of microbial contamination in the food chain.

    PubMed

    Fu, Ling-Lin; Li, Jian-Rong

    2014-01-01

    The ability to trace fecal indicators and food-borne pathogens to the point of origin has major ramifications for food industry, food regulatory agencies, and public health. Such information would enable food producers and processors to better understand sources of contamination and thereby take corrective actions to prevent transmission. Microbial source tracking (MST), which currently is largely focused on determining sources of fecal contamination in waterways, is also providing the scientific community tools for tracking both fecal bacteria and food-borne pathogens contamination in the food chain. Approaches to MST are commonly classified as library-dependent methods (LDMs) or library-independent methods (LIMs). These tools will have widespread applications, including the use for regulatory compliance, pollution remediation, and risk assessment. These tools will reduce the incidence of illness associated with food and water. Our aim in this review is to highlight the use of molecular MST methods in application to understanding the source and transmission of food-borne pathogens. Moreover, the future directions of MST research are also discussed.

  11. Separating Turbofan Engine Noise Sources Using Auto and Cross Spectra from Four Microphones

    NASA Technical Reports Server (NTRS)

    Miles, Jeffrey Hilton

    2008-01-01

    The study of core noise from turbofan engines has become more important as noise from other sources, such as the fan and jet, has been reduced. A multiple-microphone and acoustic-source modeling method to separate correlated and uncorrelated sources is discussed. The auto- and cross-spectra in the frequency range below 1000 Hz are fitted with a noise propagation model based on a source couplet consisting of a single incoherent monopole source with a single coherent monopole source, or a source triplet consisting of a single incoherent monopole source with two coherent monopole point sources. Examples are presented using data from a Pratt & Whitney PW4098 turbofan engine. The method separates the low-frequency jet noise from the core noise at the nozzle exit. It is shown that at low power settings, the core noise is a major contributor to the noise; even at higher power settings, it can be more important than jet noise. However, at low frequencies, uncorrelated broadband noise and jet noise become the important factors as the engine power setting is increased.

  12. A comparison of three-dimensional nonequilibrium solution algorithms applied to hypersonic flows with stiff chemical source terms

    NASA Technical Reports Server (NTRS)

    Palmer, Grant; Venkatapathy, Ethiraj

    1993-01-01

    Three solution algorithms, explicit under-relaxation, point implicit, and lower-upper symmetric Gauss-Seidel (LUSGS), are used to compute nonequilibrium flow around the Apollo 4 return capsule at 62 km altitude. By varying the Mach number, the efficiency and robustness of the solution algorithms were tested for different levels of chemical stiffness. The performance of the solution algorithms degraded as the Mach number and stiffness of the flow increased. At Mach 15, 23, and 30, the LUSGS method produces an eight-order-of-magnitude drop in the L2 norm of the energy residual in 1/3 to 1/2 the Cray C-90 computer time compared to the point implicit and explicit under-relaxation methods. The explicit under-relaxation algorithm experienced convergence difficulties at Mach 23 and above. At Mach 40 the performance of the LUSGS algorithm deteriorates to the point that it is outperformed by the point implicit method. The effects of the viscous terms are investigated. Grid dependency questions are explored.

  13. Maximum power point tracking algorithm based on sliding mode and fuzzy logic for photovoltaic sources under variable environmental conditions

    NASA Astrophysics Data System (ADS)

    Atik, L.; Petit, P.; Sawicki, J. P.; Ternifi, Z. T.; Bachir, G.; Della, M.; Aillerie, M.

    2017-02-01

    Solar panels have a nonlinear voltage-current characteristic, with a distinct maximum power point (MPP) that depends on environmental factors such as temperature and irradiation. In order to continuously harvest maximum power from the solar panels, they have to operate at their MPP despite the inevitable changes in the environment. Various methods for maximum power point tracking (MPPT) have been developed and implemented in solar power electronic controllers to increase the efficiency of electricity production from renewables. In this paper we compare, using the Matlab Simulink tools, two different MPP tracking methods, fuzzy logic control (FL) and sliding mode control (SMC), considering their efficiency in solar energy production.

  14. Wideband RELAX and wideband CLEAN for aeroacoustic imaging

    NASA Astrophysics Data System (ADS)

    Wang, Yanwei; Li, Jian; Stoica, Petre; Sheplak, Mark; Nishida, Toshikazu

    2004-02-01

    Microphone arrays can be used for acoustic source localization and characterization in wind tunnel testing. In this paper, the wideband RELAX (WB-RELAX) and the wideband CLEAN (WB-CLEAN) algorithms are presented for aeroacoustic imaging using an acoustic array. WB-RELAX is a parametric approach that can be used efficiently for point source imaging without the sidelobe problems suffered by the delay-and-sum beamforming approaches. WB-CLEAN does not have sidelobe problems either, but it behaves more like a nonparametric approach and can be used for both point source and distributed source imaging. Moreover, neither of the algorithms suffers from the severe performance degradations encountered by the adaptive beamforming methods when the number of snapshots is small and/or the sources are highly correlated or coherent with each other. A two-step optimization procedure is used to implement the WB-RELAX and WB-CLEAN algorithms efficiently. The performance of WB-RELAX and WB-CLEAN is demonstrated by applying them to measured data obtained at the NASA Langley Quiet Flow Facility using a small aperture directional array (SADA). Somewhat surprisingly, using these approaches, not only were the parameters of the dominant source accurately determined, but a highly correlated multipath of the dominant source was also discovered.

  15. Wideband RELAX and wideband CLEAN for aeroacoustic imaging.

    PubMed

    Wang, Yanwei; Li, Jian; Stoica, Petre; Sheplak, Mark; Nishida, Toshikazu

    2004-02-01

    Microphone arrays can be used for acoustic source localization and characterization in wind tunnel testing. In this paper, the wideband RELAX (WB-RELAX) and the wideband CLEAN (WB-CLEAN) algorithms are presented for aeroacoustic imaging using an acoustic array. WB-RELAX is a parametric approach that can be used efficiently for point source imaging without the sidelobe problems suffered by the delay-and-sum beamforming approaches. WB-CLEAN does not have sidelobe problems either, but it behaves more like a nonparametric approach and can be used for both point source and distributed source imaging. Moreover, neither of the algorithms suffers from the severe performance degradations encountered by the adaptive beamforming methods when the number of snapshots is small and/or the sources are highly correlated or coherent with each other. A two-step optimization procedure is used to implement the WB-RELAX and WB-CLEAN algorithms efficiently. The performance of WB-RELAX and WB-CLEAN is demonstrated by applying them to measured data obtained at the NASA Langley Quiet Flow Facility using a small aperture directional array (SADA). Somewhat surprisingly, using these approaches, not only were the parameters of the dominant source accurately determined, but a highly correlated multipath of the dominant source was also discovered.

  16. Selective structural source identification

    NASA Astrophysics Data System (ADS)

    Totaro, Nicolas

    2018-04-01

    In the field of acoustic source reconstruction, the inverse Patch Transfer Function (iPTF) method has recently been proposed and has shown satisfactory results whatever the shape of the vibrating surface and whatever the acoustic environment. These two interesting features are due to the virtual acoustic volume concept underlying the iPTF methods. The aim of the present article is to show how this concept of virtual subsystems can be used in structures to reconstruct the applied force distribution. Virtual boundary conditions can be applied on a part of the structure, called the virtual testing structure, to identify the force distribution applied in that zone regardless of the presence of other sources outside the zone under consideration. In the present article, the applicability of the method is demonstrated only on planar structures. However, the final example shows how the method can be applied to a planar structure of complex shape with point-welded stiffeners, even in the tested zone. In that case, if the virtual testing structure includes the stiffeners, the identified force distribution exhibits only the positions of the external applied forces. If the virtual testing structure does not include the stiffeners, the identified force distribution makes it possible to localize the forces due to the coupling between the structure and the stiffeners through the welded points as well as those due to the external forces. This is why this approach is considered here as a selective structural source identification method. It is demonstrated that this approach clearly falls in the same framework as the Force Analysis Technique, the Virtual Fields Method and the 2D spatial Fourier transform. Even if this approach has much in common with the latter methods, it has some interesting particularities, such as its low sensitivity to measurement noise.

  17. Skyshine analysis using energy and angular dependent dose-contribution fluxes obtained from air-over-ground adjoint calculation.

    PubMed

    Uematsu, Mikio; Kurosawa, Masahiko

    2005-01-01

    A generalised and convenient skyshine dose analysis method has been developed based on a forward-adjoint folding technique. In the method, the air penetration data were prepared by performing an adjoint DOT3.5 calculation with a cylindrical air-over-ground geometry having an adjoint point source (the importance of unit flux to the dose rate at the detection point) in the centre. The accuracy of the present method was verified by comparison with a forward DOT3.5 calculation. The adjoint flux data can be used as generalised radiation skyshine data for all sorts of nuclear facilities. Moreover, the present method supplies plenty of energy- and angle-dependent contribution flux data, which will be useful for the detailed shielding design of facilities.

  18. Statistical Measurement of the Gamma-Ray Source-count Distribution as a Function of Energy

    NASA Astrophysics Data System (ADS)

    Zechlin, Hannes-S.; Cuoco, Alessandro; Donato, Fiorenza; Fornengo, Nicolao; Regis, Marco

    2016-08-01

    Statistical properties of photon count maps have recently been proven as a new tool to study the composition of the gamma-ray sky with high precision. We employ the 1-point probability distribution function of six years of Fermi-LAT data to measure the source-count distribution dN/dS and the diffuse components of the high-latitude gamma-ray sky as a function of energy. To that aim, we analyze the gamma-ray emission in five adjacent energy bands between 1 and 171 GeV. It is demonstrated that the source-count distribution as a function of flux is compatible with a broken power law up to energies of ~50 GeV. The index below the break is between 1.95 and 2.0. For higher energies, a simple power law fits the data, with an index of 2.2^{+0.7}_{-0.3} in the energy band between 50 and 171 GeV. Upper limits on further possible breaks as well as the angular power of unresolved sources are derived. We find that point-source populations probed by this method can explain 83^{+7}_{-13}% (81^{+52}_{-19}%) of the extragalactic gamma-ray background between 1.04 and 1.99 GeV (50 and 171 GeV). The method has excellent capabilities for constraining the gamma-ray luminosity function and the spectra of unresolved blazars.

  19. Research on Horizontal Accuracy Method of High Spatial Resolution Remotely Sensed Orthophoto Image

    NASA Astrophysics Data System (ADS)

    Xu, Y. M.; Zhang, J. X.; Yu, F.; Dong, S.

    2018-04-01

    At present, in the inspection and acceptance of high spatial resolution remotely sensed orthophoto images, horizontal accuracy is tested and evaluated mostly with a set of check points of uniform accuracy and reliability. However, in areas where field measurement is difficult and high-accuracy reference data are scarce, it is hard to obtain such a set of check points, and therefore hard to test and evaluate the horizontal accuracy of the orthophoto image. This uncertainty in horizontal accuracy has become a bottleneck for the application of satellite-borne high-resolution remote sensing imagery and for the expansion of its scope of service. Therefore, this paper proposes a new method to test the horizontal accuracy of orthophoto images that uses check points of differing accuracy and reliability, drawn from both high-accuracy reference data and field measurements. The new method solves the problem of horizontal accuracy testing of orthophoto images in difficult areas and provides a basis for delivering reliable orthophoto images to users.

  20. [The validation of the effect of correcting spectral background changes based on floating reference method by simulation].

    PubMed

    Wang, Zhu-lou; Zhang, Wan-jie; Li, Chen-xi; Chen, Wen-liang; Xu, Ke-xin

    2015-02-01

    There are several challenges in near-infrared non-invasive blood glucose measurement, such as the low signal-to-noise ratio of the instrument, unstable measurement conditions, and unpredictable, irregular changes in the measured object. It is therefore difficult to extract information on blood glucose concentration accurately from the complicated signals. Reference measurement methods are usually considered as a way to eliminate the effect of background changes, but there is no reference substance that changes synchronously with the analyte. After many years of research, our group has proposed the floating reference method, which succeeds in eliminating the spectral effects induced by instrument drift and by variations in the measured object's background. However, our studies indicate that the reference point changes with measurement location and wavelength, so the effectiveness of the floating reference method should be verified comprehensively. In this paper, for simplicity, Monte Carlo simulations employing Intralipid solutions with concentrations of 5% and 10% are performed to verify the effectiveness of the floating reference method in eliminating the consequences of light source drift, where the drift is introduced by varying the number of incident photons. The effectiveness of the floating reference method, with the corresponding reference points at different wavelengths, in eliminating the variations due to light source drift is estimated. A comparison of the prediction abilities of calibration models with and without this method shows that the RMSEPs are decreased by about 98.57% (5% Intralipid) and 99.36% (10% Intralipid). The results indicate that the floating reference method has an obvious effect in eliminating background changes.

  1. Effects of Environmental Toxicants on Metabolic Activity of Natural Microbial Communities

    PubMed Central

    Barnhart, Carole L. H.; Vestal, J. Robie

    1983-01-01

    Two methods of measuring microbial activity were used to study the effects of toxicants on natural microbial communities. The methods were compared for suitability for toxicity testing, sensitivity, and adaptability to field applications. This study included measurements of the incorporation of 14C-labeled acetate into microbial lipids and microbial glucosidase activity. Activities were measured per unit biomass, determined as lipid phosphate. The effects of various organic and inorganic toxicants on various natural microbial communities were studied. Both methods were useful in detecting toxicity, and their comparative sensitivities varied with the system studied. In one system, the methods showed approximately the same sensitivities in testing the effects of metals, but the acetate incorporation method was more sensitive in detecting the toxicity of organic compounds. The incorporation method was used to study the effects of a point source of pollution on the microbiota of a receiving stream. Toxic doses were found to be two orders of magnitude higher in sediments than in water taken from the same site, indicating chelation or adsorption of the toxicant by the sediment. The microbiota taken from below a point source outfall was 2 to 100 times more resistant to the toxicants tested than was that taken from above the outfall. Downstream filtrates in most cases had an inhibitory effect on the natural microbiota taken from above the pollution source. The microbial methods were compared with commonly used bioassay methods, using higher organisms, and were found to be similar in ability to detect comparative toxicities of compounds, but were less sensitive than methods which use standard media because of the influences of environmental factors. PMID:16346432

  2. New approach for point pollution source identification in rivers based on the backward probability method.

    PubMed

    Wang, Jiabiao; Zhao, Jianshi; Lei, Xiaohui; Wang, Hao

    2018-06-13

    Pollution risk from the discharge of industrial waste or accidental spills during transportation poses a considerable threat to the security of rivers. The ability to quickly identify the pollution source is extremely important to enable emergency disposal of pollutants. This study proposes a new approach for point source identification of sudden water pollution in rivers, which aims to determine where (source location), when (release time) and how much pollutant (released mass) was introduced into the river. Based on the backward probability method (BPM) and the linear regression model (LR), the proposed LR-BPM converts the ill-posed problem of source identification into an optimization model, which is solved using a Differential Evolution Algorithm (DEA). The decoupled parameters of released mass are not dependent on prior information, which improves the identification efficiency. A hypothetical case study with a different number of pollution sources was conducted to test the proposed approach, and the largest relative errors for identified location, release time, and released mass in all tests were not greater than 10%. Uncertainty in the LR-BPM is mainly due to a problem with model equifinality, but averaging the results of repeated tests greatly reduces errors. Furthermore, increasing the gauging sections further improves identification results. A real-world case study examines the applicability of the LR-BPM in practice, where it is demonstrated to be more accurate and time-saving than two existing approaches, Bayesian-MCMC and basic DEA.
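
    The optimization formulation can be illustrated on a one-dimensional toy problem (a conceptual sketch, not the LR-BPM code): a differential evolution algorithm recovers release location, time, and mass from concentrations observed at two gauging sections, with the river geometry, transport parameters, and noise all invented for the example.

```python
import numpy as np
from scipy.optimize import differential_evolution

U, D, A = 0.5, 10.0, 50.0    # velocity [m/s], dispersion [m^2/s], area [m^2]

def conc(x, t, x0, t0, mass):
    """1-D advection-dispersion solution for an instantaneous point release."""
    x = np.asarray(x, dtype=float)
    tau = np.asarray(t, dtype=float) - t0
    c = np.zeros_like(tau)
    ok = tau > 0
    c[ok] = (mass / (A * np.sqrt(4 * np.pi * D * tau[ok]))
             * np.exp(-(x[ok] - x0 - U * tau[ok]) ** 2 / (4 * D * tau[ok])))
    return c

# Two gauging sections sampled every 5 minutes for 5 hours.
x_obs = np.repeat([5000.0, 8000.0], 60)
t_obs = np.tile(np.arange(60) * 300.0, 2)
true = (1000.0, 600.0, 2.0e3)                 # x0 [m], t0 [s], mass [kg]
rng = np.random.default_rng(5)
c_obs = conc(x_obs, t_obs, *true) + rng.normal(0.0, 1e-4, x_obs.size)

def misfit(p):
    """Sum-of-squares mismatch between modeled and observed concentrations."""
    return np.sum((conc(x_obs, t_obs, *p) - c_obs) ** 2)

res = differential_evolution(misfit, bounds=[(0, 4000), (0, 3000), (10, 1e4)],
                             seed=1, tol=1e-10)
print("identified (x0, t0, mass):", res.x)
```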

  3. Methane - quick fix or tough target? New methods to reduce emissions.

    NASA Astrophysics Data System (ADS)

    Nisbet, E. G.; Lowry, D.; Fisher, R. E.; Brownlow, R.

    2016-12-01

    Methane is a cost-effective target for greenhouse gas reduction efforts. The UK's MOYA project is designed to improve understanding of the global methane budget and to point to new methods to reduce future emissions. Since 2007, methane has been increasing rapidly: in 2014 and 2015, growth was at rates last seen in the 1980s. Unlike 20th-century growth, which was primarily driven by fossil fuel emissions in northern industrial nations, isotopic evidence implies that present growth is driven by tropical biogenic sources such as wetlands and agriculture. Discovering why methane is rising is important. Schaefer et al. (Science, 2016) pointed out the potential clash between methane reduction efforts and the food needs of a rising, better-fed (physically larger) human population. Our own work suggests tropical wetlands are major drivers of growth, responding to weather changes since 2007, but there is no acceptable way to reduce wetland emissions. Just as sea ice decline indicates Arctic warming, methane may be the most obvious tracker of climate change in the wet tropics. Technical advances in instrumentation can do much to help cut urban and industrial methane emissions. Mobile systems can be mounted on vehicles, while drone sampling can provide a 3D view to locate sources. Urban land planning often means that large but different point sources are clustered (e.g. a landfill or sewage plant near an incinerator; gas wells next to cattle). High-precision grab-sample isotopic characterisation, using Keeling plots, can separate source signals to identify specific emitters, even where they are closely juxtaposed. Our mobile campaigns in the UK, Kuwait, Hong Kong and E. Australia show the importance of major single sources, such as abandoned old wells, pipe leaks, or unregulated landfills. If such point sources can be individually identified, even when clustered, effective reduction efforts become possible: these can be profitable and/or improve industrial safety, for example in the case of gas leaks. Fossil fuels, landfills, waste, and biomass burning emit about 200 Tg/yr, or 35-40% of global methane emissions. Using inexpensive 3D mobile surveys coupled with high-precision isotopic measurement, it should be possible to cut emissions sharply, substantially reducing the methane burden even if tropical biogenic sources increase.
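
    A Keeling plot itself is a one-line regression: the source's isotopic signature is the intercept of δ13C against 1/[CH4]. The sketch below uses synthetic numbers, and ordinary least squares is chosen for simplicity, where careful studies often prefer geometric-mean or York-type regression.

```python
import numpy as np

c_bg, d_bg = 1900.0, -47.0      # background CH4 [ppb] and its d13C [permil]
d_src = -60.0                   # true source signature [permil] (assumption)
rng = np.random.default_rng(6)
c_add = rng.uniform(50.0, 2000.0, 40)            # source-added CH4 [ppb]
c_obs = c_bg + c_add
d_obs = (c_bg * d_bg + c_add * d_src) / c_obs    # isotope mass balance
d_obs += rng.normal(0.0, 0.05, d_obs.size)       # measurement noise

# Keeling regression: d_obs is linear in 1/c_obs; the intercept is d_src.
slope, intercept = np.polyfit(1.0 / c_obs, d_obs, 1)
print("Keeling intercept (source d13C) ~ %.1f permil" % intercept)
```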

  4. Bayesian source term estimation of atmospheric releases in urban areas using LES approach.

    PubMed

    Xue, Fei; Kikumoto, Hideki; Li, Xiaofeng; Ooka, Ryozo

    2018-05-05

    The estimation of source information from limited measurements of a sensor network is a challenging inverse problem, which can be viewed as an assimilation process of the observed concentration data and the predicted concentration data. When dealing with releases in built-up areas, the predicted data are generally obtained by the Reynolds-averaged Navier-Stokes (RANS) equations, which yield building-resolving results; however, RANS-based models are outperformed by large-eddy simulation (LES) in the predictions of both airflow and dispersion. Therefore, it is important to explore the possibility of improving the estimation of the source parameters by using the LES approach. In this paper, a novel source term estimation method is proposed based on the LES approach using Bayesian inference. The source-receptor relationship is obtained by solving the adjoint equations constructed using the time-averaged flow field simulated by the LES approach based on the gradient diffusion hypothesis. A wind tunnel experiment with a constant point source downwind of a single building model is used to evaluate the performance of the proposed method, which is compared with that of the existing method using a RANS model. The results show that the proposed method reduces the errors of source location and releasing strength by 77% and 28%, respectively.

  5. System and method for disrupting suspect objects

    DOEpatents

    Gladwell, T. Scott; Garretson, Justin R; Hobart, Clinton G; Monda, Mark J

    2013-07-09

    A system and method for disrupting at least one component of a suspect object is provided. The system includes a source for passing radiation through the suspect object, a screen for receiving the radiation passing through the suspect object and generating at least one image therefrom, a weapon having a discharge deployable therefrom, and a targeting unit. The targeting unit displays the image(s) of the suspect object and aims the weapon at a disruption point on the displayed image such that the weapon may be positioned to deploy the discharge at the disruption point whereby the suspect object is disabled.

  6. IKT 16: the first X-ray confirmed composite SNR in the SMC

    NASA Astrophysics Data System (ADS)

    Maitra, C.; Ballet, J.; Filipović, M. D.; Haberl, F.; Tiengo, A.; Grieve, K.; Roper, Q.

    2015-12-01

    Aims: IKT 16 is an X-ray- and radio-faint supernova remnant (SNR) in the Small Magellanic Cloud (SMC). A detailed X-ray study of this SNR with XMM-Newton confirmed the presence of a hard X-ray source near its centre, indicating the detection of the first composite SNR in the SMC. With a dedicated Chandra observation we aim to resolve the point source and confirm its nature. We also acquire new ATCA observations of the source at 2.1 GHz with improved flux density estimates and resolution. Methods: We perform detailed spatial and spectral analysis of the source. With the highest-resolution X-ray and radio images of the centre of the SNR available today, we resolve the source and confirm its pulsar wind nebula (PWN) nature. Further, we constrain the geometrical parameters of the PWN and perform spectral analysis for the point source and the PWN separately. We also test for radial variations of the PWN spectrum and its possible east-west asymmetry. Results: The X-ray source at the centre of IKT 16 can be resolved into a symmetrical elongated feature centred on a point source, the putative pulsar. Spatial modelling indicates an extent of 5.2'' for the feature, with its axis inclined at 82° east of north, aligned with a larger radio feature consisting of two lobes almost symmetrical about the X-ray source. The picture is consistent with a PWN which has not yet collided with the reverse shock. The point source is about three times brighter than the PWN and has a hard spectrum with spectral index 1.1, compared to a value of 2.2 for the PWN. This points to the presence of a pulsar dominated by non-thermal emission. The expected Ė is ~10^37 erg s^-1 and the spin period <100 ms. However, the presence of a compact nebula unresolved by Chandra at the distance of the SMC cannot be completely ruled out. The reduced images (FITS files) are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/584/A41

  7. Assessment and Optimization of the Accuracy of an Aircraft-Based Technique Used to Quantify Greenhouse Gas Emission Rates from Point Sources

    NASA Astrophysics Data System (ADS)

    Shepson, P. B.; Lavoie, T. N.; Kerlo, A. E.; Stirm, B. H.

    2016-12-01

    Understanding the contribution of anthropogenic activities to atmospheric greenhouse gas concentrations requires an accurate characterization of emission sources. Previously, we have reported the use of a novel aircraft-based mass balance measurement technique to quantify greenhouse gas emission rates from point and area sources; however, the accuracy of this approach has not been evaluated to date. Here, an assessment of method accuracy and precision was performed by conducting a series of six aircraft-based mass balance experiments at a power plant in southern Indiana and comparing the calculated CO2 emission rates to the hourly emission measurements made by continuous emissions monitoring systems (CEMS) installed directly in the exhaust stacks at the facility. For all flights, CO2 emissions were quantified before the CEMS data were released online, to ensure unbiased analysis. Additionally, we assess the uncertainties introduced into the final emission rate by our analysis method, which employs a statistical kriging model to interpolate and extrapolate the CO2 fluxes across the flight transects from the ground to the top of the boundary layer. Subsequently, using the results from these flights combined with the known emissions reported by the CEMS, we perform an inter-model comparison of alternative kriging methods to evaluate the performance of the kriging approach.

  8. Time-Domain Filtering for Spatial Large-Eddy Simulation

    NASA Technical Reports Server (NTRS)

    Pruett, C. David

    1997-01-01

    An approach to large-eddy simulation (LES) is developed whose subgrid-scale model incorporates filtering in the time domain, in contrast to conventional approaches, which exploit spatial filtering. The method is demonstrated in the simulation of a heated, compressible, axisymmetric jet, and results are compared with those obtained from fully resolved direct numerical simulation. The present approach was, in fact, motivated by the jet-flow problem and the desire to manipulate the flow by localized (point) sources for the purposes of noise suppression. Time-domain filtering appears to be more consistent with the modeling of point sources; moreover, time-domain filtering may resolve some fundamental inconsistencies associated with conventional space-filtered LES approaches.

  9. Instantaneous and time-averaged dispersion and measurement models for estimation theory applications with elevated point source plumes

    NASA Technical Reports Server (NTRS)

    Diamante, J. M.; Englar, T. S., Jr.; Jazwinski, A. H.

    1977-01-01

    Estimation theory, which originated in guidance and control research, is applied to the analysis of air quality measurements and atmospheric dispersion models to provide reliable area-wide air quality estimates. A method for low dimensional modeling (in terms of the estimation state vector) of the instantaneous and time-average pollutant distributions is discussed. In particular, the fluctuating plume model of Gifford (1959) is extended to provide an expression for the instantaneous concentration due to an elevated point source. Individual models are also developed for all parameters in the instantaneous and the time-average plume equations, including the stochastic properties of the instantaneous fluctuating plume.
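
    For orientation, the familiar time-averaged Gaussian plume for an elevated point source with ground reflection is sketched below; the linear growth laws for the dispersion coefficients are generic placeholders of this sketch, not the fluctuating-plume parameterization developed in the paper.

```python
import numpy as np

def plume(x, y, z, Q=1.0, u=5.0, H=50.0, a=0.22, b=0.20):
    """Mean concentration [g/m^3] at (x, y, z) downwind of a stack at height H.

    Q: emission rate [g/s]; u: wind speed [m/s]; sigma_y = a*x and
    sigma_z = b*x are assumed linear growth laws (near-neutral stability).
    """
    sy, sz = a * x, b * x
    lateral = np.exp(-y**2 / (2 * sy**2))
    vertical = (np.exp(-(z - H)**2 / (2 * sz**2))
                + np.exp(-(z + H)**2 / (2 * sz**2)))  # image-source reflection
    return Q / (2 * np.pi * u * sy * sz) * lateral * vertical

# Ground-level centreline concentration 0.5-5 km downwind of the stack.
x = np.linspace(500.0, 5000.0, 10)
print(plume(x, y=0.0, z=0.0, Q=100.0))
```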

  10. High-field neutral beam injection for improving the Q of a gas dynamic trap-based fusion neutron source

    NASA Astrophysics Data System (ADS)

    Zeng, Qiusun; Chen, Dehong; Wang, Minghuang

    2017-12-01

    In order to improve the fusion energy gain (Q) of a gas dynamic trap (GDT)-based fusion neutron source, a method is proposed in which the neutral beam is injected obliquely at a position of higher magnetic field rather than at the mid-plane of the GDT. This method is beneficial for confining a higher density of fast ions at the turning point in the zone of higher magnetic field, as well as for obtaining a higher mirror ratio by reducing the mid-plane field rather than increasing the mirror field. In this situation, collisional scattering loss of fast ions at higher density will occur and change the confinement time, power balance and particle balance. Using an updated calculation model with high-field neutral beam injection for a GDT-based fusion neutron source conceptual design, we obtained four optimized design schemes in which Q was improved two- to three-fold compared with a conventional design scheme, while respecting the limits for avoiding plasma instabilities, especially the fire-hose instability. The distribution of fast ions can be optimized by building a proper magnetic field configuration with enough space for neutron shielding and by multi-beam neutral particle injection at different axial points.

  11. Tapering the sky response for angular power spectrum estimation from low-frequency radio-interferometric data.

    PubMed

    Choudhuri, Samir; Bharadwaj, Somnath; Roy, Nirupam; Ghosh, Abhik; Ali, Sk Saiyad

    2016-06-11

    It is important to correctly subtract point sources from radio-interferometric data in order to measure the power spectrum of diffuse radiation like the Galactic synchrotron or the Epoch of Reionization 21-cm signal. It is computationally very expensive and challenging to image a very large area and accurately subtract all the point sources from the image. The problem is particularly severe at the sidelobes and the outer parts of the main lobe, where the antenna response is highly frequency dependent and the calibration also differs from that of the phase centre. Here, we show that it is possible to overcome this problem by tapering the sky response. Using simulated 150 MHz observations, we demonstrate that it is possible to suppress the contribution due to point sources from the outer parts by using the Tapered Gridded Estimator to measure the angular power spectrum C_ℓ of the sky signal. We also show from the simulation that this method can self-consistently compute the noise bias and accurately subtract it to provide an unbiased estimation of C_ℓ.

  12. A program to calculate pulse transmission responses through transversely isotropic media

    NASA Astrophysics Data System (ADS)

    Li, Wei; Schmitt, Douglas R.; Zou, Changchun; Chen, Xiwei

    2018-05-01

    We provide a program (AOTI2D) to model the responses of ultrasonic pulse transmission measurements through arbitrarily oriented transversely isotropic rocks. The program is built on the distributed point source method, which treats the transducers as a series of point sources. The response of each point source is calculated according to the ray-tracing theory of elastic plane waves. The program offers basic wave parameters, including phase and group velocities, polarization, anisotropic reflection coefficients, and directivity patterns, and models the wave fields, the static wave beam, and the observed signals for pulse transmission measurements, taking into account the material's elastic stiffnesses and orientation, the sample dimensions, and the size and positions of the transmitters and receivers. The program can be applied to exhibit ultrasonic beam behavior in anisotropic media, such as the skew and diffraction of ultrasonic beams, and to analyze their effect on pulse transmission measurements. It should be a useful tool for designing the experimental configuration and interpreting the results of ultrasonic pulse transmission measurements through either isotropic or transversely isotropic rock samples.
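
    The core idea of the distributed point source method is to replace each transducer face by an array of point sources and superpose their contributions at the field points. A minimal sketch for an isotropic fluid is given below; the actual program propagates anisotropic plane-wave rays, so the spherical-wave Green's function and the user-supplied source amplitudes here are simplifying assumptions.

```python
import numpy as np

def dpsm_field(field_pts, src_pts, src_amps, k):
    """Superpose spherical waves exp(ikr)/r from point sources placed on
    the transducer face (field points must not coincide with sources)."""
    r = np.linalg.norm(field_pts[:, None, :] - src_pts[None, :, :], axis=-1)
    g = np.exp(1j * k * r) / r           # free-space Green's function
    return g @ src_amps                  # complex field at each field point
```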

  13. Can satellite-based monitoring techniques be used to quantify volcanic CO2 emissions?

    NASA Astrophysics Data System (ADS)

    Schwandner, Florian M.; Carn, Simon A.; Kuze, Akihiko; Kataoka, Fumie; Shiomi, Kei; Goto, Naoki; Popp, Christoph; Ajiro, Masataka; Suto, Hiroshi; Takeda, Toru; Kanekon, Sayaka; Sealing, Christine; Flower, Verity

    2014-05-01

    Since 2010, we have investigated and improved methods to regularly target volcanic centers from space in order to detect volcanic carbon dioxide (CO2) point-source anomalies, using the Japanese Greenhouse gases Observing SATellite (GOSAT). Our long-term goals are: (a) better spatial and temporal coverage of volcano monitoring techniques; (b) improvement of the currently highly uncertain global CO2 emission inventory for volcanoes; and (c) use of volcanic CO2 emissions for high-altitude, strong point source emission and dispersion studies in atmospheric science. The difficulties posed by strong relief, orographic clouds, and aerosols are minimized by a small field of view, enhanced spectral resolving power, repeat target-mode observation strategies, and comparison with continuous ground-based sensor network validation data. GOSAT is a single-instrument Earth-observing greenhouse gas mission aboard JAXA's IBUKI satellite in sun-synchronous polar orbit. GOSAT's Fourier-transform spectrometer (TANSO-FTS) has been producing total-column XCO2 data since January 2009 at a repeat cycle of 3 days, offering great opportunities for temporal monitoring of point sources. GOSAT's 10 km field of view can spatially integrate an entire volcanic edifice within one 'shot' in precise target mode. While it has no spatial scanning or mapping capability, it does have strong spectral resolving power and agile pointing capability, allowing it to focus on several targets of interest per orbit. Sufficient uncertainty reduction is achieved through comprehensive in-flight vicarious calibration, in close collaboration between NASA and JAXA. Challenges with the on-board pointing mirror system have been compensated for by custom observation planning strategies, including repeat sacrificial upstream reference points to control pointing mirror motion, empirical individualized target offset compensation, and observation pattern simulations to minimize view-angle azimuth. Since summer 2010 we have conducted repeated target-mode observations of almost 40 persistently active volcanoes and other point sources worldwide, including Etna (Italy), Mayon (Philippines), Hawaii (USA), Popocatepetl (Mexico), and Ambrym (Vanuatu), using GOSAT FTS SWIR data. In this presentation we summarize results from over three years of measurements and progress toward understanding detectability with this method. In emerging collaboration with the Deep Carbon Observatory's DECADE program, the World Organization of Volcano Observatories (WOVO) global database of volcanic unrest (WOVOdat), and country-specific observatories and agencies, we see growing potential for ground-based validation synergies. Complementing the ongoing GOSAT mission, NASA is on schedule to launch its OCO-2 satellite in July 2014, which will provide higher spatial but lower temporal resolution. Further orbiting and geostationary satellite sensors are being planned at JAXA, NASA, and ESA.

  14. Fast computation of quadrupole and hexadecapole approximations in microlensing with a single point-source evaluation

    NASA Astrophysics Data System (ADS)

    Cassan, Arnaud

    2017-07-01

    The exoplanet detection rate from gravitational microlensing has grown significantly in recent years thanks to a great enhancement of resources and improved observational strategies. Current observatories include ground-based wide-field and/or robotic world-wide networks of telescopes, as well as space-based observatories such as the Spitzer and Kepler/K2 satellites. This results in a large quantity of data to be processed and analysed, which is a challenge for modelling codes because of the complexity of the parameter space to be explored and the intensive computations required to evaluate the models. In this work, I present a method that computes the quadrupole and hexadecapole approximations of the finite-source magnification more efficiently than previously available codes, with routines about six times and four times faster, respectively. The quadrupole takes just about twice the time of a point-source evaluation, which advocates for generalizing its use to large portions of the light curves. The corresponding routines are available as open-source python codes.
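
    The quadrupole term has a simple geometric origin: averaging a smooth point-source magnification A over a uniform source disc of radius rho gives <A> = A(0) + (rho^2/8) * Laplacian(A) + O(rho^4). The sketch below estimates the Laplacian by central finite differences, costing five point-source calls; Cassan's method obtains the needed derivatives far more cheaply, so this is only a conceptual stand-in with an assumed step size.

```python
def quadrupole_magnification(A_point, ux, uy, rho, eps=1e-4):
    """Uniform-disc quadrupole approximation of the finite-source
    magnification: <A> ~ A(0) + (rho^2 / 8) * Laplacian(A)."""
    a0 = A_point(ux, uy)
    lap = (A_point(ux + eps, uy) + A_point(ux - eps, uy)
           + A_point(ux, uy + eps) + A_point(ux, uy - eps)
           - 4.0 * a0) / eps**2        # 5-point finite-difference Laplacian
    return a0 + rho**2 / 8.0 * lap
```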

  15. The Herschel Virgo Cluster Survey. XVII. SPIRE point-source catalogs and number counts

    NASA Astrophysics Data System (ADS)

    Pappalardo, Ciro; Bendo, George J.; Bianchi, Simone; Hunt, Leslie; Zibetti, Stefano; Corbelli, Edvige; di Serego Alighieri, Sperello; Grossi, Marco; Davies, Jonathan; Baes, Maarten; De Looze, Ilse; Fritz, Jacopo; Pohlen, Michael; Smith, Matthew W. L.; Verstappen, Joris; Boquien, Médéric; Boselli, Alessandro; Cortese, Luca; Hughes, Thomas; Viaene, Sebastien; Bizzocchi, Luca; Clemens, Marcel

    2015-01-01

    Aims: We present three independent catalogs of point sources extracted from SPIRE images at 250, 350, and 500 μm, acquired with the Herschel Space Observatory as part of the Herschel Virgo Cluster Survey (HeViCS). The catalogs have been cross-correlated to consistently extract the photometry at SPIRE wavelengths for each object. Methods: Sources were detected using an iterative loop. The source positions are determined by estimating the likelihood of each peak on the maps being a real source, according to the criterion defined in the sourceExtractorSussextractor task. The flux densities are estimated using sourceExtractorTimeline, a timeline-based point source fitter that also determines the width of the Gaussian that best reproduces the source considered. Afterwards, each source is subtracted from the maps by removing a Gaussian function at every position with the full width at half maximum equal to that estimated in sourceExtractorTimeline. This procedure improves the robustness of our algorithm in terms of source identification. We calculate the completeness and the flux accuracy by injecting artificial sources into the timeline, and estimate the reliability of the catalog using a permutation method. Results: The HeViCS catalogs contain about 52 000, 42 200, and 18 700 sources selected at 250, 350, and 500 μm above 3σ, and are ~75%, 62%, and 50% complete at flux densities of 20 mJy at 250, 350, and 500 μm, respectively. We then measured source number counts at 250, 350, and 500 μm and compared them with previous data and semi-analytical models. We also cross-correlated the catalogs with the Sloan Digital Sky Survey to investigate the redshift distribution of the nearby sources. From this cross-correlation, we selected ~2000 sources with reliable fluxes and a high signal-to-noise ratio, finding an average redshift z ~ 0.3 ± 0.22 and 0.25 (16-84 percentile). Conclusions: The number counts at 250, 350, and 500 μm show an increase in the slope below 200 mJy, indicating a strong evolution in number density for galaxies at these fluxes. In general, models tend to overpredict the counts at brighter flux densities, underlining the importance of studying the Rayleigh-Jeans part of the spectral energy distribution to refine the theoretical recipes of the models. Our iterative method for source identification allowed the detection of a family of 500 μm sources that are not foreground objects belonging to Virgo and are not found in other catalogs. Herschel is an ESA space observatory with science instruments provided by European-led principal investigator consortia and with important participation from NASA. The 250, 350, 500 μm, and total catalogs are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/573/A129

  16. A clustering algorithm for sample data based on environmental pollution characteristics

    NASA Astrophysics Data System (ADS)

    Chen, Mei; Wang, Pengfei; Chen, Qiang; Wu, Jiadong; Chen, Xiaoyun

    2015-04-01

    Environmental pollution has become an issue of serious international concern in recent years. Among receptor-oriented pollution models, CMB, PMF, UNMIX, and PCA are widely used as source apportionment models. To improve the accuracy of source apportionment and to classify the sample data for these models, this study proposes an easy-to-use, high-dimensional EPC algorithm that not only organizes all of the sample data into different groups according to similarities in pollution characteristics, such as pollution sources and concentrations, but also simultaneously detects outliers. The main clustering process consists of selecting the first unlabelled point as a cluster centre, assigning each data point in the sample dataset to its most similar cluster centre according to both a user-defined threshold and the value of the similarity function in each iteration, and finally modifying the clusters using a method similar to k-means, as sketched below. The validity and accuracy of the algorithm are tested using both real and synthetic datasets; the results show that the EPC algorithm is practical and effective for appropriately classifying sample data for source apportionment models and helpful for better understanding and interpreting the sources of pollution.
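
    A minimal sketch of the clustering idea described above follows: the first unlabelled point seeds a cluster, points within a user-defined threshold of the seed join it, and centres are then refined k-means style. Euclidean distance stands in for the paper's similarity function, and treating singleton clusters as outliers is an assumption.

```python
import numpy as np

def epc_cluster(data, threshold):
    """Threshold-seeded clustering with one k-means-style refinement.
    data: (n, d) array; returns labels and cluster centres."""
    labels = np.full(len(data), -1)
    centres = []
    for i in range(len(data)):
        if labels[i] >= 0:
            continue                     # already assigned
        centres.append(data[i].copy())   # seed a new cluster
        c = len(centres) - 1
        d = np.linalg.norm(data - centres[c], axis=1)
        labels[(d < threshold) & (labels < 0)] = c
    for c in range(len(centres)):        # refine centres k-means style
        members = data[labels == c]
        if len(members):
            centres[c] = members.mean(axis=0)
    return labels, np.array(centres)
```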

  17. Poisson denoising on the sphere: application to the Fermi gamma ray space telescope

    NASA Astrophysics Data System (ADS)

    Schmitt, J.; Starck, J. L.; Casandjian, J. M.; Fadili, J.; Grenier, I.

    2010-07-01

    The Large Area Telescope (LAT), the main instrument of the Fermi Gamma-ray Space Telescope, detects high-energy gamma rays with energies from 20 MeV to more than 300 GeV. The two main scientific objectives, the study of the Milky Way diffuse background and the detection of point sources, are complicated by the lack of photons. This motivates a powerful Poisson noise removal method on the sphere that is efficient on low-count Poisson data. This paper presents a new multiscale decomposition on the sphere for data with Poisson noise, called the multi-scale variance stabilizing transform on the sphere (MS-VSTS). This method is based on a variance stabilizing transform (VST), a transform which aims to stabilize a Poisson data set such that each stabilized sample has a quasi-constant variance. In addition, for the VST used in the method, the transformed data are asymptotically Gaussian. MS-VSTS consists of decomposing the data into a sparse multi-scale dictionary such as wavelets or curvelets, and then applying a VST to the coefficients in order to obtain almost Gaussian stabilized coefficients. In this work, we use the isotropic undecimated wavelet transform (IUWT) and the curvelet transform as spherical multi-scale transforms. Binary hypothesis testing is then carried out to detect significant coefficients, and the denoised image is reconstructed with an iterative algorithm based on hybrid steepest descent (HSD). To detect point sources, we have to extract the Galactic diffuse background: an extension of the method to background separation is therefore proposed. Conversely, to study the Milky Way diffuse background, we remove point sources with a binary mask; the gaps then have to be interpolated, and an extension to inpainting is proposed. The method, applied to simulated Fermi LAT data, proves to be adaptive, fast, and easy to implement.
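
    The classical one-dimensional example of such a VST is the Anscombe transform: for a Poisson count x, the quantity 2*sqrt(x + 3/8) has approximately unit variance and is close to Gaussian. The sketch below shows this pointwise version only; MS-VSTS embeds the stabilization inside the wavelet/curvelet decomposition rather than applying it directly to pixels.

```python
import numpy as np

def anscombe(x):
    """Variance-stabilizing transform for Poisson counts: the output has
    variance close to 1 for moderate-to-large counts."""
    return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse (biased at very low counts, which is one
    reason a more careful multiscale construction is used in practice)."""
    return (y / 2.0) ** 2 - 3.0 / 8.0
```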

  18. Reply to comment on "Validation of two innovative methods to measure contaminant mass flux in groundwater" by Goltz et al. (2009)

    NASA Astrophysics Data System (ADS)

    Goltz, Mark N.; Huang, Junqi

    2014-12-01

    We thank Sun (2014) for his comment on our paper, Goltz et al. (2009). The commenter basically makes two points: (1) equation (6) in Goltz et al. (2009) is incorrect, and (2) screen loss should be further considered as a source of error in the modified integral pump test (MIPT) experiment. We will address each of these points, below.

  19. Volume 2 - Point Sources

    EPA Pesticide Factsheets

    Point source emission reference materials from the Emissions Inventory Improvement Program (EIIP). Provides point source guidance on planning, emissions estimation, data collection, inventory documentation and reporting, and quality assurance/quality control.

  20. Material point method of modelling and simulation of reacting flow of oxygen

    NASA Astrophysics Data System (ADS)

    Mason, Matthew; Chen, Kuan; Hu, Patrick G.

    2014-07-01

    Aerospace vehicles are continually being designed to sustain flight at higher speeds and higher altitudes than previously attainable. At hypersonic speeds, gases within a flow begin to chemically react and the fluid's physical properties are modified. It is desirable to model these effects within the Material Point Method (MPM). The MPM is a combined Eulerian-Lagrangian particle-based solver that calculates the physical properties of individual particles and uses a background grid for information storage and exchange. This study introduces chemically reacting flow modelling within the MPM numerical algorithm and illustrates a simple application using the AeroElastic Material Point Method (AEMPM) code. The governing equations of reacting flows are introduced and their direct application within an MPM code is discussed. A flow of 100% oxygen is illustrated and the results are compared with independently developed computational non-equilibrium algorithms. Observed trends agree well with results from an independently developed source.

  1. Challenging the distributed temperature sensing technique for estimating groundwater discharge to streams through controlled artificial point source experiment

    NASA Astrophysics Data System (ADS)

    Lauer, F.; Frede, H.-G.; Breuer, L.

    2012-04-01

    Spatially confined groundwater discharge can contribute significantly to stream discharge. Distributed fibre-optic temperature sensing (DTS) of stream water has been used successfully to localize and quantify groundwater discharge from such "point sources" (PS) in small first-order streams. During periods when stream and groundwater temperatures differ, a PS appears as an abrupt step in the longitudinal stream water temperature distribution. Based on stream temperature observations up- and downstream of a point source and an estimated or measured groundwater temperature, the proportion of groundwater inflow to stream discharge can be quantified using simple mixing models. However, so far this method has not been quantitatively verified, nor has a detailed uncertainty analysis of the method been conducted. The relative accuracy of the method is expected to decrease nonlinearly with decreasing proportions of lateral inflow. Furthermore, it depends on the temperature difference (ΔT) between groundwater and surface water and on the accuracy of the temperature measurement itself. The latter can be affected by several sources of error; for example, it has been shown that direct solar radiation on fibre-optic cables can lead to errors in temperature measurements in small streams owing to low water depth. Considerable uncertainty may also be related to the determination of groundwater temperature through direct measurements or derived from the DTS signal. In order to validate the method directly and assess its uncertainty, we performed a set of artificial point source experiments with controlled lateral inflow rates to a natural stream. The experiments were carried out at the Vollnkirchener Bach, a small headwater stream in Hessen, Germany, in November and December 2011 during a low-flow period. A DTS system was installed along a 1.2 km sub-reach of the stream. Stream discharge was measured using a gauging flume installed directly upstream of the artificial PS. Lateral inflow was simulated using a pumping system connected to a 2 m3 water tank. Pumping rates were controlled using a magnetic inductive flowmeter and kept constant for periods of 30 minutes to 1.5 hours, depending on the simulated inflow rate. Different temperatures of lateral inflow were adjusted by heating the water in the tank (for summer experiments, cooling with ice could be realized). With this setup, different proportions of lateral inflow to stream flow, ranging from 2 to 20%, could be simulated for different ΔT values (2-7 °C) between stream water and inflowing water. Results indicate that the estimation of groundwater discharge through DTS works properly, but that the method is very sensitive to the determination of the PS groundwater temperature. The span of adjusted ΔT values and inflow rates of the artificial system is currently being used to perform a thorough uncertainty analysis of the DTS method and to derive thresholds for detection limits.
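
    The two-component mixing model at the heart of the method reduces to a one-line balance: if the stream temperature changes from T_up to T_down across a point source with groundwater temperature T_gw, the groundwater fraction of downstream discharge is (T_down - T_up) / (T_gw - T_up). A sketch with illustrative numbers (not values from the experiment):

```python
def groundwater_fraction(t_up, t_down, t_gw):
    """Two-component mixing at a point source:
    Q_gw / Q_down = (T_down - T_up) / (T_gw - T_up)."""
    return (t_down - t_up) / (t_gw - t_up)

# Example: the stream cools from 8.0 to 7.6 degC past a 4.0 degC inflow:
# (7.6 - 8.0) / (4.0 - 8.0) = 0.10, i.e. about 10 % groundwater.
print(groundwater_fraction(8.0, 7.6, 4.0))
```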

  2. Simulation of scattered fields: Some guidelines for the equivalent source method

    NASA Astrophysics Data System (ADS)

    Gounot, Yves J. R.; Musafir, Ricardo E.

    2011-07-01

    Three variants of the equivalent source method for simulating scattered fields are compared: two of them deal with monopole sets, the other with multipole expansions. In the first monopole approach, the sources have fixed positions given by specific rules, while in the second (ESGA), the optimal positions are determined via a genetic algorithm. The pros and cons of each approach are discussed with the aim of providing practical guidelines for the user. It is shown that while both monopole techniques furnish quite good pressure field reconstructions with simple source arrangements, ESGA requires significantly fewer monopoles and, for an equal number of sources, yields better precision. As for the multipole technique, its main advantage is that in principle any precision can be reached, provided the source order is sufficiently high. On the other hand, the results point out that the lack of rules for determining the multipole order necessary for a desired precision may constitute a handicap for the user.
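
    In the fixed-position monopole variant, the equivalent source amplitudes are typically obtained by a least-squares fit that makes the superposed monopole field match the prescribed pressure on the scatterer boundary. A minimal sketch under that assumption (the paper's specific placement rules and the ESGA genetic search are not reproduced):

```python
import numpy as np

def esm_monopole_amplitudes(boundary_pts, src_pts, p_target, k):
    """Least-squares monopole amplitudes for the equivalent source method:
    solve G a ~ p_target, with G the free-space Green's function matrix."""
    r = np.linalg.norm(boundary_pts[:, None, :] - src_pts[None, :, :], axis=-1)
    G = np.exp(1j * k * r) / (4.0 * np.pi * r)
    amps, *_ = np.linalg.lstsq(G, p_target, rcond=None)
    return amps   # reconstructed field elsewhere: G_field @ amps
```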

  3. Absorbed dose calculations in a brachytherapy pelvic phantom using the Monte Carlo method

    PubMed Central

    Rodríguez, Miguel L.; deAlmeida, Carlos E.

    2002-01-01

    Monte Carlo calculations of the absorbed dose at various points of a brachytherapy anthropomorphic phantom are presented. The phantom walls and internal structures are made of polymethylmethacrylate, and its external shape was taken from a female Alderson phantom. A complete Fletcher‐Green type applicator with the uterine tandem was fixed at the bottom of the phantom, reproducing a typical geometrical configuration such as that attained in a gynecological brachytherapy treatment. The dose rate produced by an array of five 137Cs CDC‐J type sources placed in the applicator colpostats and the uterine tandem was evaluated by Monte Carlo simulations using the code penelope at three points: point A, the rectum, and the bladder. The influence of the applicator on the dose rate was evaluated by comparing Monte Carlo simulations of the sources alone and of the sources inserted in the applicator. Differences of up to 56% in the dose may be observed between the two cases in the planes including the rectum and bladder. The results show a reduction of the dose of 15.6%, 14.0%, and 5.6% in the rectum, bladder, and point A, respectively, when the applicator wall and shielding are considered. PACS number(s): 87.53Jw, 87.53.Wz, 87.53.Vb, 87.66.Xa PMID:12383048

  4. Orientation of airborne laser scanning point clouds with multi-view, multi-scale image blocks.

    PubMed

    Rönnholm, Petri; Hyyppä, Hannu; Hyyppä, Juha; Haggrén, Henrik

    2009-01-01

    Comprehensive 3D modeling of our environment requires the integration of terrestrial and airborne data, which is collected, preferably, using laser scanning and photogrammetric methods. However, integration of these multi-source data requires accurate relative orientations. In this article, two methods for solving relative orientation problems are presented. The first method performs registration by minimizing the distances between an airborne laser point cloud and a 3D model. The 3D model was derived from photogrammetric measurements and terrestrial laser scanning points. The first method was used as a reference and for validation. Having completed registration in the object space, the relative orientation between images and the laser point cloud is known. The second method utilizes an interactive orientation method between a multi-scale image block and a laser point cloud. The multi-scale image block includes both aerial and terrestrial images. Experiments with the multi-scale image block revealed that the accuracy of the relative orientation increased when more images were included in the block. The orientations of the first and second methods were compared. The comparison showed that correct rotations were the most difficult to detect accurately using the interactive method. Because the interactive method forces the laser scanning data to fit the images, inaccurate rotations cause corresponding shifts in image positions. However, in a test case in which the orientation differences included only shifts, the interactive method could solve the relative orientation of an aerial image and airborne laser scanning data repeatedly to within a couple of centimeters.


  6. Change-point detection of induced and natural seismicity

    NASA Astrophysics Data System (ADS)

    Fiedler, B.; Holschneider, M.; Zoeller, G.; Hainzl, S.

    2016-12-01

    Earthquake rates are influenced by tectonic stress buildup, earthquake-induced stress changes, and transient aseismic sources. While the first two sources can be well modeled because the source is known, transient aseismic processes are more difficult to detect. However, detecting the associated changes in earthquake activity is of great interest, because it might help to identify natural aseismic deformation patterns (such as slow slip events) and the occurrence of induced seismicity related to human activities. We develop a Bayesian approach to detect change-points in seismicity data that are modeled by Poisson processes. By means of a likelihood-ratio test, we assess the significance of the change in intensity. The model is also extended to spatiotemporal data to detect the area of the transient changes. The method is first tested on synthetic data and then applied to observational data from the central US and the Bardarbunga volcano in Iceland.
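
    For a single change-point in a Poisson count series, the frequentist version of this test is easy to state: compare the log-likelihood of a two-rate model (split at k) against a one-rate model, scanning all k. The sketch below implements that scan; it is a simplified stand-in for the paper's Bayesian treatment, and the chi-square calibration of the statistic is an assumption.

```python
import numpy as np

def poisson_changepoint(counts):
    """Return the split index maximizing the likelihood ratio of a
    two-rate vs. one-rate Poisson model (constant terms cancel)."""
    n = len(counts)

    def loglik(c, m):
        lam = c.sum() / m                # MLE of the rate on the segment
        return c.sum() * np.log(lam) - m * lam if lam > 0 else 0.0

    ll0 = loglik(counts, n)
    best_k, best_lr = None, -np.inf
    for k in range(1, n):
        lr = loglik(counts[:k], k) + loglik(counts[k:], n - k) - ll0
        if lr > best_lr:
            best_k, best_lr = k, lr
    return best_k, 2.0 * best_lr         # 2*LR ~ chi-square under H0
```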

  7. A novel solution for LED wall lamp design and simulation

    NASA Astrophysics Data System (ADS)

    Ge, Rui; Hong, Weibin; Li, Kuangqi; Liang, Pengxiang; Zhao, Fuli

    2014-11-01

    A model of a wall-washer lamp and a practical illumination application have been established with a new lens design that meets the uniform-illumination demand of wall-washer lamps based on Lambertian light sources. Our secondary optical design of a freeform-surface lens for the LED wall-washer lamp, based on the law of energy conservation and Snell's law, improves the lighting effect toward uniform illumination. Using the relationship between the surface of the lens and the surface of the target, a large number of discrete points of the freeform profile curve were obtained through an iterative method. After importing the data into our modeling program, the optical entity was obtained. Finally, to verify the feasibility of the algorithm, the model was simulated with specialized software, using both an LED Lambertian point-source model and an LED panel-source model.

  8. Generating Accurate 3d Models of Architectural Heritage Structures Using Low-Cost Camera and Open Source Algorithms

    NASA Astrophysics Data System (ADS)

    Zacharek, M.; Delis, P.; Kedzierski, M.; Fryskowska, A.

    2017-05-01

    These studies were conducted using a non-metric digital camera and dense image matching algorithms as non-contact methods of creating monument documentation. In order to process the imagery, several open-source software packages and algorithms for generating a dense point cloud from images were used. In the research, OSM Bundler, the VisualSFM software, and the web application ARC3D were used. Images obtained for each of the investigated objects were processed using those applications, and then dense point clouds and textured 3D models were created. As a result of post-processing, the obtained models were filtered and scaled. The research showed that even using open-source software it is possible to obtain accurate 3D models of structures (with an accuracy of a few centimeters), but for the purpose of documentation and conservation of cultural and historical heritage, such accuracy can be insufficient.

  9. Singularity and Bohm criterion in hot positive ion species in the electronegative ion sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aslaninejad, Morteza; Yasserian, Kiomars

    2016-05-15

    The structure of the discharge for a magnetized electronegative ion source with two species of positive ions is investigated. The thermal motion of hot positive ions and the singularities associated with it are taken into account. By analytical solution of the neutral region, the location of the singular point and the values of plasma parameters such as the electric potential and ion density at the singular point are obtained. A generalized Bohm criterion is recovered and discussed. In addition, for the non-neutral solution, a numerical method is used. In contrast with cold-ion plasma, qualitative changes are observed. The parameter space region within which oscillations in the density and potential can be observed has been scanned and discussed. The space charge behavior in the vicinity of the edge of the ion sources is also discussed in detail.

  10. Mesh-free distributed point source method for modeling viscous fluid motion between disks vibrating at ultrasonic frequency.

    PubMed

    Wada, Yuji; Kundu, Tribikram; Nakamura, Kentaro

    2014-08-01

    The distributed point source method (DPSM) is extended to model wave propagation in viscous fluids. Appropriate estimation of attenuation and boundary layer formation due to fluid viscosity is necessary for ultrasonic devices used for acoustic streaming or ultrasonic levitation. The equations for DPSM modeling in viscous fluids are derived in this paper by decomposing the linearized viscous fluid equations into two components: dilatational and rotational. By considering complex P- and S-wave numbers, the acoustic fields in viscous fluids can be calculated following calculation steps similar to those used for wave propagation modeling in solids. From the calculations reported, the precision of DPSM is found to be comparable to that of the finite element method (FEM) for a fundamental ultrasonic field problem. The particle velocity parallel to the two bounding surfaces of the viscous fluid layer between two rigid plates (one in motion and one stationary) is calculated. The finite element results agree well with the DPSM results, which were generated faster than the transient FEM results.

  11. Synthesis of nanocrystalline CeO2 particles by different emulsion methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Supakanapitak, Sunisa; Boonamnuayvitaya, Virote; Jarudilokkul, Somnuk, E-mail: somnuk.jar@kmutt.ac.th

    2012-05-15

    Cerium oxide nanoparticles were synthesized using three different emulsion methods: (1) reversed micelle (RM); (2) emulsion liquid membrane (ELM); and (3) colloidal emulsion aphrons (CEAs). Ammonium cerium nitrate and polyoxyethylene-4-lauryl ether (PE4LE) were used as the cerium and surfactant sources in this study. The powder was calcined at 500 °C to obtain CeO2. The effect of the preparation procedure on the particle size, surface area, and morphology of the prepared powders was investigated. The obtained powders are highly crystalline and nearly spherical in shape. The average particle size and the specific surface area of the powders from the three methods were in the range of 4-10 nm and 5.32-145.73 m2/g, respectively. The CeO2 powders synthesized by the CEAs method have the smallest average particle size and the highest surface area. Finally, CeO2 prepared by the CEAs method using different cerium sources and surfactant types was studied. It was found that the surface tension of the cerium solution and the type of surfactant affect the particle size of CeO2. Graphical Abstract: the emulsion droplet size distribution and TEM images of CeO2 prepared by the different methods: reversed micelle (RM), emulsion liquid membrane (ELM), and colloidal emulsion aphrons (CEAs). Highlights: Nano-sized CeO2 was successfully prepared by three different emulsion methods. The colloidal emulsion aphrons method produces CeO2 with the highest surface area. The surface tension of the cerium solution has only a slight effect on the particle size. The size control could be interpreted in terms of the adsorption of the surfactant.

  12. Opacity meter for monitoring exhaust emissions from non-stationary sources

    DOEpatents

    Dec, John Edward

    2000-01-01

    Method and apparatus for determining the opacity of exhaust plumes from moving emissions sources. In operation, a light source is activated at a time prior to the arrival of a diesel locomotive at a measurement point, by means of a track trigger switch or the Automatic Equipment Identification system, such that the opacity measurement is synchronized with the passage of an exhaust plume past the measurement point. A beam of light from the light source passes through the exhaust plume of the locomotive and is detected by a suitable detector, preferably a high-rate photodiode. The light beam is well-collimated and is preferably monochromatic, permitting the use of a narrowband pass filter to discriminate against background light. In order to span a double railroad track and provide a beam which is substantially stronger than background, the light source, preferably a diode laser, must provide a locally intense beam. A high intensity light source is also desirable in order to increase accuracy at the high sampling rates required. Also included is a computer control system useful for data acquisition, manipulation, storage and transmission of opacity data and the identification of the associated diesel engine to a central data collection center.

  13. CO2 mitigation potential of mineral carbonation with industrial alkalinity sources in the United States.

    PubMed

    Kirchofer, Abby; Becker, Austin; Brandt, Adam; Wilcox, Jennifer

    2013-07-02

    The availability of industrial alkalinity sources is investigated to determine their potential for the simultaneous capture and sequestration of CO2 from point-source emissions in the United States. Industrial alkalinity sources investigated include fly ash, cement kiln dust, and iron and steel slag. Their feasibility for mineral carbonation is determined by their relative abundance for CO2 reactivity and their proximity to point-source CO2 emissions. In addition, the available aggregate markets are investigated as possible sinks for mineral carbonation products. We show that in the U.S., industrial alkaline byproducts have the potential to mitigate approximately 7.6 Mt CO2/yr, of which 7.0 Mt CO2/yr are CO2 captured through mineral carbonation and 0.6 Mt CO2/yr are CO2 emissions avoided through reuse as synthetic aggregate (replacing sand and gravel). The emission reductions represent a small share (i.e., 0.1%) of total U.S. CO2 emissions; however, industrial byproducts may represent comparatively low-cost methods for the advancement of mineral carbonation technologies, which may be extended to more abundant yet expensive natural alkalinity sources.

  14. User's guide for RAM. Volume II. Data preparation and listings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, D.B.; Novak, J.H.

    1978-11-01

    The information presented in this user's guide is directed to air pollution scientists having an interest in applying air quality simulation models. RAM is a method of estimating short-term dispersion using the Gaussian steady-state model. These algorithms can be used for estimating air quality concentrations of relatively nonreactive pollutants for averaging times from an hour to a day from point and area sources. The algorithms are applicable for locations with level or gently rolling terrain where a single wind vector for each hour is a good approximation to the flow over the source area considered. Calculations are performed for each hour. Hourly meteorological data required are wind direction, wind speed, temperature, stability class, and mixing height. Emission information required of point sources consists of source coordinates, emission rate, physical height, stack diameter, stack gas exit velocity, and stack gas temperature. Emission information required of area sources consists of southwest corner coordinates, source side length, total area emission rate and effective area source-height. Computation time is kept to a minimum by the manner in which concentrations from area sources are estimated using a narrow plume hypothesis and using the area source squares as given rather than breaking down all sources into an area of uniform elements. Options are available to the user to allow use of three different types of receptor locations: (1) those whose coordinates are input by the user, (2) those whose coordinates are determined by the model and are downwind of significant point and area sources where maxima are likely to occur, and (3) those whose coordinates are determined by the model to give good area coverage of a specific portion of the region. Computation time is also decreased by keeping the number of receptors to a minimum. Volume II presents RAM example outputs, typical run streams, variable glossaries, and Fortran source codes.

  15. Getting over Epistemology and Treating Theory as a Recyclable Source of "Things"

    ERIC Educational Resources Information Center

    Kusznirczuk, John

    2012-01-01

    This paper challenges the way in which we are inclined to treat theory and suggests that our tendency to privilege it over method is counterproductive. Some consequences of privileging theory are pointed out and a remedy is proposed. The remedy entails a number of "reversals" in the way we treat theory and method in maths education research, the…

  16. Designing a freeform optic for oblique illumination

    NASA Astrophysics Data System (ADS)

    Uthoff, Ross D.; Ulanch, Rachel N.; Williams, Kaitlyn E.; Ruiz Diaz, Liliana; King, Page; Koshel, R. John

    2017-11-01

    The Functional Freeform Fitting (F4) method is utilized to design a freeform optic for oblique illumination of Mark Rothko's Green on Blue (1956). Shown are preliminary results from an iterative freeform design process: from problem definition and specification development to surface fit, ray tracing results, and optimization. This method is applicable to both point and extended sources of various geometries.

  17. Point focusing using loudspeaker arrays from the perspective of optimal beamforming.

    PubMed

    Bai, Mingsian R; Hsieh, Yu-Hao

    2015-06-01

    Sound focusing aims to create a concentrated acoustic field in a region surrounded by a loudspeaker array. This problem was tackled in previous research via the Helmholtz integral approach, brightness control, acoustic contrast control, etc. In this paper, the same problem is revisited from the perspective of beamforming. A source array model is reformulated in terms of the steering matrix between the source and the field points, which lends itself to the use of beamforming algorithms such as minimum variance distortionless response (MVDR) and linearly constrained minimum variance (LCMV), originally intended for sensor arrays. The beamforming methods are compared with the conventional methods in terms of beam pattern, directivity index, and control effort. Objective tests are conducted to assess audio quality using perceptual evaluation of audio quality (PEAQ). Experiments on the produced sound field and listening tests are conducted in a listening room, with results processed using analysis of variance and regression analysis. In contrast to the conventional energy-based methods, the results show that the proposed methods are phase-sensitive, owing to the distortionless constraint in the formulation of the array filters, which helps enhance audio quality and focusing performance.
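
    The MVDR weights mentioned above have a closed form: minimizing the output power w^H R w subject to the distortionless constraint w^H a = 1 gives w = R^{-1} a / (a^H R^{-1} a), where R is the correlation matrix over the field points and a is the steering vector to the focal point. A minimal sketch under those standard definitions:

```python
import numpy as np

def mvdr_weights(R, a):
    """MVDR solution: w = R^{-1} a / (a^H R^{-1} a).
    R: (n, n) Hermitian correlation matrix; a: (n,) steering vector."""
    Ria = np.linalg.solve(R, a)          # R^{-1} a without explicit inverse
    return Ria / (a.conj() @ Ria)
```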

  18. Pasture-scale methane emissions of grazing cattle

    USDA-ARS?s Scientific Manuscript database

    Grazing cattle are mobile point sources of methane and present challenges for quantifying emissions using noninterfering micrometeorological methods. Stocking density is low, and cattle can bunch up or disperse over a wide area, so knowing cattle locations is critical. The methane concentration downwind ...

  19. Watershed Management Tool for Selection and Spatial Allocation of Non-Point Source Pollution Control Practices

    EPA Science Inventory

    Distributed-parameter watershed models are often utilized for evaluating the effectiveness of sediment and nutrient abatement strategies through the traditional {calibrate → validate → predict} approach. The applicability of the method is limited due to modeling approximations. In ...

  20. Heavy metal transport in large river systems: heavy metal emissions and loads in the Rhine and Elbe river basins

    NASA Astrophysics Data System (ADS)

    Vink, Rona; Behrendt, Horst

    2002-11-01

    Pollutant transport and management in the Rhine and Elbe basins are still of international concern, since certain target levels set by the international committees for the protection of both rivers have not been reached. The analysis of the chain from emissions of point and diffuse sources to river loads will provide policy makers with a tool for effective management of river basins. The analysis of large river basins such as the Elbe and Rhine requires information on the spatial and temporal characteristics of emissions as well as physical information on the entire river basin. In this paper, an analysis has been made of heavy metal emissions from various point and diffuse sources in the Rhine and Elbe drainage areas. Different point and diffuse pathways are considered in the model, such as inputs from industry, wastewater treatment plants, urban areas, erosion, groundwater, atmospheric deposition, tile drainage, and runoff. In most cases the measured heavy metal loads at monitoring stations are lower than the sum of the heavy metal emissions. This behaviour in large river systems can largely be explained by retention processes (e.g. sedimentation) and is dependent on the specific runoff of a catchment. Independent of the method used to estimate emissions, a source apportionment analysis of the observed loads was used to determine the share of point and diffuse sources in the heavy metal load at a monitoring station by establishing a discharge dependency. The results from the emission analysis and the source apportionment analysis of observed loads were compared and gave similar results. Between 51% (for Hg) and 74% (for Pb) of the total transport in the Elbe basin is supplied by inputs from diffuse sources. In the Rhine basin, diffuse source inputs dominate the total transport and deliver more than 70% of it. The diffuse hydrological pathways with the highest share are erosion and urban areas.

  1. Three-dimensional displacement measurement of image point by point-diffraction interferometry

    NASA Astrophysics Data System (ADS)

    He, Xiao; Chen, Lingfeng; Meng, Xiaojie; Yu, Lei

    2018-01-01

    This paper presents a method for measuring the three-dimensional (3-D) displacement of an image point based on point-diffraction interferometry. An object point light source (PLS) interferes with a fixed PLS, and the interferograms are captured at the exit pupil. When the image point of the object PLS is slightly shifted to a new position, the wavefront of the image PLS changes, and so do its interferograms. By processing the interferograms captured before and after the movement, the wavefront difference of the image PLS can be obtained; it contains the information on the 3-D displacement of the image PLS. However, the 3-D displacement cannot be calculated until the distance between the image PLS and the exit pupil is calibrated. Therefore, we use a plane-parallel plate with a known refractive index and thickness to determine this distance, based on Snell's law for small angles of incidence. Since the distance between the exit pupil and the image PLS is then a known quantity, the 3-D displacement of the image PLS can be calculated from two interference measurements. Preliminary experimental results indicate that the relative error is below 0.3%. With the ability to accurately locate an image point (whether real or virtual), a fiber point light source can act as a reticle by itself in optical measurement.
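
    The small-angle plate relation underlying the calibration is standard: a plane-parallel plate of thickness t and refractive index n shifts an image point longitudinally by about t(1 - 1/n). A one-line sketch of that relation follows; how the paper combines this shift with the two interferometric measurements to recover the pupil distance is not reproduced here.

```python
def plate_image_shift(t, n):
    """Longitudinal image shift from a plane-parallel plate at small
    angles of incidence: dz = t * (1 - 1/n)."""
    return t * (1.0 - 1.0 / n)

# Example: a 5 mm plate with n ~ 1.52 shifts the image by ~1.71 mm.
print(plate_image_shift(5.0, 1.52))
```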

  2. SU-E-T-259: Particle Swarm Optimization in Radial Dose Function Fitting for a Novel Iodine-125 Seed

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, X; Duan, J; Popple, R

    2014-06-01

    Purpose: To determine the coefficients of bi- and tri-exponential functions for the best fit of the radial dose functions of the new iodine brachytherapy source, the Iodine-125 seed AgX-100. Methods: The particle swarm optimization (PSO) method was used to search for the coefficients of the bi- and tri-exponential functions that yield the best fit to data published for a few selected radial distances from the source. The coefficients were encoded into particles, and these particles move through the search space by following their local and global best-known positions. In each generation, particles were evaluated through their fitness function and their positions were changed through their velocities. This procedure was repeated until the convergence criterion was met or the maximum generation was reached. All best particles were found in fewer than 1,500 generations. Results: For the I-125 seed AgX-100 considered as a point source, the maximum deviation from the published data is less than 2.9% for the bi-exponential fitting function and 0.2% for the tri-exponential fitting function. For its line source, the maximum deviation is less than 1.1% for the bi-exponential fitting function and 0.08% for the tri-exponential fitting function. Conclusion: PSO is a powerful method for searching for the coefficients of bi-exponential and tri-exponential fitting functions. The bi- and tri-exponential models of the Iodine-125 seed AgX-100 point and line sources obtained with PSO optimization provide accurate analytical forms of the radial dose function. The tri-exponential fitting function is more accurate than the bi-exponential function.
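
    A compact PSO of the kind described is easy to sketch: each particle encodes the four coefficients of a bi-exponential model g(r) = a1 exp(-b1 r) + a2 exp(-b2 r), and velocities are driven toward the personal and global best positions. The inertia and acceleration constants below are conventional defaults, not the values used in the abstract.

```python
import numpy as np

def pso_biexp_fit(r, g_obs, n_particles=50, n_iter=1500, seed=0):
    """Fit g(r) ~ a1*exp(-b1*r) + a2*exp(-b2*r) by particle swarm search."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-2.0, 2.0, (n_particles, 4))     # [a1, b1, a2, b2]
    v = np.zeros_like(x)

    def cost(p):
        fit = (p[:, 0:1] * np.exp(-p[:, 1:2] * r)
               + p[:, 2:3] * np.exp(-p[:, 3:4] * r))
        return ((fit - g_obs) ** 2).sum(axis=1)      # SSE per particle

    pbest, pcost = x.copy(), cost(x)
    gbest = pbest[pcost.argmin()]
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, 4))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        c = cost(x)
        improved = c < pcost
        pbest[improved], pcost[improved] = x[improved], c[improved]
        gbest = pbest[pcost.argmin()]
    return gbest
```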

  3. Imaging Young Stellar Objects with VLTi/PIONIER

    NASA Astrophysics Data System (ADS)

    Kluska, J.; Malbet, F.; Berger, J.-P.; Benisty, M.; Lazareff, B.; Le Bouquin, J.-B.; Baron, F.; Dominik, C.; Isella, A.; Juhasz, A.; Kraus, S.; Lachaume, R.; Ménard, F.; Millan-Gabet, R.; Monnier, J.; Pinte, C.; Soulez, F.; Tallon, M.; Thi, W.-F.; Thiébaut, É.; Zins, G.

    2014-04-01

    Optical interferometric imaging is designed to help reveal complex astronomical sources without a prior model. Among these complex objects are young stars and their environments, which typically comprise a point-like source surrounded by circumstellar material of unknown morphology. To image them, we have developed a numerical method that completely removes the stellar point source and reconstructs the rest of the image, using the differences in spectral behavior between the star and its circumstellar material. We aim to reveal the first astronomical units of these objects, where many physical phenomena may interplay: dust sublimation causing a puffed-up inner rim, a dusty halo, a dusty wind, or an inner gaseous component. To investigate these regions more deeply, we carried out the first Large Program survey of HAeBe stars with two main goals: statistics on the geometry of these objects at the scale of the first astronomical unit, and imaging of their very close environment. The images reveal the environment unpolluted by the star and allow us to derive the best fit for the flux ratio and the spectral slope. We present the first images from this survey and the application of the imaging method to other astronomical objects.

  4. Solution of the weighted symmetric similarity transformations based on quaternions

    NASA Astrophysics Data System (ADS)

    Mercan, H.; Akyilmaz, O.; Aydin, C.

    2017-12-01

    A new method based on the Gauss-Helmert model of adjustment is presented for the solution of similarity transformations, either 3D or 2D, in the framework of the errors-in-variables (EIV) model. The EIV model assumes that all variables in the mathematical model are contaminated by random errors. The total least squares estimation technique may be used to solve the EIV model. Accounting for heteroscedastic uncertainty in both the target and the source coordinates, which is the more common and general case in practice, leads to a more realistic estimation of the transformation parameters. The presented algorithm can handle heteroscedastic transformation problems; i.e., the positions of both the target and the source points may have full covariance matrices. Therefore, there is no limitation such as isotropic or homogeneous accuracy for the reference point coordinates. The developed algorithm takes advantage of the quaternion definition, which uniquely represents a 3D rotation matrix. The transformation parameters (scale, translations, and the quaternion, and hence the rotation matrix), along with their covariances, are iteratively estimated with rapid convergence. Moreover, a prior least squares (LS) estimate of the unknown transformation parameters is not required to start the iterations. We also show that the developed method can be used to estimate 2D similarity transformation parameters by simply treating the problem as a 3D transformation with zero values assigned to the z-components of both the target and source points. The efficiency of the new algorithm is demonstrated with numerical examples and comparisons with the results of previous studies that use the same data set. Simulation experiments for the evaluation and comparison of the proposed method and the conventional weighted LS (WLS) method are also presented.
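
    For the unweighted, isotropic special case the quaternion solution is closed-form (Horn's method): build a 4x4 matrix from the cross-covariance of the centred point sets; its dominant eigenvector is the rotation quaternion, after which scale and translation follow directly. The sketch below shows this LS special case only; the paper's contribution is the iterative weighted Gauss-Helmert generalization, which is not reproduced here.

```python
import numpy as np

def similarity_horn(src, dst):
    """Closed-form 3D similarity transform dst ~ s * R @ src + t
    (unweighted least squares via the quaternion eigenvector)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    S, D = src - cs, dst - cd
    M = S.T @ D                                  # cross-covariance
    delta = np.array([M[1, 2] - M[2, 1], M[2, 0] - M[0, 2], M[0, 1] - M[1, 0]])
    N = np.empty((4, 4))
    N[0, 0] = np.trace(M)
    N[0, 1:] = N[1:, 0] = delta
    N[1:, 1:] = M + M.T - np.trace(M) * np.eye(3)
    q = np.linalg.eigh(N)[1][:, -1]              # dominant eigenvector
    w, x, y, z = q                               # unit quaternion
    R = np.array([[1 - 2*(y*y + z*z), 2*(x*y - w*z), 2*(x*z + w*y)],
                  [2*(x*y + w*z), 1 - 2*(x*x + z*z), 2*(y*z - w*x)],
                  [2*(x*z - w*y), 2*(y*z + w*x), 1 - 2*(x*x + y*y)]])
    s = np.sum(D * (S @ R.T)) / np.sum(S * S)    # optimal scale
    t = cd - s * (R @ cs)
    return s, R, t
```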

  5. Deformation data modeling through numerical models: an efficient method for tracking magma transport

    NASA Astrophysics Data System (ADS)

    Charco, M.; Gonzalez, P. J.; Galán del Sastre, P.

    2017-12-01

    Nowadays, multivariate data collection and robust physical models at volcano observatories are becoming crucial for providing effective volcano monitoring. Nevertheless, forecasting volcanic eruptions is notoriously difficult. Within this frame, one of the most promising approaches to evaluating volcano hazard is the use of surface ground deformation, and in the last decades many developments in the field of deformation modeling have been achieved. In particular, numerical modeling allows realistic medium features such as topography and crustal heterogeneities to be included, although solving the inverse problem for near-real-time interpretation is still very time consuming. Here, we present a method that can be used to efficiently estimate the location and evolution of magmatic sources based on real-time surface deformation data and finite element (FE) models. Generally, the search for the best-fitting magmatic (point) source(s) is conducted over an array of 3-D locations extending below a predefined volume region, and the Green functions for all array components have to be precomputed. We propose an FE model for the pre-computation of Green functions in a mechanically heterogeneous domain, which eventually will lead to a better description of the state of the volcanic area. The number of Green functions is reduced here to the number of observation points by using their reciprocity relationship. We present and test this methodology with an optimization method based on a genetic algorithm. Following synthetic and sensitivity tests to estimate the uncertainty of the model parameters, we apply the tool to magma tracking during the 2007 Kilauea volcano intrusion and eruption. We show how data inversion with numerical models can speed up source parameter estimation for a given volcano showing signs of unrest.

  6. Automated Analysis of Renewable Energy Datasets ('EE/RE Data Mining')

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bush, Brian; Elmore, Ryan; Getman, Dan

    This poster illustrates methods to substantially improve the understanding of renewable energy data sets and the depth and efficiency of their analysis through the application of statistical learning methods ('data mining') in the intelligent processing of these often large and messy information sources. The six examples apply methods for anomaly detection, data cleansing, and pattern mining to time-series data (measurements from metering points in buildings) and spatiotemporal data (renewable energy resource datasets).

  7. USSR and Eastern Europe Scientific Abstracts, Materials Science and Metallurgy, Number 45

    DTIC Science & Technology

    1977-05-11

    constants VQ and q. The values of the critical stress intensity factor produced by the authors by their indirect method are compared with ... and TEREKHOV, A. N., Moscow Institute of Steel and Alloys [Russian abstract provided by the source] [Text] The method of high-temperature ... their melting point. References 9; all Russian. USSR UDC 539 IMPROVING THE PRECISION OF THE ACOUSTIC METHOD OF STRESS DETERMINATION Kiev

  8. a Low-Cost and Portable System for 3d Reconstruction of Texture-Less Objects

    NASA Astrophysics Data System (ADS)

    Hosseininaveh, A.; Yazdan, R.; Karami, A.; Moradi, M.; Ghorbani, F.

    2015-12-01

    Optical methods for 3D modelling of objects can be classified into two categories: image-based and range-based methods. Structure from Motion is one of the image-based methods implemented in commercial software. In this paper, a low-cost and portable system for 3D modelling of texture-less objects is proposed. The system includes a rotating table designed and developed using a stepper motor and a very light rotation plate. The system also has eight laser light sources with very dense and strong beams, which provide a relatively appropriate pattern on texture-less objects. In this system, according to the step of the stepper motor, images are taken semi-automatically by a camera. The images can be used in Structure from Motion procedures implemented in the Agisoft software. To evaluate the performance of the system, two dark objects were used. Point clouds of these objects were first obtained by spraying a light powder on the objects and scanning them with a GOM laser scanner. The objects were then placed on the proposed turntable. Several convergent images were taken of each object while the laser light sources projected the pattern onto the objects. Afterwards, the images were imported into VisualSFM, a fully automatic software package, to generate an accurate and complete point cloud. Finally, the obtained point clouds were compared with the point clouds generated by the GOM laser scanner. The results showed the ability of the proposed system to produce complete 3D models of texture-less objects.

  9. Effect of distance-related heterogeneity on population size estimates from point counts

    USGS Publications Warehouse

    Efford, Murray G.; Dawson, Deanna K.

    2009-01-01

    Point counts are used widely to index bird populations. Variation in the proportion of birds counted is a known source of error, and for robust inference it has been advocated that counts be converted to estimates of absolute population size. We used simulation to assess nine methods for the conduct and analysis of point counts when the data included distance-related heterogeneity of individual detection probability. Distance from the observer is a ubiquitous source of heterogeneity, because nearby birds are more easily detected than distant ones. Several recent methods (dependent double-observer, time of first detection, time of detection, independent multiple-observer, and repeated counts) do not account for distance-related heterogeneity, at least in their simpler forms. We assessed bias in estimates of population size by simulating counts with fixed radius w over four time intervals (occasions). Detection probability per occasion was modeled as a half-normal function of distance with scale parameter sigma and intercept g(0) = 1.0. Bias varied with sigma/w; for values of sigma inferred from published studies, bias was often 50% or more for a 100-m fixed-radius count. More critically, the bias of adjusted counts sometimes varied more than that of unadjusted counts, and inference from adjusted counts would be less robust. The problem was not solved by using mixture models or including distance as a covariate. Conventional distance sampling performed well in simulations, but its assumptions are difficult to meet in the field. We conclude that no existing method allows effective estimation of population size from point counts.
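
    The detection model described above fits in a few lines of code. The sketch below reproduces only the simulation kernel (half-normal detection with g(0) = 1, four occasions, fixed radius w) to show how the proportion of birds ever detected, and hence the bias of an unadjusted count, varies with sigma/w; it does not implement the nine estimators the paper compares.

```python
import numpy as np

rng = np.random.default_rng(0)

def proportion_detected(N=200, w=100.0, sigma=50.0, occasions=4, reps=1000):
    """Mean proportion of birds detected at least once in a fixed-radius
    count with half-normal, distance-dependent detection and g(0) = 1."""
    detected = []
    for _ in range(reps):
        r = w*np.sqrt(rng.uniform(size=N))          # uniform on the disc
        p = np.exp(-r**2/(2.0*sigma**2))            # per-occasion prob.
        seen = rng.uniform(size=(occasions, N)) < p
        detected.append(seen.any(axis=0).mean())
    return float(np.mean(detected))

for sigma in (25.0, 50.0, 100.0):
    print(f"sigma/w = {sigma/100:.2f}: proportion detected "
          f"= {proportion_detected(sigma=sigma):.2f}")
```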

  10. A compiled catalog of rotation measures of radio point sources

    NASA Astrophysics Data System (ADS)

    Xu, Jun; Han, Jin-Lin

    2014-08-01

    We compiled a catalog of Faraday rotation measures (RMs) for 4553 extragalactic radio point sources published in the literature. These RMs were derived from multi-frequency polarization observations. The RM data are compared to those in the NRAO VLA Sky Survey (NVSS) RM catalog. We reveal a systematic uncertainty of about 10.0 ± 1.5 rad m-2 in the NVSS RM catalog. The Galactic foreground RM is calculated through a weighted averaging method by using the compiled RM catalog together with the NVSS RM catalog, with careful consideration of uncertainties in the RM data. The data from the catalog and the interface for the Galactic foreground RM calculations are publicly available on the webpage: http://zmtt.bao.ac.cn/RM/.
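
    The uncertainty-aware averaging step reduces to an inverse-variance weighted mean. The sketch below shows that calculation with made-up numbers; it is the standard estimator, not necessarily the authors' exact weighting scheme.

```python
import numpy as np

def weighted_rm(rm, err):
    """Inverse-variance weighted mean RM and its formal 1-sigma error."""
    w = 1.0/np.asarray(err, dtype=float)**2
    return np.sum(w*np.asarray(rm))/np.sum(w), np.sqrt(1.0/np.sum(w))

# hypothetical RMs (rad/m^2) and uncertainties for one line of sight
print(weighted_rm([12.0, 18.5, 9.7], [3.0, 6.0, 4.5]))
```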

  11. Development and Characterization of a Laser-Induced Acoustic Desorption Source.

    PubMed

    Huang, Zhipeng; Ossenbrüggen, Tim; Rubinsky, Igor; Schust, Matthias; Horke, Daniel A; Küpper, Jochen

    2018-03-20

    A laser-induced acoustic desorption source, developed for use at central facilities, such as free-electron lasers, is presented. It features prolonged measurement times and a fixed interaction point. A novel sample deposition method using aerosol spraying provides a uniform sample coverage and hence stable signal intensity. Utilizing strong-field ionization as a universal detection scheme, the produced molecular plume is characterized in terms of number density, spatial extent, fragmentation, temporal distribution, translational velocity, and translational temperature. The effect of desorption laser intensity on these plume properties is evaluated. While translational velocity is invariant for different desorption laser intensities, pointing to a nonthermal desorption mechanism, the translational temperature increases significantly and higher fragmentation is observed with increased desorption laser fluence.

  12. Optimization of Focusing by Strip and Pixel Arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burke, G J; White, D A; Thompson, C A

    Professor Kevin Webb and students at Purdue University have demonstrated the design of conducting strip and pixel arrays for focusing electromagnetic waves [1, 2]. Their key point was to design structures to focus waves in the near field using full wave modeling and optimization methods for design. Their designs included arrays of conducting strips optimized with a downhill search algorithm and arrays of conducting and dielectric pixels optimized with the iterative direct binary search method. They used a finite element code for modeling. This report documents our attempts to duplicate and verify their results. We have modeled 2D conducting strips and both conducting and dielectric pixel arrays with moment method and FDTD codes to compare with Webb's results. New designs for strip arrays were developed with optimization by the downhill simplex method with simulated annealing. Strip arrays were optimized to focus an incident plane wave at a point or at two separated points and to switch between focusing points with a change in frequency. We also tried putting a line current source at the focus point for the plane wave to see how it would work as a directive antenna. We have not tried optimizing the conducting or dielectric pixel arrays, but modeled the structures designed by Webb with the moment method and FDTD to compare with the Purdue results.

  13. Source Region Modeling of Explosions 2 and 3 from the Source Physics Experiment Using the Rayleigh Integral Method

    NASA Astrophysics Data System (ADS)

    Jones, K. R.; Arrowsmith, S.; Whitaker, R. W.

    2012-12-01

    The overall mission of the National Center for Nuclear Security (NCNS) Source Physics Experiment (SPE-N) at the Nevada National Security Site near Las Vegas, Nevada, is to improve upon and develop new physics-based models for underground nuclear explosions, using scaled underground chemical explosions as proxies. To this end, we use the Rayleigh integral as an approximation to the Helmholtz-Kirchhoff integral [Whitaker, 2007; Arrowsmith et al., 2011] to model infrasound generation in the far field. Infrasound generated by single-point explosive sources above ground can typically be treated as monopole point sources. While the source is relatively simple, the research needed to model above-ground point sources is complicated by path effects related to the propagation of the acoustic signal and is outside the scope of this study. In contrast, for explosions that occur below ground, including the SPE explosions, the source region is more complicated but the observation distances are much closer (< 5 km), thus greatly reducing the complication of path effects. In this case, elastic energy from the explosions radiates upward and spreads out, depending on depth, to a more distributed region at the surface. Because of this broad surface perturbation of the atmosphere, we cannot model the source as a simple monopole point source. Instead, we use the analogy of a piston mounted in a rigid, infinite baffle, where the surface area that moves as a result of the explosion is the piston and the surrounding region is the baffle. The area of the "piston" is determined by the depth and explosive yield of the event. In this study we look at data from SPE-N-2 and SPE-N-3. Both shots had an explosive yield of 1 ton at a depth of 45 m. We collected infrasound data with up to eight stations and 32 sensors within a 5 km radius of ground zero. To determine the area of the surface acceleration, we used data from twelve surface accelerometers installed within a 100 m radius about ground zero. With the accelerometer data defining the vertical motion of the surface, we use the Rayleigh Integral Method [Whitaker, 2007; Arrowsmith et al., 2011] to generate a synthetic infrasound pulse to compare to the observed data. Because the phase across the "piston" is not necessarily uniform, constructive and destructive interference will change the shape of the acoustic pulse if observed directly above the source (on-axis) or perpendicular to the source (off-axis). Comparing the observed data to the synthetic data, we note that the overall structure of the pulse agrees well and that the differences can be attributed to a number of possibilities, including the sensors used, topography, meteorological conditions, etc. One other potential source of error between the observed and calculated data is that we use a flat, symmetric source region for the "piston," whereas in reality the source region is neither flat nor perfectly symmetric. A primary goal of this work is to better understand and model the relationships between surface area, depth, and yield of underground explosions.
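
    The piston-in-a-baffle analogy can be made concrete with a small quadrature of the Rayleigh integral. The sketch below computes the on-axis pressure of a uniformly vibrating circular piston and checks it against the closed-form on-axis solution; the piston radius, frequency, and velocity amplitude are illustrative, not SPE values, and the real calculation uses the measured, non-uniform surface accelerations.

```python
import numpy as np

def rayleigh_onaxis(z, a=50.0, f=1.0, c=340.0, v0=1.0, rho=1.2, n=400):
    """On-axis pressure amplitude of a rigid circular piston in an
    infinite baffle, by direct quadrature of the Rayleigh integral
    p = (i*omega*rho/2pi) * Integral[ v0 * exp(i*k*R)/R ] dS."""
    k = 2.0*np.pi*f/c
    r = (np.arange(n) + 0.5)*(a/n)          # annular rings on the piston
    dS = 2.0*np.pi*r*(a/n)
    R = np.sqrt(r**2 + z**2)
    p = 1j*2.0*np.pi*f*rho*v0/(2.0*np.pi)*np.sum(np.exp(1j*k*R)/R*dS)
    return abs(p)

def onaxis_exact(z, a=50.0, f=1.0, c=340.0, v0=1.0, rho=1.2):
    """Closed-form on-axis result used as a check."""
    k = 2.0*np.pi*f/c
    return abs(rho*c*v0*(np.exp(1j*k*z) - np.exp(1j*k*np.sqrt(z**2 + a**2))))

for z in (100.0, 500.0, 2000.0):
    print(f"z = {z:6.0f} m: quadrature {rayleigh_onaxis(z):.3f} Pa, "
          f"exact {onaxis_exact(z):.3f} Pa")
```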

  14. Interferometry with flexible point source array for measuring complex freeform surface and its design algorithm

    NASA Astrophysics Data System (ADS)

    Li, Jia; Shen, Hua; Zhu, Rihong; Gao, Jinming; Sun, Yue; Wang, Jinsong; Li, Bo

    2018-06-01

    The precision of the measurement of aspheric and freeform surfaces remains the primary factor restricting their manufacture and application. One effective means of measuring such surfaces involves using reference or probe beams with angle modulation, as in the tilted-wave interferometer (TWI). It is necessary to improve measurement efficiency by obtaining the optimum point source array for each test piece before TWI measurements. In order to form a point source array based on the gradients of the different surfaces under test, we established a mathematical model describing the relationship between the point source array and the test surface. However, the optimal point sources are irregularly distributed. To achieve a flexible point source array matched to the gradient of the test surface, a novel interference setup using a fiber array is proposed, in which every point source can be switched on and off independently. Simulations and actual measurement examples for two different surfaces are given in this paper to verify the mathematical model. Finally, we performed an experiment testing an off-axis ellipsoidal surface that confirmed the validity of the proposed interference system.

  15. Evaluating Google, Twitter, and Wikipedia as Tools for Influenza Surveillance Using Bayesian Change Point Analysis: A Comparative Analysis.

    PubMed

    Sharpe, J Danielle; Hopkins, Richard S; Cook, Robert L; Striley, Catherine W

    2016-10-20

    Traditional influenza surveillance relies on reports of influenza-like illness (ILI) syndrome from health care providers. It primarily captures individuals who seek medical care and misses those who do not. Recently, Web-based data sources have been studied for application to public health surveillance, as there is a growing number of people who search, post, and tweet about their illnesses before seeking medical care. Existing research has shown some promise of using data from Google, Twitter, and Wikipedia to complement traditional surveillance for ILI. However, past studies have evaluated these Web-based sources individually or in pairs without comparing all 3 of them, and it would be beneficial to know which of the Web-based sources performs best in order to be considered to complement traditional methods. The objective of this study is to comparatively analyze Google, Twitter, and Wikipedia by examining which best corresponds with Centers for Disease Control and Prevention (CDC) ILI data. It was hypothesized that Wikipedia would best correspond with CDC ILI data, as previous research found it to be least influenced by high media coverage in comparison with Google and Twitter. Publicly available, deidentified data were collected from the CDC, Google Flu Trends, HealthTweets, and Wikipedia for the 2012-2015 influenza seasons. Bayesian change point analysis was used to detect seasonal changes, or change points, in each of the data sources. Change points in Google, Twitter, and Wikipedia that occurred during the exact week, 1 preceding week, or 1 week after the CDC's change points were compared with the CDC data as the gold standard. All analyses were conducted using the R package "bcp" version 4.0.0 in RStudio version 0.99.484 (RStudio Inc). In addition, sensitivity and positive predictive values (PPV) were calculated for Google, Twitter, and Wikipedia. During the 2012-2015 influenza seasons, a high sensitivity of 92% was found for Google, whereas the PPV for Google was 85%. A low sensitivity of 50% was calculated for Twitter; a low PPV of 43% was found for Twitter also. Wikipedia had the lowest sensitivity of 33% and lowest PPV of 40%. Of the 3 Web-based sources, Google had the best combination of sensitivity and PPV in detecting Bayesian change points in influenza-related data streams. Findings demonstrated that change points in Google, Twitter, and Wikipedia data occasionally aligned well with change points captured in CDC ILI data, yet these sources did not detect all changes in CDC data and should be further studied and developed.
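
    The validation logic, matching detected change points against CDC change points within a one-week tolerance and scoring sensitivity and PPV, is easy to sketch. The week indices below are invented for illustration.

```python
def sensitivity_ppv(cdc_weeks, web_weeks, tol=1):
    """Compare change points (week indices) in a web data stream with
    CDC change points; a web change point within +/- tol weeks of a
    CDC change point counts as a true positive."""
    tp = sum(any(abs(w - c) <= tol for c in cdc_weeks) for w in web_weeks)
    fp = len(web_weeks) - tp
    fn = sum(all(abs(w - c) > tol for w in web_weeks) for c in cdc_weeks)
    sensitivity = (len(cdc_weeks) - fn)/len(cdc_weeks)
    ppv = tp/(tp + fp) if web_weeks else float("nan")
    return sensitivity, ppv

# hypothetical change-point weeks for one influenza season
print(sensitivity_ppv(cdc_weeks=[5, 14, 30, 42], web_weeks=[5, 15, 33, 41]))
```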

  16. Estimating discharge measurement uncertainty using the interpolated variance estimator

    USGS Publications Warehouse

    Cohn, T.; Kiang, J.; Mason, R.

    2012-01-01

    Methods for quantifying the uncertainty in discharge measurements typically identify various sources of uncertainty and then estimate the uncertainty from each of these sources by applying the results of empirical or laboratory studies. If actual measurement conditions are not consistent with those encountered in the empirical or laboratory studies, these methods may give poor estimates of discharge uncertainty. This paper presents an alternative method for estimating discharge measurement uncertainty that uses statistical techniques and at-site observations. This Interpolated Variance Estimator (IVE) estimates uncertainty based on the data collected during the streamflow measurement and therefore reflects the conditions encountered at the site. The IVE has the additional advantage of capturing all sources of random uncertainty in the velocity and depth measurements. It can be applied to velocity-area discharge measurements that use a velocity meter to measure point velocities at multiple vertical sections in a channel cross section.
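
    To convey the flavor of the IVE, here is an illustrative sketch, not the published estimator: the random error of the point measurements is inferred from deviations between each vertical's unit discharge and the value interpolated from its neighbours, then propagated to the total discharge. All numbers are hypothetical, and on real measurements the residuals would be dominated by random error rather than by the smooth profile shape used here.

```python
import numpy as np

def ive_relative_uncertainty(q):
    """Estimate per-point random error from residuals against values
    interpolated from the two neighbouring verticals, then propagate
    to the total discharge (illustrative sketch only)."""
    q = np.asarray(q, dtype=float)
    resid = q[1:-1] - 0.5*(q[:-2] + q[2:])   # interpolation residuals
    # Var(resid) = 1.5*sigma^2 for independent errors and a locally
    # linear true profile, hence the 2/3 correction factor
    var_point = np.var(resid, ddof=1)*2.0/3.0
    return np.sqrt(var_point*q.size)/q.sum()

# hypothetical unit discharges (m^2/s) at 12 verticals
q = [0.8, 1.1, 1.5, 1.9, 2.2, 2.4, 2.3, 2.0, 1.6, 1.2, 0.9, 0.6]
print(f"relative discharge uncertainty ~ {ive_relative_uncertainty(q):.2%}")
```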

  17. Statistical measurement of the gamma-ray source-count distribution as a function of energy

    DOE PAGES

    Zechlin, Hannes-S.; Cuoco, Alessandro; Donato, Fiorenza; ...

    2016-07-29

    Statistical properties of photon count maps have recently been proven as a new tool to study the composition of the gamma-ray sky with high precision. Here, we employ the 1-point probability distribution function of six years of Fermi-LAT data to measure the source-count distribution dN/dS and the diffuse components of the high-latitude gamma-ray sky as a function of energy. To that aim, we analyze the gamma-ray emission in five adjacent energy bands between 1 and 171 GeV. It is demonstrated that the source-count distribution as a function of flux is compatible with a broken power law up to energies of ~50 GeV. Furthermore, the index below the break is between 1.95 and 2.0. For higher energies, a simple power law fits the data, with an index of 2.2 (+0.7/-0.3) in the energy band between 50 and 171 GeV. Upper limits on further possible breaks as well as the angular power of unresolved sources are derived. We find that point-source populations probed by this method can explain 83 (+7/-13)% (81 (+52/-19)%) of the extragalactic gamma-ray background between 1.04 and 1.99 GeV (50 and 171 GeV). Our method has excellent capabilities for constraining the gamma-ray luminosity function and the spectra of unresolved blazars.
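
    Statements such as "sources explain X% of the background" follow from integrating S*dN/dS over the probed flux range. The sketch below evaluates that integral for a broken power law; the break flux, indices, and thresholds are arbitrary placeholders, not the fitted values.

```python
import numpy as np
from scipy.integrate import quad

SB = 1e-8            # break flux (arbitrary units)

def dnds(s, a=1.0, n_bright=2.5, n_faint=1.95):
    """Broken power law for the differential source counts dN/dS."""
    n = n_bright if s >= SB else n_faint
    return a*(s/SB)**(-n)

def flux_between(smin, smax):
    """Total flux contributed by sources with smin < S < smax."""
    return quad(lambda s: s*dnds(s), smin, smax, points=[SB], limit=200)[0]

# fraction of the total source flux still unresolved below a threshold
s_thr = 3e-8
print(flux_between(1e-12, s_thr)/flux_between(1e-12, 1e-6))
```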

  18. Statistical measurement of the gamma-ray source-count distribution as a function of energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zechlin, Hannes-S.; Cuoco, Alessandro; Donato, Fiorenza

    Statistical properties of photon count maps have recently been proven as a new tool to study the composition of the gamma-ray sky with high precision. Here, we employ the 1-point probability distribution function of six years of Fermi-LAT data to measure the source-count distribution dN/dS and the diffuse components of the high-latitude gamma-ray sky as a function of energy. To that aim, we analyze the gamma-ray emission in five adjacent energy bands between 1 and 171 GeV. It is demonstrated that the source-count distribution as a function of flux is compatible with a broken power law up to energies of ~50 GeV. Furthermore, the index below the break is between 1.95 and 2.0. For higher energies, a simple power law fits the data, with an index of 2.2 (+0.7/-0.3) in the energy band between 50 and 171 GeV. Upper limits on further possible breaks as well as the angular power of unresolved sources are derived. We find that point-source populations probed by this method can explain 83 (+7/-13)% (81 (+52/-19)%) of the extragalactic gamma-ray background between 1.04 and 1.99 GeV (50 and 171 GeV). Our method has excellent capabilities for constraining the gamma-ray luminosity function and the spectra of unresolved blazars.

  19. Sources and methods to reconstruct past masting patterns in European oak species.

    PubMed

    Szabó, Péter

    2012-01-01

    The irregular occurrence of good seed years in forest trees is known in many parts of the world. Mast year frequency in the past few decades can be examined through field observational studies; however, masting patterns in the more distant past are equally important for gaining a better understanding of long-term forest ecology. Past masting patterns can be studied through the examination of historical written sources. These pose considerable challenges, because the data in them were usually not recorded with the aim of providing information about masting. Several studies have examined masting in the deeper past; however, their authors hardly ever considered the methodological implications of using and combining various source types. This paper provides a critical overview of the types of archival written sources that are available for the reconstruction of past masting patterns for European oak species and proposes a method to unify and evaluate different types of data. Available sources cover approximately eight centuries and fall into two basic categories: direct observations of the amount of acorns and references to sums of money received in exchange for access to acorns. Because archival sources differ greatly in origin and quality, the optimal solution for creating databases of past masting data is a three-point scale: zero mast, moderate mast, good mast. When larger amounts of data are available in a unified three-point-scale database, they can be used to test hypotheses about past masting frequencies, the driving forces of masting, or regional masting patterns.

  20. Non-contact local temperature measurement inside an object using an infrared point detector

    NASA Astrophysics Data System (ADS)

    Hisaka, Masaki

    2017-04-01

    Local temperature measurement in deep areas of objects is an important technique in biomedical measurement. We have investigated a non-contact method for measuring temperature inside an object using a point detector for infrared (IR) light. An IR point detector with a pinhole was constructed, and the radiant IR light emitted from the local interior of the object is detected only at the position of the pinhole, which is placed in an imaging relation with the measurement point. We measured the thermal structure of the filament inside a miniature bulb using the IR point detector, and investigated the temperature dependence at approximately human body temperature using a glass plate positioned in front of the heat source.

  1. Using a topographic index to distribute variable source area runoff predicted with the SCS curve-number equation

    NASA Astrophysics Data System (ADS)

    Lyon, Steve W.; Walter, M. Todd; Gérard-Marchant, Pierre; Steenhuis, Tammo S.

    2004-10-01

    Because the traditional Soil Conservation Service curve-number (SCS-CN) approach continues to be used ubiquitously in water quality models, new application methods are needed that are consistent with variable source area (VSA) hydrological processes in the landscape. We developed and tested a distributed approach for applying the traditional SCS-CN equation to watersheds where VSA hydrology is a dominant process. Predicting the location of source areas is important for watershed planning because restricting potentially polluting activities from runoff source areas is fundamental to controlling non-point-source pollution. The method presented here used the traditional SCS-CN approach to predict runoff volume and spatial extent of saturated areas and a topographic index, like that used in TOPMODEL, to distribute runoff source areas through watersheds. The resulting distributed CN-VSA method was applied to two subwatersheds of the Delaware basin in the Catskill Mountains region of New York State and one watershed in south-eastern Australia to produce runoff-probability maps. Observed saturated area locations in the watersheds agreed with the distributed CN-VSA method. Results showed good agreement with those obtained from the previously validated soil moisture routing (SMR) model. When compared with the traditional SCS-CN method, the distributed CN-VSA method predicted a similar total volume of runoff, but vastly different locations of runoff generation. Thus, the distributed CN-VSA approach provides a physically based method that is simple enough to be incorporated into water quality models, and other tools that currently use the traditional SCS-CN method, while still adhering to the principles of VSA hydrology.
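
    The core of the distributed CN-VSA method can be sketched as follows: the traditional CN equation supplies the runoff volume, its derivative with respect to effective rainfall is read as the saturated (runoff-generating) fraction of the watershed, and that fraction is mapped onto the wettest cells by topographic index. This is our reading of the approach; the curve number, storm depth, and topographic-index values are synthetic.

```python
import numpy as np

def scs_runoff(P, CN):
    """Traditional SCS-CN storm runoff depth (mm) for rainfall P (mm)."""
    S = 25400.0/CN - 254.0          # potential retention (mm)
    Pe = max(P - 0.2*S, 0.0)        # effective rainfall after Ia = 0.2 S
    return Pe**2/(Pe + S), Pe, S

def saturated_fraction(Pe, S):
    """VSA reading of the CN equation: dQ/dPe is the fraction of the
    watershed actively generating saturation-excess runoff."""
    return 1.0 - (S/(Pe + S))**2 if Pe > 0 else 0.0

P, CN = 50.0, 75.0                  # storm depth and curve number (made up)
Q, Pe, S = scs_runoff(P, CN)
Af = saturated_fraction(Pe, S)

# distribute the runoff to the wettest fraction Af of the watershed,
# ranked by a TOPMODEL-style topographic index (synthetic values here)
twi = np.random.default_rng(0).normal(7.0, 2.0, 10000)
source_area = twi >= np.quantile(twi, 1.0 - Af)
print(f"Q = {Q:.1f} mm, runoff-generating fraction = {Af:.1%}")
```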

  2. GARLIC, A SHIELDING PROGRAM FOR GAMMA RADIATION FROM LINE- AND CYLINDER- SOURCES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roos, M.

    1959-06-01

    GARLIC is a program for computing the gamma-ray flux or dose rate at a shielded isotropic point detector due to a line source or the line equivalent of a cylindrical source. The source strength distribution along the line must be either uniform or an arbitrary part of the positive half-cycle of a cosine function. The line source can be oriented arbitrarily with respect to the main shield and the detector, except that the detector must not be located on the line source or on its extension. The main shield is a homogeneous plane slab in which scattered radiation is accounted for by multiplying each point element of the line source by a point-source buildup factor inside the integral over the point elements. Between the main shield and the line source, additional shields can be introduced, which are either plane slabs parallel to the main shield or cylindrical rings coaxial with the line source. Scattered radiation in the additional shields can only be accounted for by constant buildup factors outside the integral. GARLIC-xyz is an extended version particularly suited for the frequently met problem of shielding a room containing a large number of line sources in different positions. The program computes the angles and linear dimensions of a problem for GARLIC when the positions of the detector point and the end points of the line source are given as points in an arbitrary rectangular coordinate system. As an example, the isodose curves in water are presented for a monoenergetic cosine-distributed line source at several source energies and for an operating fuel element of the Swedish reactor R3. (auth)
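
    The computation GARLIC automates has the classic point-kernel structure: integrate attenuated point-source kernels, each weighted by a buildup factor, along the line. The sketch below shows a uniform line source behind a plane slab; the linear buildup factor and all numbers are assumptions standing in for GARLIC's tabulated data.

```python
import numpy as np
from scipy.integrate import quad

def line_source_flux(SL, L, mfp_normal, xd, yd):
    """Flux at a point detector from a uniform line source on the x-axis
    (0..L), behind a plane slab parallel to the line. The slant path
    through the slab grows as 1/cos(theta); a linear buildup factor
    stands in for tabulated point-source buildup data."""
    def integrand(x):
        r = np.hypot(xd - x, yd)
        b = mfp_normal*r/abs(yd)     # mean free paths along this ray
        B = 1.0 + 0.8*b              # assumed linear buildup factor
        return SL*B*np.exp(-b)/(4.0*np.pi*r**2)
    return quad(integrand, 0.0, L)[0]

# 2 m line source (photons/m/s), slab of 2 mfp, detector 1 m behind it
print(line_source_flux(SL=1e6, L=2.0, mfp_normal=2.0, xd=1.0, yd=-1.0))
```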

  3. Time-reversal transcranial ultrasound beam focusing using a k-space method

    PubMed Central

    Jing, Yun; Meral, F. Can; Clement, Greg T.

    2012-01-01

    This paper proposes the use of a k-space method to obtain the correction for transcranial ultrasound beam focusing. Mirroring past approaches, a synthetic point source at the focal point is numerically excited and propagated through the skull, using acoustic properties acquired from registered computed tomography of the skull being studied. The received data outside the skull contain the correction information and can be phase conjugated (time reversed) and then physically generated to achieve tight focusing inside the skull, under the assumption of quasi-plane transmission in which shear waves are not present or their contribution can be neglected. Compared with the conventional finite-difference time-domain method for wave propagation simulation, it is shown that the k-space method is significantly more accurate even for a relatively coarse spatial resolution, leading to a dramatically reduced computation time. Both numerical simulations and experiments conducted on an ex vivo human skull demonstrate that precise focusing can be realized using the k-space method with a spatial resolution as low as only 2.56 grid points per wavelength, thus allowing treatment-planning computation on the order of minutes. PMID:22290477
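
    To show why the k-space operator tolerates coarse grids, here is the homogeneous-medium special case in 1-D, where the update is exact for any time step; the published solver of course handles the heterogeneous skull maps. Grid, sound speed, and pulse parameters are illustrative.

```python
import numpy as np

# 1-D k-space update, exact for a homogeneous medium:
#   p(k, t+dt) = 2*cos(c*|k|*dt)*p(k, t) - p(k, t-dt)
N, dx = 2048, 1e-4            # 0.2048 m domain, 0.1 mm spacing
c, dt = 1500.0, 2e-8          # water-like sound speed
x = np.arange(N)*dx
k = 2.0*np.pi*np.fft.fftfreq(N, dx)
prop = 2.0*np.cos(c*np.abs(k)*dt)

# synthetic point source: a narrow Gaussian pulse at the intended focus
p_prev = np.exp(-((x - 0.05)/(4*dx))**2)
p = p_prev.copy()

for _ in range(1500):
    p_next = np.real(np.fft.ifft(prop*np.fft.fft(p))) - p_prev
    p_prev, p = p, p_next

# traces recorded outside the "skull" would now be time reversed (phase
# conjugated) and re-emitted to refocus at x = 0.05 m
print(f"pulse fronts have travelled {c*1500*dt:.3f} m in each direction")
```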

  4. Source apportionment of nitrogen and phosphorus from non-point source pollution in Nansi Lake Basin, China.

    PubMed

    Zhang, Bao-Lei; Cui, Bo-Hao; Zhang, Shu-Min; Wu, Quan-Yuan; Yao, Lei

    2018-05-03

    Nitrogen (N) and phosphorus (P) from non-point source (NPS) pollution in the Nansi Lake Basin greatly influence the water quality of Nansi Lake, which is a determining factor for the success of the East Route of the South-North Water Transfer Project in China. This research improved the Johnes export coefficient model (ECM) by developing a method to determine the export coefficients of different land use types based on hydrological and water quality data. Taking NPS total nitrogen (TN) and total phosphorus (TP) as the study objects, this study estimated the contributions of different pollution sources and analyzed their spatial distributions based on the improved ECM. The results underlined that the method for obtaining export coefficients of land use types from hydrology and water quality data is feasible and accurate, and is suitable for the study of NPS pollution in large-scale basins. The average contributions to NPS TN from land use, rural livestock breeding, and rural life are 33.6%, 25.9%, and 40.5%, respectively; for NPS TP they are 31.6%, 43.7%, and 24.7%. In particular, dry land was the main land use source for both NPS TN and TP pollution, contributing 81.3% and 81.8%, respectively. The counties of Zaozhuang, Tengzhou, Caoxian, Yuncheng, and Shanxian had the highest contribution rates, and the counties of Dingtao, Juancheng, and Caoxian had the highest load intensities for both NPS TN and TP pollution. The results of this study improve understanding of pollution source contributions and enable researchers and planners to focus on the most important sources and regions of NPS pollution.
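
    The ECM keeps a simple additive structure: each source's extent multiplied by its export coefficient, summed over sources. The sketch below shows that bookkeeping; every coefficient and area is hypothetical, not a calibrated Nansi Lake value.

```python
import numpy as np

def ecm_load(areas_ha, e_land, livestock=0.0, e_livestock=0.0,
             rural_pop=0.0, e_person=0.0):
    """Johnes-type export coefficient model: the NPS load (kg/yr) is the
    sum over sources of extent times export coefficient."""
    return (float(np.dot(areas_ha, e_land))
            + livestock*e_livestock + rural_pop*e_person)

# hypothetical sub-basin: dry land, paddy, forest, grassland (ha)
tn = ecm_load(areas_ha=[12000, 3000, 8000, 2000],
              e_land=[29.0, 9.0, 2.4, 4.0],        # kg TN/(ha*yr)
              livestock=50000, e_livestock=0.6,    # kg TN/(head*yr)
              rural_pop=80000, e_person=0.8)       # kg TN/(person*yr)
print(f"NPS TN load: {tn/1000:.1f} t/yr")
```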

  5. Gaia Data Release 1. Pre-processing and source list creation

    NASA Astrophysics Data System (ADS)

    Fabricius, C.; Bastian, U.; Portell, J.; Castañeda, J.; Davidson, M.; Hambly, N. C.; Clotet, M.; Biermann, M.; Mora, A.; Busonero, D.; Riva, A.; Brown, A. G. A.; Smart, R.; Lammers, U.; Torra, J.; Drimmel, R.; Gracia, G.; Löffler, W.; Spagna, A.; Lindegren, L.; Klioner, S.; Andrei, A.; Bach, N.; Bramante, L.; Brüsemeister, T.; Busso, G.; Carrasco, J. M.; Gai, M.; Garralda, N.; González-Vidal, J. J.; Guerra, R.; Hauser, M.; Jordan, S.; Jordi, C.; Lenhardt, H.; Mignard, F.; Messineo, R.; Mulone, A.; Serraller, I.; Stampa, U.; Tanga, P.; van Elteren, A.; van Reeven, W.; Voss, H.; Abbas, U.; Allasia, W.; Altmann, M.; Anton, S.; Barache, C.; Becciani, U.; Berthier, J.; Bianchi, L.; Bombrun, A.; Bouquillon, S.; Bourda, G.; Bucciarelli, B.; Butkevich, A.; Buzzi, R.; Cancelliere, R.; Carlucci, T.; Charlot, P.; Collins, R.; Comoretto, G.; Cross, N.; Crosta, M.; de Felice, F.; Fienga, A.; Figueras, F.; Fraile, E.; Geyer, R.; Hernandez, J.; Hobbs, D.; Hofmann, W.; Liao, S.; Licata, E.; Martino, M.; McMillan, P. J.; Michalik, D.; Morbidelli, R.; Parsons, P.; Pecoraro, M.; Ramos-Lerate, M.; Sarasso, M.; Siddiqui, H.; Steele, I.; Steidelmüller, H.; Taris, F.; Vecchiato, A.; Abreu, A.; Anglada, E.; Boudreault, S.; Cropper, M.; Holl, B.; Cheek, N.; Crowley, C.; Fleitas, J. M.; Hutton, A.; Osinde, J.; Rowell, N.; Salguero, E.; Utrilla, E.; Blagorodnova, N.; Soffel, M.; Osorio, J.; Vicente, D.; Cambras, J.; Bernstein, H.-H.

    2016-11-01

    Context. The first data release from the Gaia mission contains accurate positions and magnitudes for more than a billion sources, and proper motions and parallaxes for the majority of the 2.5 million Hipparcos and Tycho-2 stars. Aims: We describe three essential elements of the initial data treatment leading to this catalogue: the image analysis, the construction of a source list, and the near real-time monitoring of the payload health. We also discuss some weak points that set limitations for the attainable precision at the present stage of the mission. Methods: Image parameters for point sources are derived from one-dimensional scans, using a maximum likelihood method, under the assumption of a line spread function constant in time, and a complete modelling of bias and background. These conditions are, however, not completely fulfilled. The Gaia source list is built starting from a large ground-based catalogue, but even so a significant number of new entries have been added, and a large number have been removed. The autonomous onboard star image detection will pick up many spurious images, especially around bright sources, and such unwanted detections must be identified. Another key step of the source list creation consists in arranging the more than 10^10 individual detections in spatially isolated groups that can be analysed individually. Results: Complete software systems have been built for the Gaia initial data treatment, which manage approximately 50 million focal plane transits daily, giving transit times and fluxes for 500 million individual CCD images to the astrometric and photometric processing chains. The software also carries out a successful and detailed daily monitoring of Gaia's health.

  6. 40 CFR 412.37 - Additional measures.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... STANDARDS CONCENTRATED ANIMAL FEEDING OPERATIONS (CAFO) POINT SOURCE CATEGORY Dairy Cows and Cattle Other... application; (4) Test methods used to sample and analyze manure, litter, process waste water, and soil; (5) Results from manure, litter, process waste water, and soil sampling; (6) Explanation of the basis for...

  7. 40 CFR 412.37 - Additional measures.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... STANDARDS CONCENTRATED ANIMAL FEEDING OPERATIONS (CAFO) POINT SOURCE CATEGORY Dairy Cows and Cattle Other... application; (4) Test methods used to sample and analyze manure, litter, process waste water, and soil; (5) Results from manure, litter, process waste water, and soil sampling; (6) Explanation of the basis for...

  8. 40 CFR 412.37 - Additional measures.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... STANDARDS CONCENTRATED ANIMAL FEEDING OPERATIONS (CAFO) POINT SOURCE CATEGORY Dairy Cows and Cattle Other... application; (4) Test methods used to sample and analyze manure, litter, process waste water, and soil; (5) Results from manure, litter, process waste water, and soil sampling; (6) Explanation of the basis for...

  9. A stepwise, multi-objective, multi-variable parameter optimization method for the APEX model

    USDA-ARS?s Scientific Manuscript database

    Proper parameterization enables hydrological models to make reliable estimates of non-point source pollution for effective control measures. The automatic calibration of hydrologic models requires significant computational power limiting its application. The study objective was to develop and eval...

  10. Using Lunar Observations to Validate In-Flight Calibrations of Clouds and Earth Radiant Energy System Instruments

    NASA Technical Reports Server (NTRS)

    Daniels, Janet L.; Smith, G. Louis; Priestley, Kory J.; Thomas, Susan

    2014-01-01

    The validation of in-orbit instrument performance requires stability in both the instrument and the calibration source. This paper describes a method of validation using lunar observations taken near full moon by the Clouds and Earth Radiant Energy System (CERES) instruments. Unlike internal calibrations, the Moon offers an external source whose signal variance is predictable and non-degrading. From 2006 to the present, in-orbit observations have become standardized and compiled for Flight Models-1 and -2 aboard the Terra satellite, for Flight Models-3 and -4 aboard the Aqua satellite, and, beginning in 2012, for Flight Model-5 aboard Suomi-NPP. Instrument performance parameters that can be gleaned include detector gain, pointing accuracy, and validation of the static detector point response function. Lunar observations are used to examine the stability of all three detectors on each of these instruments from 2006 to the present. This validation method has yielded results showing trends per CERES data channel of 1.2% per decade or less.

  11. The Sedov Blast Wave as a Radial Piston Verification Test

    DOE PAGES

    Pederson, Clark; Brown, Bart; Morgan, Nathaniel

    2016-06-22

    The Sedov blast wave is of great utility as a verification problem for hydrodynamic methods. The typical implementation uses an energized cell of finite dimensions to represent the energy point source. We avoid this approximation by directly finding the effects of the energy source as a boundary condition (BC). Furthermore, the proposed method transforms the Sedov problem into an outward-moving radial piston problem with a time-varying velocity. A portion of the mesh adjacent to the origin is removed and the boundaries of this hole are forced with the velocities from the Sedov solution. This verification test is implemented on two types of meshes, and convergence is shown. Our results from the typical initial condition (IC) method and the new BC method are compared.
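
    The time-varying piston velocity follows directly from the Sedov similarity solution, as in this small sketch; the similarity constant xi0 depends on the gas gamma (about 1.15 for gamma = 5/3), and the energy and density here are placeholder values.

```python
import numpy as np

def sedov_radius(t, E=1.0, rho=1.0, xi0=1.152):
    """Self-similar shock radius R(t) = xi0*(E*t^2/rho)^(1/5);
    xi0 ~ 1.152 for an ideal gas with gamma = 5/3."""
    return xi0*(E*t**2/rho)**0.2

def piston_velocity(t, **kw):
    """Velocity imposed on the excised boundary: v = dR/dt = (2/5)*R/t."""
    return 0.4*sedov_radius(t, **kw)/t

for t in (0.2, 0.4, 0.8, 1.0):
    print(f"t = {t:3.1f}: R = {sedov_radius(t):.4f}, "
          f"v = {piston_velocity(t):.4f}")
```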

  12. Near-field noise prediction for aircraft in cruising flight: Methods manual. [laminar flow control noise effects analysis

    NASA Technical Reports Server (NTRS)

    Tibbetts, J. G.

    1979-01-01

    Methods for predicting noise at any point on an aircraft while the aircraft is in a cruise flight regime are presented. Developed for use in laminar flow control (LFC) noise effects analyses, they can be used in any case where aircraft generated noise needs to be evaluated at a location on an aircraft while under high altitude, high speed conditions. For each noise source applicable to the LFC problem, a noise computational procedure is given in algorithm format, suitable for computerization. Three categories of noise sources are covered: (1) propulsion system, (2) airframe, and (3) LFC suction system. In addition, procedures are given for noise modifications due to source soundproofing and the shielding effects of the aircraft structure wherever needed. Sample cases, for each of the individual noise source procedures, are provided to familiarize the user with typical input and computed data.

  13. The Chandra Source Catalog 2.0: Combining Data for Processing (or How I learned 17 different words for "group")

    NASA Astrophysics Data System (ADS)

    Hain, Roger; Allen, Christopher E.; Anderson, Craig S.; Budynkiewicz, Jamie A.; Burke, Douglas; Chen, Judy C.; Civano, Francesca Maria; D'Abrusco, Raffaele; Doe, Stephen M.; Evans, Ian N.; Evans, Janet D.; Fabbiano, Giuseppina; Gibbs, Danny G., II; Glotfelty, Kenny J.; Graessle, Dale E.; Grier, John D.; Hall, Diane M.; Harbo, Peter N.; Houck, John C.; Lauer, Jennifer L.; Laurino, Omar; Lee, Nicholas P.; Martínez-Galarza, Juan Rafael; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph; McLaughlin, Warren; Morgan, Douglas L.; Mossman, Amy E.; Nguyen, Dan T.; Nichols, Joy S.; Nowak, Michael A.; Paxson, Charles; Plummer, David A.; Primini, Francis Anthony; Rots, Arnold H.; Siemiginowska, Aneta; Sundheim, Beth A.; Tibbetts, Michael; Van Stone, David W.; Zografou, Panagoula

    2018-01-01

    The Second Chandra Source Catalog (CSC2.0) combines data at multiple stages to improve detection efficiency, enhance source region identification, and match observations of the same celestial source taken with significantly different point spread functions on Chandra's detectors. The need to group data for different reasons at different times in processing results in a hierarchy of groups to which individual sources belong. Source data are initially identified as belonging to each Chandra observation ID and number (an "obsid"). Data from obsids whose pointings are within sixty arcseconds of each other are reprojected to the same aspect reference coordinates and grouped into stacks. Detection is performed on all data in the same stack, and individual sources are identified. Finer source position and region data are determined by further processing sources whose photons may be commingled, grouping such sources into bundles. Individual stacks that overlap to any extent are grouped into ensembles, and all stacks in the same ensemble are later processed together to identify master sources and determine their properties. We discuss the basis for the various methods of combining data for processing and precisely define how the groups are determined. We also investigate some of the issues related to grouping data, discuss what options exist, and describe how groups have evolved from prior releases. This work has been supported by NASA under contract NAS 8-03060 to the Smithsonian Astrophysical Observatory for operation of the Chandra X-ray Center.
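
    Transitive grouping of this kind (if A overlaps B and B overlaps C, all three belong together) is naturally expressed with a union-find structure. The sketch below applies it to the sixty-arcsecond obsid-to-stack criterion quoted in the abstract, using a flat-sky separation; this is our illustration, not the CSC2.0 pipeline code.

```python
import numpy as np

def group_pointings(pointings_deg, radius_arcsec=60.0):
    """Union-find grouping: pointings within the radius of one another
    (directly or through a chain) end up in the same group."""
    pts = np.radians(np.asarray(pointings_deg))
    parent = list(range(len(pts)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    limit = np.radians(radius_arcsec/3600.0)
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            dra = (pts[j, 0] - pts[i, 0])*np.cos(0.5*(pts[i, 1] + pts[j, 1]))
            if np.hypot(dra, pts[j, 1] - pts[i, 1]) < limit:
                union(i, j)

    groups = {}
    for i in range(len(pts)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# three pointings (RA, Dec in deg): the first two are ~30" apart
print(group_pointings([(150.000, 2.000), (150.008, 2.003), (151.0, 2.5)]))
```

    The same logic applies one level up, where overlapping stacks are merged into ensembles.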

  14. A fast and fully automatic registration approach based on point features for multi-source remote-sensing images

    NASA Astrophysics Data System (ADS)

    Yu, Le; Zhang, Dengrong; Holden, Eun-Jung

    2008-07-01

    Automatic registration of multi-source remote-sensing images is a difficult task, as it must deal with the varying illuminations and resolutions of the images, different perspectives, and local deformations within the images. This paper proposes a fully automatic and fast non-rigid image registration technique that addresses those issues. The proposed technique performs a pre-registration process that coarsely aligns the input image to the reference image by automatically detecting their matching points using the scale invariant feature transform (SIFT) method and an affine transformation model. Once the coarse registration is completed, it performs a fine-scale registration process based on a piecewise linear transformation technique using feature points that are detected by the Harris corner detector. The registration process first finds, in succession, tie-point pairs between the input and the reference image by detecting Harris corners and applying a cross-matching strategy based on a wavelet pyramid for fast searching. Tie-point pairs with large errors are pruned by an error-checking step. The input image is then rectified by using triangulated irregular networks (TINs) to deal with irregular local deformations caused by terrain fluctuation. For each triangular facet of the TIN, an affine transformation is estimated and applied for rectification. Experiments with QuickBird, SPOT5, SPOT4, and TM remote-sensing images of the Hangzhou area in China demonstrate the efficiency and accuracy of the proposed technique for multi-source remote-sensing image registration.
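
    The coarse stage, SIFT matching followed by a robust affine fit, can be sketched with OpenCV as below. RANSAC and the ratio test are our choices for illustration; the paper's exact matching strategy may differ, and SIFT requires OpenCV 4.4+ (or the contrib build).

```python
import cv2
import numpy as np

def coarse_register(input_img, reference_img):
    """Coarse pre-registration: SIFT keypoints, ratio-test matching,
    and a robust affine model. Inputs are single-band uint8 images."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(input_img, None)
    k2, d2 = sift.detectAndCompute(reference_img, None)

    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
    good = [m for m, n in matches if m.distance < 0.75*n.distance]

    src = np.float32([k1[m.queryIdx].pt for m in good])
    dst = np.float32([k2[m.trainIdx].pt for m in good])
    affine, _ = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)

    h, w = reference_img.shape[:2]
    return cv2.warpAffine(input_img, affine, (w, h))
```

    The fine-scale stage would then detect Harris corners on the coarsely aligned image, prune bad tie points, and apply per-facet affine transforms over the TIN.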

  15. Stable Extraction of Threshold Voltage Using Transconductance Change Method for CMOS Modeling, Simulation and Characterization

    NASA Astrophysics Data System (ADS)

    Choi, Woo Young; Woo, Dong-Soo; Choi, Byung Yong; Lee, Jong Duk; Park, Byung-Gook

    2004-04-01

    We propose a stable extraction algorithm for the threshold voltage using the transconductance change method, based on optimizing the node interval. With this algorithm, noise-free gm2 (= dgm/dVGS) profiles can be extracted to within one percent error, which leads to a more physically meaningful threshold voltage calculation by the transconductance change method. The extracted threshold voltage corresponds to the gate-to-source voltage at which the surface potential is within kT/q of φs = 2φf + VSB. Our algorithm makes the transconductance change method more practical by overcoming the noise problem. This threshold voltage extraction algorithm accurately captures the threshold roll-off behavior of nanoscale metal-oxide-semiconductor field-effect transistors (MOSFETs) and makes it possible to calculate the surface potential φs at any other point on the drain-to-source current (IDS) versus gate-to-source voltage (VGS) curve. It provides a useful analysis tool in the field of device modeling, simulation, and characterization.
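
    The method itself is compact: locate the peak of the second derivative of the I-V curve. The sketch below widens the differentiation node interval by subsampling, in the spirit of the node-interval optimization described above; the synthetic device curve and noise level are entirely made up.

```python
import numpy as np

def vth_gm2(vgs, ids, step=10):
    """Vth from the transconductance change method: the gate voltage at
    the peak of gm2 = d(gm)/dVGS. Differentiating on a widened node
    interval (subsampling) suppresses noise amplification."""
    v, i = vgs[::step], ids[::step]
    gm = np.gradient(i, v)
    gm2 = np.gradient(gm, v)
    return v[np.argmax(gm2)]

# synthetic I-V: smooth turn-on near 0.4 V with mobility degradation,
# plus measurement noise (all device parameters are made up)
vgs = np.linspace(0.0, 1.2, 601)
vov = 0.05*np.log1p(np.exp((vgs - 0.4)/0.05))    # smoothed overdrive
ids = 1e-4*vov**2/(1.0 + 2.0*vov)
ids = ids + np.random.default_rng(0).normal(0.0, 5e-9, vgs.size)
print(f"gm2 peak (extracted Vth) at {vth_gm2(vgs, ids):.2f} V")
```

    For this synthetic curve the gm2 peak sits slightly above the nominal 0.4 V transition, as expected for the transconductance-change definition of threshold.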

  16. Measurement system of the refractive power of spherical and sphero-cylindrical lenses with the magnification ellipse fitting method.

    PubMed

    Ko, Wooseok; Kim, Soohyun

    2009-11-01

    This paper proposes a new system for measuring the refractive power of spherical and sphero-cylindrical lenses using a six-point light source, composed of a light-emitting diode and a six-hole pattern aperture, together with a magnification ellipse-fitting method. The six light sources are imaged into a circular or elliptical pattern depending on the lens refractive power and the meridian rotation angle. The magnification ellipse-fitting method calculates the lens refractive power from the ellipse equation, using magnifications defined as the ratios between the initial diagonal lengths and the measured diagonal lengths of the conjugated light sources as changed by the target lens. The refractive powers of spherical and sphero-cylindrical lenses certified by the Korea Research Institute of Standards and Science were measured to verify the measurement performance. The proposed method is estimated to have a repeatability of +/-0.01 D and an error below 1%.

  17. Theoretical study of the accuracy of the pulse method, frontal analysis, and frontal analysis by characteristic points for the determination of single component adsorption isotherms.

    PubMed

    Andrzejewska, Anna; Kaczmarski, Krzysztof; Guiochon, Georges

    2009-02-13

    The adsorption isotherms of selected compounds are our main source of information on the mechanisms of adsorption processes. Thus, the selection of the methods used to determine adsorption isotherm data and to evaluate the errors made is critical. Three chromatographic methods were evaluated, frontal analysis (FA), frontal analysis by characteristic point (FACP), and the pulse or perturbation method (PM), and their accuracies were compared. Using the equilibrium-dispersive (ED) model of chromatography, breakthrough curves of single components were generated corresponding to three different adsorption isotherm models: the Langmuir, the bi-Langmuir, and the Moreau isotherms. For each breakthrough curve, the best conventional procedures of each method (FA, FACP, PM) were used to calculate the corresponding data point, using typical values of the parameters of each isotherm model, for four different values of the column efficiency (N=500, 1000, 2000, and 10,000). Then, the data points were fitted to each isotherm model and the corresponding isotherm parameters were compared to those of the initial isotherm model. When isotherm data are derived with a chromatographic method, they may suffer from two types of errors: (1) the errors made in deriving the experimental data points from the chromatographic records; (2) the errors made in selecting an incorrect isotherm model and fitting to it the experimental data. Both errors decrease significantly with increasing column efficiency with FA and FACP, but not with PM.

  18. An in vitro verification of strength estimation for moving an 125I source during implantation in brachytherapy.

    PubMed

    Tanaka, Kenichi; Kajimoto, Tsuyoshi; Hayashi, Takahiro; Asanuma, Osamu; Hori, Masakazu; Kamo, Ken-Ichi; Sumida, Iori; Takahashi, Yutaka; Tateoka, Kunihiko; Bengua, Gerard; Sakata, Koh-Ichi; Endo, Satoru

    2018-04-11

    This study aims to demonstrate the feasibility of a method for estimating the strength of a moving brachytherapy source during implantation in a patient. Experiments were performed under the same conditions as in the actual treatment, except that the source was not implanted into a patient. The brachytherapy source selected for this study was 125I with an air kerma strength of 0.332 U (μGy m2 h-1), and the detector used was a plastic scintillator with dimensions of 10 cm × 5 cm × 5 cm. A calibration factor to convert the counting rate of the detector to the source strength was measured, and then the accuracy of the proposed method was investigated for a manually driven source. The accuracy was found to be under 10% when the shielding effect of additional needles for implantation at other positions was corrected, and about 30% when the shielding was not corrected. Even without shielding correction, the proposed method can detect a dead or dropped source, implantation of a source with the wrong strength, and a mistake in the number of sources implanted. Furthermore, when the correction was applied, the achieved accuracy approached the 7% level required to identify an Oncoseed 6711 125I seed of unintended strength among the commercially supplied values of 0.392, 0.462, and 0.533 U.

  19. Design methodology for micro-discrete planar optics with minimum illumination loss for an extended source.

    PubMed

    Shim, Jongmyeong; Park, Changsu; Lee, Jinhyung; Kang, Shinill

    2016-08-08

    Recently, studies have examined techniques for modeling the light distribution of light-emitting diodes (LEDs) for various applications owing to their low power consumption, longevity, and light weight. The energy mapping technique, a design method that matches the energy distributions of an LED light source and the target area, has been the focus of active research because of its design efficiency and accuracy. However, these studies have not considered the effects of the emitting area of the LED source. Therefore, there are limitations to the design accuracy for small, high-power applications with a short distance between the light source and the optical system. A design method that compensates for the light distribution of an extended source after an initial optics design based on a point source was proposed to overcome these limits, but its time-consuming process and limited design accuracy over multiple iterations raised the need for a new design method that considers an extended source in the initial design stage. This study proposed a method for designing discrete planar optics that controls the light distribution and minimizes optical loss for an extended source, and verified the proposed method experimentally. First, the extended source was modeled theoretically, and a design method for discrete planar optics with the optimum groove angle through energy mapping was proposed. To verify the design method, discrete planar optics were designed for LED flash illumination. In addition, discrete planar optics for LED illuminance were designed and fabricated to create a uniform illuminance distribution. Optical characterization of these structures showed that the design was optimal; i.e., we plotted the optical losses as a function of the groove angle and found a clear minimum. Simulations and measurements showed that an efficient optical design was achieved for an extended source.

  20. Rethinking moment tensor inversion methods to retrieve the source mechanisms of low-frequency seismic events

    NASA Astrophysics Data System (ADS)

    Karl, S.; Neuberg, J.

    2011-12-01

    Volcanoes exhibit a variety of seismic signals. One specific type, the so-called long-period (LP) or low-frequency event, has proven to be crucial for understanding the internal dynamics of the volcanic system. LP seismic events have been observed at many volcanoes around the world and are thought to be associated with resonating fluid-filled conduits or fluid movements (Chouet, 1996; Neuberg et al., 2006). While the seismic wavefield is well established, the actual trigger mechanism of these events is still poorly understood. Neuberg et al. (2006) proposed a conceptual model for the trigger of LP events at Montserrat involving the brittle failure of magma in the glass transition in response to the upward movement of magma. In an attempt to gain a better quantitative understanding of the driving forces of LPs, inversions for the physical source mechanisms have become increasingly common. Previous studies have assumed a point source for waveform inversion. Applying a point-source model to synthetic seismograms that represent an extended source process does not recover the true source mechanism; it can, however, still yield apparent moment tensor elements that can be compared with previous results in the literature. Therefore, this study follows the concepts proposed by Neuberg et al. (2006), modelling the extended LP source as an octagonal arrangement of double couples approximating a circular ringfault bounding the circumference of the volcanic conduit. Synthetic seismograms were inverted for the physical source mechanisms of LPs using the moment tensor inversion code TDMTISO_INVC by Dreger (2003). Here, we present the effects of changing the source parameters on the apparent moment tensor elements. First results show that, due to destructive interference, the amplitude of the seismic signals of a ringfault structure is greatly reduced compared to a single double-couple source. Furthermore, the best inversion results yield a solution comprised of positive isotropic and compensated linear vector dipole components. Thus, the physical source mechanisms of volcano seismic signals may be misinterpreted as opening shear or tensile cracks when a point source is wrongly assumed. In order to approach the real physical sources with our models, inversions based on higher-order tensors might have to be considered in the future. An inversion technique in which the point source is replaced by a so-called moment tensor density would allow inversions of volcano seismic signals for temporally and spatially extended sources.

  1. Comparative Measurements of Radon Concentration in Soil Using Passive and Active Methods in High Level Natural Radiation Area (HLNRA) of Ramsar

    PubMed Central

    Amanat, B; Kardan, M R; Faghihi, R; Hosseini Pooya, S M

    2013-01-01

    Background: Radon and its daughters are among the most important sources of natural exposure in the world. Soil is one of the significant sources of radon/thoron due to its radium and thorium content, so the emanated thoron may increase the uncertainties of radon measurements. Recently, a diffusion chamber has been designed and optimized for passive discriminative measurements of radon/thoron concentrations in soil. Objective: In order to evaluate the capability of the passive method, comparative measurements against active methods have been performed. Method: The method is based upon measurements by a diffusion chamber, including two Lexan polycarbonate SSNTDs, which can discriminate the radon/thoron emanated from the soil by a delay method. The comparative measurements were performed at ten selected points in the HLNRA of Ramsar in Iran. The linear regression and correlation between the results of the two methods have been studied. Results: The results show that the radon concentrations range from 12.1 to 165 kBq/m3. The correlation coefficient between the results of the active and passive methods was 0.99. The thoron concentrations measured at the points range from 1.9 to 29.5 kBq/m3. Conclusion: The sensitivity, as well as the strong correlation with active measurements, shows that the new low-cost passive method is appropriate for accurate seasonal measurements of radon and thoron concentrations in soil. PMID:25505760

  2. Virtual-source diffusion approximation for enhanced near-field modeling of photon-migration in low-albedo medium.

    PubMed

    Jia, Mengyu; Chen, Xueying; Zhao, Huijuan; Cui, Shanshan; Liu, Ming; Liu, Lingling; Gao, Feng

    2015-01-26

    Most analytical methods for describing light propagation in turbid media exhibit low effectiveness in the near field of a collimated source. Motivated by the Charge Simulation Method in electromagnetic theory as well as established discrete-source-based modeling, we herein report on an improved explicit model for a semi-infinite geometry, referred to as the "Virtual Source" (VS) diffusion approximation (DA), suited to low-albedo media and short source-detector separations. In this model, the collimated light in the standard DA is analogously approximated as multiple isotropic point sources (VSs) distributed along the incident direction. For performance enhancement, a fitting procedure between the calculated and realistic reflectances is adopted in the near field to optimize the VS parameters (intensities and locations). To be practically applicable, an explicit 2VS-DA model is established based on closed-form derivations of the VS parameters for typical ranges of the optical parameters. This parameterized scheme is shown to inherit the mathematical simplicity of the DA while considerably extending its validity in modeling near-field photon migration in low-albedo media. The superiority of the proposed VS-DA method over established ones is demonstrated by comparison with Monte Carlo simulations over wide ranges of source-detector separation and medium optical properties.
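
    The superposition at the core of the VS idea is easy to sketch: the collimated beam is replaced by a few isotropic point sources along the incident direction, each contributing the DA Green's function. The sketch below uses the infinite-medium kernel and omits boundary (image-source) terms; the optical properties, depths, and weights are hypothetical, not the paper's fitted 2VS parameters.

```python
import numpy as np

def vs_da_fluence(rho, sources, mu_a=0.1, mu_s_prime=1.0):
    """Fluence at radial distance rho (mm) as a superposition of
    isotropic point sources ('virtual sources') using the
    infinite-medium DA Green's function; properties in mm^-1."""
    D = 1.0/(3.0*(mu_a + mu_s_prime))
    mu_eff = np.sqrt(mu_a/D)
    phi = 0.0
    for depth, weight in sources:
        r = np.hypot(rho, depth)
        phi += weight*np.exp(-mu_eff*r)/(4.0*np.pi*D*r)
    return phi

# two virtual sources along the incident direction: (depth mm, weight)
print(vs_da_fluence(rho=2.0, sources=[(0.5, 0.8), (1.5, 0.2)]))
```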

  3. Active AirCore Sampling: Constraining Point Sources of Methane and Other Gases with Fixed Wing Unmanned Aerial Systems

    NASA Astrophysics Data System (ADS)

    Bent, J. D.; Sweeney, C.; Tans, P. P.; Newberger, T.; Higgs, J. A.; Wolter, S.

    2017-12-01

    Accurate estimates of point-source gas emissions are essential for reconciling top-down and bottom-up greenhouse gas measurements, but sampling such sources is challenging. Remote sensing methods are limited by resolution and cloud cover; aircraft methods are limited by air traffic control clearances and the need to properly determine boundary layer height. A new sampling approach leverages the ability of unmanned aerial systems (UAS) to measure all the way to the surface near the source of emissions, improving sample resolution and reducing the need to characterize a wide downstream swath or to measure to the full height of the planetary boundary layer (PBL). The "Active AirCore" sampler, currently under development, will fly on a fixed-wing UAS in Class G airspace, spiraling from the surface to 1200 ft AGL around point sources such as leaking oil wells to measure methane, carbon dioxide, and carbon monoxide. The sampler collects a 100-meter-long "core" of air in a 1/8-inch passivated stainless steel tube. This "core" is run on a high-precision instrument shortly after the UAS is recovered. Sample values are mapped to specific geographic locations by cross-referencing GPS and flow/pressure metadata, and fluxes are quantified by applying Gauss's theorem to the data, mapped onto the spatial "cylinder" circumscribed by the UAS. The Active AirCore sampler builds on the sampling ability and analytical approach of the related AirCore sampler, which profiles the atmosphere passively using a balloon launch platform, but adds the active pumping capability needed for near-surface horizontal sampling applications. Here, we show design elements and laboratory and field test results for methane, describe the overall goals of the mission, and discuss how the platform can be adapted, with minimal effort, to measure other gas species.

  4. A fast marching algorithm for the factored eikonal equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Treister, Eran, E-mail: erantreister@gmail.com; Haber, Eldad, E-mail: haber@math.ubc.ca; Department of Mathematics, The University of British Columbia, Vancouver, BC

    The eikonal equation is instrumental in many applications in several fields ranging from computer vision to geoscience. This equation can be efficiently solved using the iterative Fast Sweeping (FS) methods and the direct Fast Marching (FM) methods. However, when used for a point source, the original eikonal equation is known to yield inaccurate numerical solutions because of a singularity at the source. In this case, the factored eikonal equation is often preferred, and is known to yield a more accurate numerical solution. One application that requires the solution of the eikonal equation for point sources is travel time tomography. This inverse problem may be formulated using the eikonal equation as a forward problem. While this problem has been solved using FS in the past, the more recent choice is to apply FM methods because of the efficiency with which sensitivities can be obtained using them. However, while several FS methods are available for solving the factored equation, the FM method is available only for the original eikonal equation. In this paper we develop a Fast Marching algorithm for the factored eikonal equation, using both first- and second-order finite-difference schemes. Our algorithm follows the same lines as the original FM algorithm and requires the same computational effort. In addition, we show how to obtain sensitivities using this FM method and apply travel time tomography, formulated as an inverse factored eikonal equation. Numerical results in two and three dimensions show that our algorithm solves the factored eikonal equation efficiently, and demonstrate the achieved accuracy for computing the travel time. We also demonstrate recovery of a 2D and 3D heterogeneous medium by travel time tomography using the eikonal equation for forward modeling and inversion by Gauss-Newton.
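
    The factorization at the heart of the paper can be stated compactly. The LaTeX below is our summary of the standard factored form, with s the slowness and x0 the source location:

```latex
% Factored eikonal equation: split the travel time into a known,
% distance-like factor tau_0 (which carries the source singularity)
% and a smooth unknown factor tau_1.
\begin{align*}
  |\nabla \tau| &= s(\mathbf{x}), \qquad \tau = \tau_0\,\tau_1, \\
  \tau_0(\mathbf{x}) &= s(\mathbf{x}_0)\,\lVert \mathbf{x}-\mathbf{x}_0 \rVert, \\
  \lvert \tau_1 \nabla \tau_0 + \tau_0 \nabla \tau_1 \rvert &= s(\mathbf{x}),
\end{align*}
% so the marching scheme updates tau_1, which is smooth near the
% source, instead of tau itself.
```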

  5. Speech-Message Extraction from Interference Introduced by External Distributed Sources

    NASA Astrophysics Data System (ADS)

    Kanakov, V. A.; Mironov, N. A.

    2017-08-01

    This study addresses the extraction of a speech signal originating from a specific spatial point and the calculation of the intelligibility of the extracted voice message. The problem is solved by reducing the influence of interference from the other speech-message sources on the extracted signal. The method is based on introducing time delays, which depend on the spatial coordinates, into the recording channels. Audio records of the voices of eight different people were used as test objects during the studies. It is proved that an increase in the number of microphones improves the intelligibility of the speech message extracted from the interference.
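
    The delay-compensation idea maps directly onto delay-and-sum beamforming; a minimal sketch under assumed geometry (microphone positions, focus point and sampling rate are illustrative) is:

        import numpy as np

        def delay_and_sum(signals, mic_positions, focus_point, fs, c=343.0):
            """Steer an array to a spatial point by aligning propagation delays.

            signals: (n_mics, n_samples) array; positions in metres; fs in Hz.
            """
            dists = np.linalg.norm(mic_positions - focus_point, axis=1)
            delays = (dists - dists.min()) / c           # seconds, relative to nearest mic
            shifts = np.round(delays * fs).astype(int)   # integer-sample approximation
            n = signals.shape[1]
            out = np.zeros(n)
            for sig, s in zip(signals, shifts):
                out[: n - s] += sig[s:]                  # advance later channels
            return out / len(signals)                    # average: focus-point speech adds coherently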

  6. Skyshine line-beam response functions for 20- to 100-MeV photons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brockhoff, R.C.; Shultis, J.K.; Faw, R.E.

    1996-06-01

    The line-beam response function, needed for skyshine analyses based on the integral line-beam method, was evaluated with the MCNP Monte Carlo code for photon energies from 20 to 100 MeV and for source-to-detector distances out to 1,000 m. These results are compared with point-kernel results, and the effects of bremsstrahlung and positron transport in the air are found to be important in this energy range. The three-parameter empirical formula used in the integral line-beam skyshine method was fit to the MCNP results, and values of these parameters are reported for various source energies and angles.

  7. Interpolating precipitation and its relation to runoff and non-point source pollution.

    PubMed

    Chang, Chia-Ling; Lo, Shang-Lien; Yu, Shaw-L

    2005-01-01

    When rainfall varies spatially, complete rainfall data for each region with different rainfall characteristics are very important. Numerous interpolation methods have been developed for estimating unknown spatial characteristics. However, no interpolation method is suitable for all circumstances. In this study, several methods, including the arithmetic average method, the Thiessen Polygons method, the traditional inverse distance method, and the modified inverse distance method, were used to interpolate precipitation. The modified inverse distance method considers not only horizontal distances but also the differences between the elevation of the region with no rainfall records and the elevations of its surrounding rainfall stations. The results show that when the spatial variation of rainfall is strong, choosing a suitable interpolation method is very important. If the rainfall is uniform, the precipitation estimated using any interpolation method would be quite close to the actual precipitation. When rainfall is heavy in locations with high elevation, the rainfall changes with the elevation. In this situation, the modified inverse distance method is much more effective than any other method discussed herein for estimating the rainfall input for WinVAST to estimate runoff and non-point source pollution (NPSP). When the spatial variation of rainfall is random, regardless of the interpolation method used to yield rainfall input, the estimation errors of runoff and NPSP are large. Moreover, the relationship between the relative error of the predicted runoff and that of the predicted pollutant loading of suspended solids (SS) is strong. However, the pollutant concentration is affected by both runoff and pollutant export, so the relationship between the relative error of the predicted runoff and that of the predicted pollutant concentration of SS may be unstable.
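
    A minimal sketch of the elevation-aware weighting follows; the weighting exponent and the elevation scale factor k are illustrative assumptions, not the authors' calibrated values:

        import numpy as np

        def modified_idw(x, y, z, stations, power=2.0, k=1.0):
            """Estimate rainfall at (x, y, elevation z) from station records.

            stations: array of rows (x_i, y_i, z_i, rain_i).
            """
            sx, sy, sz, rain = stations.T
            d_h = np.hypot(sx - x, sy - y)             # horizontal distance
            d_v = np.abs(sz - z)                       # elevation difference
            d_eff = np.sqrt(d_h**2 + (k * d_v) ** 2)   # combined separation
            w = 1.0 / np.maximum(d_eff, 1e-9) ** power
            return np.sum(w * rain) / np.sum(w)

        # Example: three stations (x km, y km, elevation m, rainfall mm) around
        # a high-elevation target point.
        stations = np.array([[0.0, 0.0, 200.0, 10.0],
                             [5.0, 0.0, 800.0, 25.0],
                             [0.0, 5.0, 400.0, 15.0]])
        print(modified_idw(2.0, 2.0, 700.0, stations, k=0.01))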

  8. A method for separation of heavy metal sources in urban groundwater using multiple lines of evidence.

    PubMed

    Hepburn, Emily; Northway, Anne; Bekele, Dawit; Liu, Gang-Jun; Currell, Matthew

    2018-06-11

    Determining sources of heavy metals in soils, sediments and groundwater is important for understanding their fate and transport and for mitigating human and environmental exposures. Artificially imported fill, natural sediments and groundwater from 240 ha of reclaimed land at Fishermans Bend in Australia were analysed for heavy metals and other parameters to determine the relative contributions from different possible sources. Fishermans Bend is Australia's largest urban re-development project; however, its complicated land-use history, geology, and multiple contamination sources pose challenges to successful re-development. We developed a method for heavy metal source separation in groundwater using statistical categorisation of the data, analysis of soil leaching values and fill/sediment XRF profiling. The method identified two major sources of heavy metals in groundwater: (1) point sources from local or up-gradient groundwater contaminated by industrial activities and/or legacy landfills; and (2) contaminated fill, where leaching of Cu, Mn, Pb and Zn was observed. Across the precinct, metals were most commonly sourced from a combination of these sources; however, eight locations indicated at least one metal sourced solely from fill leaching, and 23 locations indicated at least one metal sourced solely from impacted groundwater. Concentrations of heavy metals in groundwater ranged from 0.0001 to 0.003 mg/L (Cd), 0.001-0.1 mg/L (Cr), 0.001-0.2 mg/L (Cu), 0.001-0.5 mg/L (Ni), 0.001-0.01 mg/L (Pb), and 0.005-1.2 mg/L (Zn). Our method can determine the likely contribution of different metal sources to groundwater, helping inform more detailed contamination assessments and precinct-wide management and remediation strategies. Copyright © 2018 Elsevier Ltd. All rights reserved.

  9. X-Ray Diffraction Wafer Mapping Method for Rhombohedral Super-Hetero-Epitaxy

    NASA Technical Reports Server (NTRS)

    Park, Yoonjoon; Choi, Sang Hyouk; King, Glen C.; Elliott, James R.; Dimarcantonio, Albert L.

    2010-01-01

    A new X-ray diffraction (XRD) method is provided to acquire XY mapping of the distribution of single crystals, poly-crystals, and twin defects across an entire wafer of rhombohedral super-hetero-epitaxial semiconductor material. In one embodiment, the method is performed with a point or line X-ray source at near-normal incidence (close to 90 deg), and the beam mask is preferably replaced with a crossed slit. While the wafer moves in the X and Y directions, a narrowly defined X-ray source illuminates the sample and the diffracted X-ray beam is monitored by the detector at a predefined angle. Preferably, the untilted, asymmetric scans are of {440} peaks for twin defect characterization.

  10. 40 CFR 51.35 - How can my state equalize the emission inventory effort from year to year?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... approach: (1) Each year, collect and report data for all Type A (large) point sources (this is required for all Type A point sources). (2) Each year, collect data for one-third of your sources that are not Type... save 3 years of data and then report all emissions from the sources that are not Type A point sources...

  11. 40 CFR 51.35 - How can my state equalize the emission inventory effort from year to year?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... approach: (1) Each year, collect and report data for all Type A (large) point sources (this is required for all Type A point sources). (2) Each year, collect data for one-third of your sources that are not Type... save 3 years of data and then report all emissions from the sources that are not Type A point sources...

  12. 40 CFR 51.35 - How can my state equalize the emission inventory effort from year to year?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... approach: (1) Each year, collect and report data for all Type A (large) point sources (this is required for all Type A point sources). (2) Each year, collect data for one-third of your sources that are not Type... save 3 years of data and then report all emissions from the sources that are not Type A point sources...

  13. Sparse electrocardiogram signals recovery based on solving a row echelon-like form of system.

    PubMed

    Cai, Pingmei; Wang, Guinan; Yu, Shiwei; Zhang, Hongjuan; Ding, Shuxue; Wu, Zikai

    2016-02-01

    The study of biology and medicine in a noise environment is an evolving direction in biological data analysis. Among these studies, the analysis of electrocardiogram (ECG) signals in a noise environment is a challenging direction in personalized medicine. Due to its periodic character, the ECG signal can be roughly regarded as a sparse biomedical signal. This study proposes a two-stage recovery algorithm for sparse biomedical signals in the time domain. In the first stage, the concentration subspaces are found in advance. Then, by exploiting these subspaces, the mixing matrix is estimated accurately. In the second stage, based on the number of active sources at each time point, the time points are divided into different layers. Next, by constructing some transformation matrices, these time points form a row echelon-like system. After that, the sources at each layer can be solved explicitly by corresponding matrix operations. It is worth noting that all these operations are conducted under a weak sparsity condition, namely that the number of active sources is less than the number of observations. Experimental results show that the proposed method performs better on the sparse ECG signal recovery problem.

  14. Additional adjoint Monte Carlo studies of the shielding of concrete structures against initial gamma radiation. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beer, M.; Cohen, M.O.

    1975-02-01

    The adjoint Monte Carlo method previously developed by MAGI has been applied to the calculation of initial radiation dose due to air secondary gamma rays and fission product gamma rays at detector points within buildings for a wide variety of problems. These provide an in-depth survey of structure shielding effects as well as many new benchmark problems for matching by simplified models. Specifically, elevated ring source results were obtained in the following areas: doses at on- and off-centerline detectors in four concrete blockhouse structures; doses at detector positions along the centerline of a high-rise structure without walls; dose mapping at basement detector positions in the high-rise structure; doses at detector points within a complex concrete structure containing exterior windows and walls and interior partitions; modeling of the complex structure by replacing interior partitions by additional material at exterior walls; effects of elevation angle changes; effects on the dose of changes in fission product ambient spectra; and modeling of mutual shielding due to external structures. In addition, point source results yielding dose extremes about the ring source average were obtained. (auth)

  15. Statistical interpretation of pollution data from satellites. [for levels distribution over metropolitan area

    NASA Technical Reports Server (NTRS)

    Smith, G. L.; Green, R. N.; Young, G. R.

    1974-01-01

    The NIMBUS-G environmental monitoring satellite has an instrument (a gas correlation spectrometer) onboard for measuring the mass of a given pollutant within a gas volume. The present paper treats the problem of how this type of measurement can be used to estimate the distribution of pollutant levels in a metropolitan area. Estimation methods are used to develop this distribution. The pollution concentration caused by a point source is modeled as a Gaussian plume. The uncertainty in the measurements is used to determine the accuracy of estimating the source strength, the wind velocity, the diffusion coefficients and the source location.
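
    The Gaussian plume forward model at the heart of such estimators is compact enough to sketch; the dispersion-coefficient growth laws below are illustrative assumptions, and estimating source strength or location would amount to fitting this function to the measurements:

        import numpy as np

        def gaussian_plume(x, y, z, Q, u, H, a=0.08, b=0.06):
            """Concentration (g/m^3) at (x, y, z) downwind of a stack of height H.

            Q: emission rate (g/s); u: wind speed (m/s) along +x.
            sigma_y = a*x and sigma_z = b*x are assumed linear growth laws.
            """
            x = np.maximum(x, 1e-6)              # model valid only downwind (x > 0)
            sig_y, sig_z = a * x, b * x
            lateral = np.exp(-y**2 / (2 * sig_y**2))
            # Reflection at the ground: image source at z = -H.
            vertical = (np.exp(-(z - H) ** 2 / (2 * sig_z**2))
                        + np.exp(-(z + H) ** 2 / (2 * sig_z**2)))
            return Q / (2 * np.pi * u * sig_y * sig_z) * lateral * vertical

        print(gaussian_plume(x=1000.0, y=0.0, z=0.0, Q=100.0, u=5.0, H=50.0))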

  16. Injector Beam Dynamics for a High-Repetition Rate 4th-Generation Light Source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Papadopoulos, C. F.; Corlett, J.; Emma, P.

    2013-05-20

    We report on the beam dynamics studies and optimization methods for a high repetition rate (1 MHz) photoinjector based on a VHF normal conducting electron source. The simultaneous goals of beam compression and preservation of 6-dimensional beam brightness have to be achieved in the injector in order to accommodate a linac-driven FEL light source. For this, a parallel, multiobjective optimization algorithm is used. We discuss the relative merits of different injector design points, as well as the constraints imposed on the beam dynamics by technical considerations such as the high repetition rate.

  17. Method for reducing energy losses in laser crystals

    DOEpatents

    Atherton, L.J.; DeYoreo, J.J.; Roberts, D.H.

    1992-03-24

    A process for reducing energy losses in crystals is disclosed which comprises: a. heating a crystal to a temperature sufficiently high as to cause dissolution of microscopic inclusions into the crystal, thereby converting said inclusions into point-defects, and b. maintaining said crystal at a given temperature for a period of time sufficient to cause said point-defects to diffuse out of said crystal. Also disclosed are crystals treated by the process, and lasers utilizing the crystals as a source of light. 12 figs.

  18. Method for reducing energy losses in laser crystals

    DOEpatents

    Atherton, L. Jeffrey; DeYoreo, James J.; Roberts, David H.

    1992-01-01

    A process for reducing energy losses in crystals is disclosed which comprises: a. heating a crystal to a temperature sufficiently high as to cause dissolution of microscopic inclusions into the crystal, thereby converting said inclusions into point-defects, and b. maintaining said crystal at a given temperature for a period of time sufficient to cause said point-defects to diffuse out of said crystal. Also disclosed are crystals treated by the process, and lasers utilizing the crystals as a source of light.

  19. Power-output regularization in global sound equalization.

    PubMed

    Stefanakis, Nick; Sarris, John; Cambourakis, George; Jacobsen, Finn

    2008-01-01

    The purpose of equalization in room acoustics is to compensate for the undesired modification that an enclosure introduces to signals such as audio or speech. In this work, equalization in a large part of the volume of a room is addressed. The multiple point method is employed with an acoustic power-output penalty term instead of the traditional quadratic source effort penalty term. Simulation results demonstrate that this technique gives a smoother decline of the reproduction performance away from the control points.
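
    In matrix form the multiple point method is a regularized least-squares problem; the sketch below uses the identity as a stand-in for the power-output penalty matrix, since the true acoustic power matrix depends on the source layout, and all values here are synthetic:

        import numpy as np

        rng = np.random.default_rng(0)
        M, N = 20, 4                       # control points, loudspeaker sources
        H = rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N))  # room responses
        d = rng.normal(size=M) + 1j * rng.normal(size=M)            # desired field
        beta = 0.1
        A = np.eye(N)                      # identity = source-effort penalty; swap in a
                                           # power-output matrix for the paper's variant

        # Closed-form regularised least squares: q = (H^H H + beta A)^{-1} H^H d
        q = np.linalg.solve(H.conj().T @ H + beta * A, H.conj().T @ d)
        print(np.linalg.norm(H @ q - d))   # residual equalization error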

  20. Accuracy analysis of pointing control system of solar power station

    NASA Technical Reports Server (NTRS)

    Hung, J. C.; Peebles, P. Z., Jr.

    1978-01-01

    The first-phase effort concentrated on defining the minimum basic functions that the retrodirective array must perform, identifying circuits that are capable of satisfying the basic functions, and looking at some of the error sources in the system and how they affect accuracy. The initial effort also examined three methods for generating torques for mechanical antenna control, performed a rough analysis of the flexible body characteristics of the solar collector, and defined a control system configuration for mechanical pointing control of the array.

  1. A multiwave range test for obstacle reconstructions with unknown physical properties

    NASA Astrophysics Data System (ADS)

    Potthast, Roland; Schulz, Jochen

    2007-08-01

    We develop a new multiwave version of the range test for shape reconstruction in inverse scattering theory. The range test [R. Potthast, et al., A `range test' for determining scatterers with unknown physical properties, Inverse Problems 19(3) (2003) 533-547] was originally proposed to obtain knowledge about an unknown scatterer when the far field pattern for only one plane wave is given. Here, we extend the method to the case of multiple waves and show that the full shape of the unknown scatterer can be reconstructed. We will further clarify the relation between the range test methods, the potential method [A. Kirsch, R. Kress, On an integral equation of the first kind in inverse acoustic scattering, in: Inverse Problems (Oberwolfach, 1986), Internationale Schriftenreihe zur Numerischen Mathematik, vol. 77, Birkhauser, Basel, 1986, pp. 93-102] and the singular sources method [R. Potthast, Point sources and multipoles in inverse scattering theory, Habilitation Thesis, Gottingen, 1999]. In particular, we propose a new version of the Kirsch-Kress method using the range test and a new approach to the singular sources method based on the range test and potential method. Numerical examples of reconstructions for all four methods are provided.

  2. Evaluation of a stepwise, multi-objective, multi-variable parameter optimization method for the APEX model

    USDA-ARS?s Scientific Manuscript database

    Hydrologic models are essential tools for environmental assessment of agricultural non-point source pollution. The automatic calibration of hydrologic models, though efficient, demands significant computational power, which can limit its application. The study objective was to investigate a cost e...

  3. Surface‐wave Green’s tensors in the near field

    USGS Publications Warehouse

    Haney, Matt; Nakahara, Hisashi

    2014-01-01

    We demonstrate the connection between theoretical expressions for the correlation of ambient noise Rayleigh and Love waves and the exact surface‐wave Green’s tensors for a point force. The surface‐wave Green’s tensors are well known in the far‐field limit. On the other hand, the imaginary part of the exact Green’s tensors, including near‐field effects, arises in correlation techniques such as the spatial autocorrelation (SPAC) method. Using the imaginary part of the exact Green’s tensors from the SPAC method, we find the associated real part using the Kramers–Kronig relations. The application of the Kramers–Kronig relations is not straightforward, however, because the causality properties of the different tensor components vary. In addition to the Green’s tensors for a point force, we also derive expressions for a general point moment tensor source.
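
    The dispersion relation invoked here is the standard Kramers–Kronig (Hilbert-transform) pair; writing it out makes the paper's caveat concrete, since the sign and contour conventions below presuppose a causal response, which holds in component-dependent ways for the Green's tensor:

        % Standard Kramers-Kronig pair for a causal frequency response G(omega);
        % the paper's subtlety is that causality, and hence these conventions,
        % differ between the tensor components.
        \operatorname{Re} G(\omega) = \frac{1}{\pi}\,\mathcal{P}\!\int_{-\infty}^{\infty}
          \frac{\operatorname{Im} G(\omega')}{\omega' - \omega}\, d\omega',
        \qquad
        \operatorname{Im} G(\omega) = -\frac{1}{\pi}\,\mathcal{P}\!\int_{-\infty}^{\infty}
          \frac{\operatorname{Re} G(\omega')}{\omega' - \omega}\, d\omega'.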

  4. Development and application of a reactive plume-in-grid model: evaluation over Greater Paris

    NASA Astrophysics Data System (ADS)

    Korsakissok, I.; Mallet, V.

    2010-09-01

    Emissions from major point sources are badly represented by classical Eulerian models. An overestimation of the horizontal plume dilution, a bad representation of the vertical diffusion as well as an incorrect estimate of the chemical reaction rates are the main limitations of such models in the vicinity of major point sources. The plume-in-grid method is a multiscale modeling technique that couples a local-scale Gaussian puff model with an Eulerian model in order to better represent these emissions. We present the plume-in-grid model developed in the air quality modeling system Polyphemus, with full gaseous chemistry. The model is evaluated on the metropolitan Île-de-France region, during six months (summer 2001). The subgrid-scale treatment is used for 89 major point sources, a selection based on the emission rates of NOx and SO2. Results with and without the subgrid treatment of point emissions are compared, and their performance by comparison to the observations on measurement stations is assessed. A sensitivity study is also carried out, on several local-scale parameters as well as on the vertical diffusion within the urban area. Primary pollutants are shown to be the most impacted by the plume-in-grid treatment. SO2 is the most impacted pollutant, since the point sources account for an important part of the total SO2 emissions, whereas NOx emissions are mostly due to traffic. The spatial impact of the subgrid treatment is localized in the vicinity of the sources, especially for reactive species (NOx and O3). Ozone is mostly sensitive to the time step between two puff emissions which influences the in-plume chemical reactions, whereas the almost-passive species SO2 is more sensitive to the injection time, which determines the duration of the subgrid-scale treatment. Future developments include an extension to handle aerosol chemistry, and an application to the modeling of line sources in order to use the subgrid treatment with road emissions. The latter is expected to lead to more striking results, due to the importance of traffic emissions for the pollutants of interest.

  5. Investigation of Finite Sources through Time Reversal

    NASA Astrophysics Data System (ADS)

    Kremers, Simon; Brietzke, Gilbert; Igel, Heiner; Larmat, Carene; Fichtner, Andreas; Johnson, Paul A.; Huang, Lianjie

    2010-05-01

    Under certain conditions time reversal is a promising method to determine earthquake source characteristics without any a-priori information (except the earth model and the data). It consists of injecting flipped-in-time records from seismic stations within the model to create an approximate reverse movie of wave propagation from which the location of the hypocenter and other information might be inferred. In this study, the backward propagation is performed numerically using a parallel cartesian spectral element code. Initial tests using point source moment tensors serve as control for the adaptability of the used wave propagation algorithm. After that we investigated the potential of time reversal to recover finite source characteristics (e.g., size of ruptured area, rupture velocity etc.). We used synthetic data from the SPICE kinematic source inversion blind test initiated to investigate the performance of current kinematic source inversion approaches (http://www.spice-rtn.org/library/valid). The synthetic data set attempts to reproduce the 2000 Tottori earthquake with 33 records close to the fault. We discuss the influence of various assumptions made on the source (e.g., origin time, hypocenter, fault location, etc.), adjoint source weighting (e.g., correct for epicentral distance) and structure (uncertainty in the velocity model) on the results of the time reversal process. We give an overview about the quality of focussing of the different wavefield properties (i.e., displacements, strains, rotations, energies). Additionally, the potential to recover source properties of multiple point sources at the same time is discussed.

  6. Detection prospects for high energy neutrino sources from the anisotropic matter distribution in the local Universe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mertsch, Philipp; Rameez, Mohamed; Tamborra, Irene, E-mail: mertsch@nbi.ku.dk, E-mail: mohamed.rameez@nbi.ku.dk, E-mail: tamborra@nbi.ku.dk

    Constraints on the number and luminosity of the sources of the cosmic neutrinos detected by IceCube have been set by targeted searches for point sources. We set complementary constraints by using the 2MASS Redshift Survey (2MRS) catalogue, which maps the matter distribution of the local Universe. Assuming that the distribution of the neutrino sources follows that of matter, we look for correlations between "warm" spots on the IceCube skymap and the 2MRS matter distribution. Through Monte Carlo simulations of the expected number of neutrino multiplets and careful modelling of the detector performance (including that of IceCube-Gen2), we demonstrate that sources with local density exceeding 10⁻⁶ Mpc⁻³ and neutrino luminosity L_ν ≲ 10⁴² erg s⁻¹ (10⁴¹ erg s⁻¹) will be efficiently revealed by our method using IceCube (IceCube-Gen2). At low luminosities such as will be probed by IceCube-Gen2, the sensitivity of this analysis is superior to requiring statistically significant direct observation of a point source.

  7. VOXES: a high precision X-ray spectrometer for diffused sources with HAPG crystals in the 2–20 keV range

    NASA Astrophysics Data System (ADS)

    Scordo, A.; Curceanu, C.; Miliucci, M.; Shi, H.; Sirghi, F.; Zmeskal, J.

    2018-04-01

    Bragg spectroscopy is one of the best established experimental methods for high energy resolution X-ray measurements and has been widely used in several fields, going from fundamental physics to quantum mechanics tests, synchrotron radiation and X-FEL applications, astronomy, medicine and industry. However, this technique is limited to the measurement of photons produced from well collimated or point-like sources and becomes quite inefficient for photons coming from extended and diffused sources like those, for example, emitted in the exotic atoms radiative transitions. The VOXES project's goal is to realise a prototype of a high resolution and high precision X-ray spectrometer, using Highly Annealed Pyrolitic Graphite (HAPG) crystals in the Von Hamos configuration, working also for extended sources. The aim is to deliver a cost effective system having an energy resolution at the level of eV for X-ray energies from about 2 keV up to tens of keV, able to perform sub-eV precision measurements with non point-like sources. In this paper, the working principle of VOXES, together with first results, are presented.
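
    As a quick plausibility check on the quoted energy range, Bragg's law fixes the diffraction angle at each energy; the sketch below assumes the graphite (002) lattice spacing d ≈ 3.354 Å and a first-order reflection, both stated here as illustrative assumptions:

        import numpy as np

        def bragg_angle_deg(energy_keV, d_angstrom=3.354, order=1):
            """Bragg angle from n*lambda = 2*d*sin(theta)."""
            lam = 12.398 / energy_keV                 # photon wavelength in angstrom
            s = order * lam / (2.0 * d_angstrom)
            return np.degrees(np.arcsin(s)) if s <= 1.0 else np.nan

        for E in (2.0, 6.0, 10.0, 20.0):
            print(f"{E:5.1f} keV -> theta_B = {bragg_angle_deg(E):5.2f} deg")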

  8. NPTFit: A Code Package for Non-Poissonian Template Fitting

    NASA Astrophysics Data System (ADS)

    Mishra-Sharma, Siddharth; Rodd, Nicholas L.; Safdi, Benjamin R.

    2017-06-01

    We present NPTFit, an open-source code package, written in Python and Cython, for performing non-Poissonian template fits (NPTFs). The NPTF is a recently developed statistical procedure for characterizing the contribution of unresolved point sources (PSs) to astrophysical data sets. The NPTF was first applied to Fermi gamma-ray data to provide evidence that the excess of ˜GeV gamma-rays observed in the inner regions of the Milky Way likely arises from a population of sub-threshold point sources, and the NPTF has since found additional applications studying sub-threshold extragalactic sources at high Galactic latitudes. The NPTF generalizes traditional astrophysical template fits to allow for the ability to search for populations of unresolved PSs that may follow a given spatial distribution. NPTFit builds upon the framework of the fluctuation analyses developed in X-ray astronomy, thus it likely has applications beyond those demonstrated with gamma-ray data. The NPTFit package utilizes novel computational methods to perform the NPTF efficiently. The code is available at http://github.com/bsafdi/NPTFit and up-to-date and extensive documentation may be found at http://nptfit.readthedocs.io.
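
    A typical analysis can be set up in a few lines; the sketch below is reconstructed from memory of the package's documentation (nptfit.readthedocs.io), so argument names, signatures and prior conventions should be verified there, and the maps are random stand-ins for real HEALPix data:

        import numpy as np
        from NPTFit import nptfit   # assumes NPTFit is installed

        npix = 12 * 64 ** 2                       # HEALPix nside = 64
        counts = np.random.poisson(1.0, npix)     # photon count map (stand-in)
        exposure = np.ones(npix)                  # exposure map (stand-in)
        iso = np.ones(npix)                       # isotropic spatial template

        n = nptfit.NPTF(tag='ps_example')
        n.load_data(counts, exposure)
        n.add_template(iso, 'iso_p')
        n.add_template(iso, 'iso_np')

        # Smooth (Poissonian) component with a linear prior on its normalisation.
        n.add_poiss_model('iso_p', '$A_\\mathrm{iso}$', [0, 2], False)

        # Unresolved point-source (non-Poissonian) component: broken power-law
        # source-count function with normalisation, two indices and a break.
        n.add_non_poiss_model('iso_np',
                              ['$A_\\mathrm{ps}$', '$n_1$', '$n_2$', '$S_b$'],
                              [[-6, 1], [2.05, 30], [-2, 1.95], [0.05, 30]],
                              [True, False, False, False])

        n.configure_for_scan()
        n.perform_scan(nlive=100)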

  9. Non-domestic phosphorus release in rivers during low-flow: Mechanisms and implications for sources identification

    NASA Astrophysics Data System (ADS)

    Dupas, Rémi; Tittel, Jörg; Jordan, Phil; Musolff, Andreas; Rode, Michael

    2018-05-01

    A common assumption in phosphorus (P) load apportionment studies is that P loads in rivers consist of flow independent point source emissions (mainly from domestic and industrial origins) and flow dependent diffuse source emissions (mainly from agricultural origin). Hence, rivers dominated by point sources will exhibit highest P concentration during low-flow, when flow dilution capacity is minimal, whereas rivers dominated by diffuse sources will exhibit highest P concentration during high-flow, when land-to-river hydrological connectivity is maximal. Here, we show that Soluble Reactive P (SRP) concentrations in three forested catchments free of point sources exhibited seasonal maxima during the summer low-flow period, i.e. a pattern expected in point source dominated areas. A load apportionment model (LAM) is used to show how point sources contribution may have been overestimated in previous studies, because of a biogeochemical process mimicking a point source signal. Almost twenty-two years (March 1995-September 2016) of monthly monitoring data of SRP, dissolved iron (Fe) and nitrate-N (NO3) were used to investigate the underlying mechanisms: SRP and Fe exhibited similar seasonal patterns and opposite to that of NO3. We hypothesise that Fe oxyhydroxide reductive dissolution might be the cause of SRP release during the summer period, and that NO3 might act as a redox buffer, controlling the seasonality of SRP release. We conclude that LAMs may overestimate the contribution of P point sources, especially during the summer low-flow period, when eutrophication risk is maximal.
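
    A minimal sketch of the kind of load apportionment model at issue, using the common Bowes-type two-term form in which a dilution term (exponent below 1) represents point-like inputs and a flow-mobilised term (exponent above 1) represents diffuse inputs; the data below are synthetic, and the paper's point is that redox-driven release can masquerade as the first term:

        import numpy as np
        from scipy.optimize import curve_fit

        def lam(Q, A, B, C, D):
            """Load apportionment model: L(Q) = A*Q**B + C*Q**D, B < 1 <= D."""
            return A * Q**B + C * Q**D

        Q = np.linspace(0.1, 10.0, 50)            # discharge (m^3/s)
        L_obs = lam(Q, 2.0, 0.3, 0.5, 1.6)        # synthetic "observed" loads
        L_obs *= np.exp(0.05 * np.random.default_rng(1).normal(size=Q.size))

        popt, _ = curve_fit(lam, Q, L_obs, p0=[1, 0.5, 1, 1.5],
                            bounds=([0, 0, 0, 1], [np.inf, 1, np.inf, 3]))
        print(dict(zip("ABCD", popt.round(2))))   # apportioned point/diffuse terms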

  10. Calculation and analysis of the non-point source pollution in the upstream watershed of the Panjiakou Reservoir, People's Republic of China

    NASA Astrophysics Data System (ADS)

    Zhang, S.; Tang, L.

    2007-05-01

    Panjiakou Reservoir is an important drinking water resource in the Haihe River Basin, Hebei Province, People's Republic of China. The upstream watershed area is about 35,000 square kilometers. Recently, the water pollution in the reservoir has become more serious owing to non-point source pollution as well as point source pollution in the upstream watershed. To effectively manage the reservoir and watershed and develop a plan to reduce pollutant loads, the loading of non-point and point pollution and their distribution on the upstream watershed must be understood fully. The SWAT model is used to simulate the production and transportation of the non-point source pollutants in the upstream watershed of the Panjiakou Reservoir. The loadings of non-point source pollutants are calculated for different hydrologic years and the spatial and temporal characteristics of non-point source pollution are studied. The stream network and the topographic characteristics of the stream network and sub-basins are all derived from the DEM by ArcGIS software. The soil and land use data are reclassified and the soil physical properties database file is created for the model. The SWAT model was calibrated with observed data from several hydrologic monitoring stations in the study area. The results of the calibration show that the model performs fairly well. The calibrated model was then used to calculate the loadings of non-point source pollutants for a wet year, a normal year and a dry year respectively. The time and space distribution of flow, sediment and non-point source pollution were analyzed on the basis of the simulated results. The calculated results differ dramatically among the hydrologic years: the loading of non-point source pollution is relatively large in the wet year and small in the dry year, since the non-point source pollutants are mainly transported by runoff. The pollution loading within a year is mainly produced in the flood season. Because SWAT is a distributed model, it is possible to view model output as it varies across the basin, so the critical areas and reaches can be found in the study area. According to the simulation results, it is found that different land uses yield different results and that fertilization in the rainy season has an important impact on non-point source pollution. The limitations of the SWAT model are also discussed, and measures for the control and prevention of non-point source pollution in the Panjiakou Reservoir watershed are presented according to the analysis of the model calculation results.

  11. Comparison of Point Cloud Registration Algorithms for Better Result Assessment - Towards AN Open-Source Solution

    NASA Astrophysics Data System (ADS)

    Lachat, E.; Landes, T.; Grussenmeyer, P.

    2018-05-01

    Terrestrial and airborne laser scanning, photogrammetry and more generally 3D recording techniques are used in a wide range of applications. After recording several individual 3D datasets known in local systems, one of the first crucial processing steps is the registration of these data into a common reference frame. To perform such a 3D transformation, commercial and open source software as well as programs from the academic community are available. Because these solutions show some shortcomings in terms of computational transparency and quality assessment, it was decided to develop the open source algorithm presented in this paper. It is dedicated to the simultaneous registration of multiple point clouds as well as their georeferencing. The idea is to use this algorithm as a starting point for further implementations, involving the possibility of combining 3D data from different sources. Parallel to the presentation of the global registration methodology which has been employed, the aim of this paper is to compare the results achieved this way with the above-mentioned existing solutions. For this purpose, first results obtained with the proposed algorithm to perform the global registration of ten laser scanning point clouds are presented. An analysis of the quality criteria delivered by two selected software packages used in this study, and a reflection on these criteria, is also performed to complete the comparison of the obtained results. The final aim of this paper is to validate the efficiency of the proposed method through these comparisons.
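
    One core building block of any such registration pipeline is the closed-form least-squares rigid transform between matched point sets (Kabsch/Umeyama, via SVD); the sketch below shows only this step, with correspondence search and simultaneous multi-cloud adjustment left out:

        import numpy as np

        def rigid_transform(P, Q):
            """Return R, t minimising sum ||R @ p + t - q||^2 over matched rows."""
            cP, cQ = P.mean(axis=0), Q.mean(axis=0)
            H = (P - cP).T @ (Q - cQ)                    # cross-covariance matrix
            U, _, Vt = np.linalg.svd(H)
            S = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
            R = Vt.T @ S @ U.T
            return R, cQ - R @ cP

        # Synthetic check: rotate and translate a cloud, then recover the motion.
        P = np.random.default_rng(2).normal(size=(100, 3))
        a = 0.3
        R_true = np.array([[np.cos(a), -np.sin(a), 0],
                           [np.sin(a),  np.cos(a), 0],
                           [0, 0, 1]])
        Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])
        R, t = rigid_transform(P, Q)
        print(np.allclose(R, R_true), np.round(t, 3))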

  12. Investigating the Accuracy of Point Clouds Generated for Rock Surfaces

    NASA Astrophysics Data System (ADS)

    Seker, D. Z.; Incekara, A. H.

    2016-12-01

    Point clouds produced by means of different techniques are widely used to model rocks and obtain properties of rock surfaces such as roughness, volume and area. These point clouds can be generated by applying laser scanning and close range photogrammetry techniques. Laser scanning is the most common method of producing point clouds: the laser scanner device produces a 3D point cloud at regular intervals. In close range photogrammetry, a point cloud can be produced with the help of photographs taken in appropriate conditions, depending on developing hardware and software technology. Many photogrammetric software packages, open source or not, currently support the generation of point clouds. Both methods are close to each other in terms of accuracy: sufficient accuracy in the mm and cm range can be obtained with the help of a qualified digital camera and laser scanner. In both methods, field work is completed in less time than with conventional techniques. In close range photogrammetry, any part of a rock surface can be completely represented owing to overlapping oblique photographs. Although the two methods yield comparable data, they are quite different in terms of cost. In this study, whether point clouds produced from photographs can be used instead of point clouds produced by a laser scanner device is investigated. For this purpose, rock surfaces with complex and irregular shapes located on the İstanbul Technical University Ayazaga Campus were selected as the study object. The selected object is a mixture of different rock types and consists of both partly weathered and fresh parts. The study was performed on a part of a 30 m x 10 m rock surface. 2D (area-based) and 3D (volume-based) analyses were performed for several regions selected from the point clouds of the surface models. The analyses showed that the two point clouds are similar and can be used as alternatives to each other. This demonstrates that point clouds produced from photographs, which are economical and can be generated in less time, can be used in several studies instead of point clouds produced by a laser scanner.

  13. A fast point-cloud computing method based on spatial symmetry of Fresnel field

    NASA Astrophysics Data System (ADS)

    Wang, Xiangxiang; Zhang, Kai; Shen, Chuan; Zhu, Wenliang; Wei, Sui

    2017-10-01

    Computer-generated holography (CGH) for real-time holographic video display is challenging because of the high spatial-bandwidth product (SBP) that must be produced. This paper builds on the point-cloud method and exploits two facts: Fresnel diffraction is reversible along the propagation direction, and the fringe pattern of a point source, known as a Gabor zone plate, has spatial symmetry, so it can serve as a basis for fast calculation of the diffraction field in CGH. A fast Fresnel CGH method based on the novel look-up table (N-LUT) method is proposed: first, the principal fringe patterns (PFPs) at the virtual plane are pre-calculated by the acceleration algorithm and stored; second, the Fresnel diffraction fringe pattern at the dummy plane is obtained; finally, the field is propagated from the dummy plane to the hologram plane. Simulation and optical experiments based on Liquid Crystal on Silicon (LCOS) demonstrate the validity of the proposed method: while preserving the quality of the 3D reconstruction, it shortens the computation time and improves computational efficiency.
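
    The spatial symmetry being exploited is easy to see numerically: the point-source fringe (a Gabor zone plate) depends only on the radial coordinate, so a single stored quadrant reconstructs the full pattern. The parameters below are illustrative, not taken from the paper:

        import numpy as np

        lam, z, pitch, N = 532e-9, 0.5, 8e-6, 1024   # wavelength, depth, pixel, size

        x = (np.arange(N) - (N - 1) / 2) * pitch     # grid centred on the source axis
        X, Y = np.meshgrid(x, x)
        r2 = X**2 + Y**2
        pfp = np.cos(np.pi * r2 / (lam * z))         # Gabor zone plate at depth z

        # The pattern depends on r^2 only, so the full table can be rebuilt from
        # one quadrant, cutting PFP storage by a factor of ~4.
        q = pfp[N // 2:, N // 2:]
        rebuilt = np.block([[q[::-1, ::-1], q[::-1, :]],
                            [q[:, ::-1],    q]])
        print(np.allclose(rebuilt, pfp))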

  14. In vivo quantitative imaging of point-like bioluminescent and fluorescent sources: Validation studies in phantoms and small animals post mortem

    NASA Astrophysics Data System (ADS)

    Comsa, Daria Craita

    2008-10-01

    There is a real need for improved small animal imaging techniques to enhance the development of therapies in which animal models of disease are used. Optical methods for imaging have been extensively studied in recent years, due to their high sensitivity and specificity. Methods like bioluminescence and fluorescence tomography report promising results for 3D reconstructions of source distributions in vivo. However, no standard methodology exists for optical tomography, and various groups are pursuing different approaches. In a number of studies on small animals, the bioluminescent or fluorescent sources can be reasonably approximated as point or line sources. Examples include images of bone metastases confined to the bone marrow. Starting with this premise, we propose a simpler, faster, and inexpensive technique to quantify optical images of point-like sources. The technique avoids the computational burden of a tomographic method by using planar images and a mathematical model based on diffusion theory. The model employs in situ optical properties estimated from video reflectometry measurements. Modeled and measured images are compared iteratively using a Levenberg-Marquardt algorithm to improve estimates of the depth and strength of the bioluminescent or fluorescent inclusion. The performance of the technique to quantify bioluminescence images was first evaluated on Monte Carlo simulated data. Simulated data also facilitated a methodical investigation of the effect of errors in tissue optical properties on the retrieved source depth and strength. It was found that, for example, an error of 4 % in the effective attenuation coefficient led to 4 % error in the retrieved depth for source depths of up to 12mm, while the error in the retrieved source strength increased from 5.5 % at 2mm depth, to 18 % at 12mm depth. Experiments conducted on images from homogeneous tissue-simulating phantoms showed that depths up to 10mm could be estimated within 8 %, and the relative source strength within 20 %. For sources 14mm deep, the inaccuracy in determining the relative source strength increased to 30 %. Measurements on small animals post mortem showed that the use of measured in situ optical properties to characterize heterogeneous tissue resulted in a superior estimation of the source strength and depth compared to when literature optical properties for organs or tissues were used. Moreover, it was found that regardless of the heterogeneity of the implant location or depth, our algorithm consistently showed an advantage over the simple assessment of the source strength based on the signal strength in the emission image. Our bioluminescence algorithm was generally able to predict the source strength within a factor of 2 of the true strength, but the performance varied with the implant location and depth. In fluorescence imaging a more complex technique is required, including knowledge of tissue optical properties at both the excitation and emission wavelengths. A theoretical study using simulated fluorescence data showed that, for example, for a source 5 mm deep in tissue, errors of up to 15 % in the optical properties would give rise to errors of +/-0.7 mm in the retrieved depth and the source strength would be over- or under-estimated by a factor ranging from 1.25 to 2. Fluorescent sources implanted in rats post mortem at the same depth were localized with an error just slightly higher than predicted theoretically: a root-mean-square value of 0.8 mm was obtained for all implants 5 mm deep. 
However, for this source depth, the source strength was assessed within a factor ranging from 1.3 to 4.2 from the value estimated in a controlled medium. Nonetheless, similarly to the bioluminescence study, the fluorescence quantification algorithm consistently showed an advantage over the simple assessment of the source strength based on the signal strength in the fluorescence image. Few studies have been reported in the literature that reconstruct known sources of bioluminescence or fluorescence in vivo or in heterogeneous phantoms. The few reported results show that the 3D tomographic methods have not yet reached their full potential. In this context, the simplicity of our technique emerges as a strong advantage.
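
    The quantification step described above can be sketched as a small nonlinear fit: model the planar signal of a point-like emitter at depth d with the diffusion-theory point-source solution, then refine (d, S) by least squares. The optical properties and data below are synthetic assumptions, and SciPy's bounded trust-region solver stands in for Levenberg–Marquardt:

        import numpy as np
        from scipy.optimize import least_squares

        def surface_signal(rho, d, S, mu_eff=1.0, D=0.03):   # assumed mm^-1, mm
            """Diffusion-theory point source: fluence ~ S*exp(-mu_eff*r)/(4*pi*D*r)."""
            r = np.sqrt(rho**2 + d**2)                       # source-to-surface distance
            return S * np.exp(-mu_eff * r) / (4 * np.pi * D * r)

        rho = np.linspace(0.0, 20.0, 80)                     # radial position (mm)
        data = surface_signal(rho, d=6.0, S=2.0)
        data *= 1 + 0.03 * np.random.default_rng(3).normal(size=rho.size)

        fit = least_squares(lambda p: surface_signal(rho, *p) - data,
                            x0=[3.0, 1.0], bounds=([0.1, 0.0], [20.0, np.inf]))
        print(f"depth ~ {fit.x[0]:.2f} mm, strength ~ {fit.x[1]:.2f}")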

  15. Active control on high-order coherence and statistic characterization on random phase fluctuation of two classical point sources.

    PubMed

    Hong, Peilong; Li, Liming; Liu, Jianji; Zhang, Guoquan

    2016-03-29

    Young's double-slit or two-beam interference is of fundamental importance for understanding various interference effects, in which the stationary phase difference between the two beams plays the key role in first-order coherence. Different from the case of first-order coherence, in high-order optical coherence the statistical behavior of the optical phase plays the key role. In this article, by employing a fundamental interfering configuration with two classical point sources, we show that the high-order optical coherence between two classical point sources can be actively designed by controlling the statistical behavior of the relative phase difference between the two point sources. Synchronous-position Nth-order subwavelength interference with an effective wavelength of λ/M was demonstrated, in which λ is the wavelength of the point sources and M is an integer not larger than N. Interestingly, we found that the synchronous-position Nth-order interference fringe fingerprints the statistical trace of the random phase fluctuation of the two classical point sources; therefore, it provides an effective way to characterize the statistical properties of phase fluctuation for incoherent light sources.

  16. Longitudinal patterns of youth access to cigarettes and smoking progression: Minnesota Adolescent Community Cohort (MACC) study (2000 – 2003)

    PubMed Central

    Widome, Rachel; Forster, Jean L.; Hannan, Peter J.; Perry, Cheryl L.

    2008-01-01

    OBJECTIVES To measure community-level changes in the methods youth use to obtain cigarettes over time and to relate these methods to the progression of smoking. METHODS We analyzed 2000-2003 data from the Minnesota Adolescent Community Cohort study, where youth (beginning at age 12), who were living in Minnesota at baseline, were surveyed every six months via telephone. We conducted mixed model repeated measures logistic regression to obtain probabilities of cigarette access methods among past 30-day smokers (n = 340 at baseline). RESULTS The probability of obtaining cigarettes from a commercial source in the past month declined from 0.36 at baseline to 0.22 at the sixth survey point while the probability of obtaining cigarettes from a social source during the previous month increased from 0.54 to 0.76 (p for both trends = 0.0001). At the community level, the likelihood of adolescents obtaining cigarettes from social sources was inversely related to the likelihood of progressing to heavy smoking (p < 0.001). CONCLUSIONS During this time, youth shifted to greater reliance on social sources and less on commercial sources. A trend toward less commercial access to cigarettes accompanied by an increase in social access may translate to youth being less likely to progress to heavier smoking. PMID:17719080

  17. Development of Vertical Cable Seismic System (3)

    NASA Astrophysics Data System (ADS)

    Asakawa, E.; Murakami, F.; Tsukahara, H.; Mizohata, S.; Ishikawa, K.

    2013-12-01

    The VCS (Vertical Cable Seismic) is one of the reflection seismic methods. It uses hydrophone arrays vertically moored from the seafloor to record acoustic waves generated by surface, deep-towed or ocean bottom sources. By analyzing the reflections from the sub-seabed, we can image the subsurface structure. Because VCS is an efficient high-resolution 3D seismic survey method for a spatially-bounded area, we proposed the method for the hydrothermal deposit survey tool development program that the Ministry of Education, Culture, Sports, Science and Technology (MEXT) started in 2009. We are now developing a VCS system, including not only data acquisition hardware but also data processing and analysis techniques. We carried out several VCS surveys combining surface-towed, deep-towed and ocean bottom sources. The water depths of the surveys range from 100 m up to 2100 m, and the targets include not only hydrothermal deposits but also oil and gas exploration. Through these experiments, our VCS data acquisition system has been completed, but the data processing techniques are still under development. One of the most critical issues is positioning in the water: uncertainty in the positions of the source and of the hydrophones degrades the quality of the subsurface image. GPS navigation is available at the sea surface, but for a deep-towed or ocean bottom source the accuracy of the shot position from SSBL/USBL is not sufficient for very high-resolution imaging. We have therefore developed another approach that determines the positions in the water from the travel time data between the source and the VCS hydrophones. In the data acquisition stage, we estimate the VCS location by slant ranging from the sea surface, the deep-towed or ocean bottom source position by SSBL/USBL, and the water velocity profile by XCTD. After the data acquisition, we pick the first break times of the VCS recorded data. The positions of shot points and receiver points estimated in the field contain errors, so we use them as initial guesses and iteratively invert the shot and receiver positions to match the travel time data. After several iterations we can estimate the most probable positions. Integrating constraints on the VCS hydrophone positions, such as the 10 m spacing, can accelerate the convergence of the iterative inversion and improve the results. The accuracy of the positions estimated from the travel time data is sufficient for the VCS data processing.
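
    The travel-time positioning step reduces to a small least-squares problem; the sketch below refines a shot position from first-break picks under an assumed uniform water velocity, with the cable layout and noise level as illustrative stand-ins:

        import numpy as np
        from scipy.optimize import least_squares

        def invert_shot_position(receivers, t_obs, x0, v=1500.0):
            """receivers: (n, 3) array of positions (m); t_obs: (n,) times (s)."""
            def residual(x):
                return np.linalg.norm(receivers - x, axis=1) / v - t_obs
            return least_squares(residual, x0).x

        # Three vertical cables at different (x, y) so the geometry is well posed.
        cables = [(0.0, 0.0), (300.0, 0.0), (0.0, 300.0)]
        receivers = np.array([[cx, cy, z] for cx, cy in cables
                              for z in range(1000, 1200, 50)], dtype=float)
        true_shot = np.array([150.0, -80.0, 900.0])
        t_obs = np.linalg.norm(receivers - true_shot, axis=1) / 1500.0
        t_obs += np.random.default_rng(4).normal(0.0, 1e-4, t_obs.size)  # picking noise
        print(invert_shot_position(receivers, t_obs, x0=[100.0, 0.0, 950.0]))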

  18. Performance improvement of continuous-variable quantum key distribution with an entangled source in the middle via photon subtraction

    NASA Astrophysics Data System (ADS)

    Guo, Ying; Liao, Qin; Wang, Yijun; Huang, Duan; Huang, Peng; Zeng, Guihua

    2017-03-01

    A suitable photon-subtraction operation can be exploited to improve the maximal transmission of continuous-variable quantum key distribution (CVQKD) in point-to-point quantum communication. Unfortunately, it is not obvious how photon subtraction can improve transmission in practical quantum networks, where the entangled source is located in a third party, which may be controlled by a malicious eavesdropper, instead of in one of the trusted parties controlled by Alice or Bob. In this paper, we show that a solution can come from using a non-Gaussian operation, in particular the photon-subtraction operation, which provides a method to enhance the performance of entanglement-based (EB) CVQKD. Photon subtraction not only lengthens the maximal transmission distance by increasing the signal-to-noise ratio but also can be easily implemented with existing technologies. Security analysis shows that applying photon subtraction to CVQKD with an entangled source in the middle (ESIM) can considerably increase the secure transmission distance in both direct and reverse reconciliation of the EB-CVQKD scheme, even if the entangled source originates from an untrusted party. Moreover, it can defend against the inner-source attack, which is a specific attack by an untrusted entangled source in the framework of ESIM.

  19. The efficient model to define a single light source position by use of high dynamic range image of 3D scene

    NASA Astrophysics Data System (ADS)

    Wang, Xu-yang; Zhdanov, Dmitry D.; Potemin, Igor S.; Wang, Ying; Cheng, Han

    2016-10-01

    One of the challenges of augmented reality is the seamless combination of objects of the real and virtual worlds, for example light sources. We suggest measurement and computation models for the reconstruction of a light source position. The model is based on the dependence of the luminance of a small diffuse surface on the position of the point-like source that directly illuminates it, placed at a short distance from the observer or camera. The advantage of the computational model is its ability to eliminate the effects of indirect illumination. The paper presents a number of examples to illustrate the efficiency and accuracy of the proposed method.

  20. Analysis of the load selection on the error of source characteristics identification for an engine exhaust system

    NASA Astrophysics Data System (ADS)

    Zheng, Sifa; Liu, Haitao; Dan, Jiabi; Lian, Xiaomin

    2015-05-01

    The linear time-invariant assumption for determining acoustic source characteristics, the source strength and the source impedance, in the frequency domain has been proved reasonable in the design of an exhaust system. Different methods have been proposed for this identification, and the multi-load method is widely used for its convenience, varying the load number and impedance. Theoretical error analysis has rarely been addressed, although previous results have shown that an overdetermined set of open pipes can reduce the identification error. This paper contributes a theoretical error analysis for the load selection. The relationships between the error in the identification of the source characteristics and the load selection were analysed. A general linear time-invariant model was built based on the four-load method. To analyse the error of the source impedance, an error estimation function was proposed. The dispersion of the source pressure was obtained by an inverse calculation as an indicator of the accuracy of the results. It was found that, for a given load length, the load resistance peaks at the frequencies where the load length equals an odd multiple of a quarter wavelength, and these peaks produce the maximum error in the source impedance identification. Therefore, load impedances in the frequency ranges around these odd quarter-wavelength resonances should not be used for source impedance identification. If the selected loads have more similar resistance values (i.e., of the same order of magnitude), the identification error of the source impedance can be effectively reduced.
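
    At a single frequency the multi-load identification is linear in the unknowns: each measured pressure p_i with known load Z_i satisfies p_i (Z_s + Z_i) = p_s Z_i, so four or more loads give an overdetermined linear system. A synthetic sketch:

        import numpy as np

        rng = np.random.default_rng(5)
        p_s, Z_s = 2.0 + 1.0j, 0.8 - 0.4j                # "true" source, unknown to us
        Z = rng.normal(1.0, 0.3, 4) + 1j * rng.normal(0.0, 0.3, 4)  # four known loads
        p = p_s * Z / (Z_s + Z)                          # measured load pressures
        p *= 1 + 0.01 * rng.normal(size=4)               # measurement noise

        # Rearranged: p_i * Z_s - Z_i * p_s = -p_i * Z_i, linear in (Z_s, p_s).
        A = np.column_stack([p, -Z])
        b = -p * Z
        Z_s_hat, p_s_hat = np.linalg.lstsq(A, b, rcond=None)[0]
        print(Z_s_hat, p_s_hat)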

  1. Vehicle routing for the eco-efficient collection of household plastic waste.

    PubMed

    Bing, Xiaoyun; de Keizer, Marlies; Bloemhof-Ruwaard, Jacqueline M; van der Vorst, Jack G A J

    2014-04-01

    Plastic waste is a special category of municipal solid waste. Plastic waste collection offers various alternatives for collection methods (curbside/drop-off) and separation methods (source-/post-separation). In the Netherlands, the collection routes for plastic waste are the same as those for other waste, although plastic differs from other waste in its volume-to-weight ratio. This paper aims to redesign the collection routes and compares the collection options for plastic waste using eco-efficiency as the performance indicator. Eco-efficiency concerns the trade-off between environmental impacts, social issues and costs. The collection problem is modeled as a vehicle routing problem. A tabu search heuristic is used to improve the routes. Collection alternatives are compared by a scenario study approach. Real distances between locations are calculated with MapPoint. The scenario study is conducted on real case data from the Dutch municipality of Wageningen. Scenarios are designed according to the collection alternatives, with different assumptions about collection method, vehicle type, collection frequency, collection points, etc. Results show that the current collection routes can be improved in terms of eco-efficiency performance by using our method. The source-separation drop-off collection scenario has the best performance for plastic collection, assuming householders take the waste to the drop-off points in a sustainable manner. The model also proves to be an efficient decision support tool for investigating the impacts of future changes such as alternative vehicle types and different response rates. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.
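
    To make the route-improvement idea concrete, the sketch below runs a plain 2-opt pass on a single route; the paper's tabu search operates on the full vehicle routing problem, so this is a simplified stand-in with synthetic coordinates:

        import numpy as np

        def route_length(route, dist):
            return sum(dist[route[i], route[i + 1]] for i in range(len(route) - 1))

        def two_opt(route, dist):
            """Repeatedly reverse segments while that shortens the route."""
            improved = True
            while improved:
                improved = False
                for i in range(1, len(route) - 2):
                    for j in range(i + 1, len(route) - 1):
                        new = route[:i] + route[i:j + 1][::-1] + route[j + 1:]
                        if route_length(new, dist) < route_length(route, dist):
                            route, improved = new, True
            return route

        rng = np.random.default_rng(6)
        pts = rng.uniform(0, 10, size=(12, 2))            # collection points (km)
        dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
        tour = list(range(12)) + [0]                      # depot = point 0, closed tour
        print(route_length(two_opt(tour, dist), dist))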

  2. Sound Sources Identified in High-Speed Jets by Correlating Flow Density Fluctuations With Far-Field Noise

    NASA Technical Reports Server (NTRS)

    Panda, Jayanta; Seasholtz, Richard G.

    2003-01-01

    Noise sources in high-speed jets were identified by directly correlating flow density fluctuation (cause) with far-field sound pressure fluctuation (effect). The experimental study was performed in a nozzle facility at the NASA Glenn Research Center in support of NASA's initiative to reduce the noise emitted by commercial airplanes. Previous efforts to use this correlation method had failed because the tools for measuring jet turbulence were intrusive. In the present experiment, a molecular Rayleigh-scattering technique was used that depends on laser light scattering by gas molecules in air. The technique allowed accurate measurement of air density fluctuations at different points in the plume. The study was conducted in shock-free, unheated jets of Mach numbers 0.95, 1.4, and 1.8. The turbulent motion, as evident from the density fluctuation spectra, was remarkably similar in all three jets, whereas the noise sources were significantly different. The correlation study was conducted by keeping a microphone at a fixed location (at the peak noise emission angle of 30 deg to the jet axis and 50 nozzle diameters away) while moving the laser probe volume from point to point in the flow. Maps of the nondimensional coherence at different Strouhal frequencies (frequency x diameter/jet speed) were obtained for the supersonic Mach 1.8 and subsonic Mach 0.95 jets: the higher the coherence, the stronger the source.
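
    The cause-effect metric here is the spectral coherence between the two signals; a toy version with a synthetic delayed, noisy "pressure" record shows the computation (all values are arbitrary illustrations):

        import numpy as np
        from scipy.signal import coherence

        fs = 10_000                                   # sampling rate (Hz)
        t = np.arange(0, 2.0, 1 / fs)
        rng = np.random.default_rng(9)
        density = rng.normal(size=t.size)             # "flow density" signal
        # Far-field "pressure": delayed copy buried in uncorrelated noise.
        pressure = np.roll(density, 150) + 2.0 * rng.normal(size=t.size)
        f, Cxy = coherence(density, pressure, fs=fs, nperseg=1024)
        print(Cxy.mean())                             # nonzero despite the noise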

  3. Atmospheric inverse modeling via sparse reconstruction

    NASA Astrophysics Data System (ADS)

    Hase, Nils; Miller, Scot M.; Maaß, Peter; Notholt, Justus; Palm, Mathias; Warneke, Thorsten

    2017-10-01

    Many applications in atmospheric science involve ill-posed inverse problems. A crucial component of many inverse problems is the proper formulation of a priori knowledge about the unknown parameters. In most cases, this knowledge is expressed as a Gaussian prior. This formulation often performs well at capturing smoothed, large-scale processes but is often ill equipped to capture localized structures like large point sources or localized hot spots. Over the last decade, scientists from a diverse array of applied mathematics and engineering fields have developed sparse reconstruction techniques to identify localized structures. In this study, we present a new regularization approach for ill-posed inverse problems in atmospheric science. It is based on Tikhonov regularization with sparsity constraint and allows bounds on the parameters. We enforce sparsity using a dictionary representation system. We analyze its performance in an atmospheric inverse modeling scenario by estimating anthropogenic US methane (CH4) emissions from simulated atmospheric measurements. Different measures indicate that our sparse reconstruction approach is better able to capture large point sources or localized hot spots than other methods commonly used in atmospheric inversions. It captures the overall signal equally well but adds details on the grid scale. This feature can be of value for any inverse problem with point or spatially discrete sources. We show an example for source estimation of synthetic methane emissions from the Barnett shale formation.
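
    A toy version of a sparsity-promoting inversion: recover a few nonnegative point sources from fewer noisy measurements than unknowns. Plain nonnegative l1-regularised regression stands in here for the paper's dictionary-based Tikhonov-with-sparsity scheme:

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(7)
        n, m = 200, 60                              # unknown grid cells, measurements
        K = rng.normal(size=(m, n))                 # transport/footprint matrix
        x_true = np.zeros(n)
        x_true[[20, 97, 150]] = [3.0, 5.0, 2.0]     # three point sources
        b = K @ x_true + 0.05 * rng.normal(size=m)  # noisy observations

        model = Lasso(alpha=0.05, positive=True, max_iter=50_000)
        model.fit(K, b)
        print(np.nonzero(model.coef_ > 0.1)[0])     # recovered source locations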

  4. On the Assessment of Acoustic Scattering and Shielding by Time Domain Boundary Integral Equation Solutions

    NASA Technical Reports Server (NTRS)

    Hu, Fang Q.; Pizzo, Michelle E.; Nark, Douglas M.

    2016-01-01

    Based on the time domain boundary integral equation formulation of the linear convective wave equation, a computational tool dubbed Time Domain Fast Acoustic Scattering Toolkit (TD-FAST) has recently been under development. The time domain approach has a distinct advantage that the solutions at all frequencies are obtained in a single computation. In this paper, the formulation of the integral equation, as well as its stabilization by the Burton-Miller type reformulation, is extended to cases of a constant mean flow in an arbitrary direction. In addition, a "Source Surface" is also introduced in the formulation that can be employed to encapsulate regions of noise sources and to facilitate coupling with CFD simulations. This is particularly useful for applications where the noise sources are not easily described by analytical source terms. Numerical examples are presented to assess the accuracy of the formulation, including a computation of noise shielding by a thin barrier motivated by recent Historical Baseline F31A31 open rotor noise shielding experiments. Furthermore, spatial resolution requirements of the time domain boundary element method are also assessed using point per wavelength metrics. It is found that, using only constant basis functions and high-order quadrature for surface integration, relative errors of less than 2% may be obtained when the surface spatial resolution is 5 points-per-wavelength (PPW) or 25 points-per-wavelength squared (PPW2).
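
    The points-per-wavelength requirement translates directly into a maximum surface element size; a small bookkeeping sketch follows, with an illustrative speed of sound and target frequency.

    ```python
    # Maximum element size for a target points-per-wavelength resolution
    # at the highest frequency of interest (values illustrative).
    c = 340.0          # speed of sound, m/s
    f_max = 2000.0     # highest frequency to resolve, Hz
    ppw = 5            # points per wavelength, per the reported ~2% error level
    h_max = c / (f_max * ppw)   # maximum surface spacing, m
    print(f"lambda = {c / f_max:.3f} m, h_max = {h_max:.4f} m, {ppw**2} PPW^2")
    ```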

  5. VizieR Online Data Catalog: Rotation measures of radio point sources (Xu+, 2014)

    NASA Astrophysics Data System (ADS)

    Xu, J.; Han, J.-L.

    2015-04-01

    We compiled a catalog of Faraday rotation measures (RMs) for 4553 extragalactic radio point sources published in the literature. These RMs were derived from multi-frequency polarization observations. The RM data are compared to those in the NRAO VLA Sky Survey (NVSS) RM catalog. We reveal a systematic uncertainty of about 10.0 +/- 1.5 rad/m2 in the NVSS RM catalog. The Galactic foreground RM is calculated through a weighted averaging method by using the compiled RM catalog together with the NVSS RM catalog, with careful consideration of uncertainties in the RM data. The data from the catalog and the interface for the Galactic foreground RM calculations are publicly available at http://zmtt.bao.ac.cn/RM/ . (2 data files).
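
    A weighted averaging of RMs of the kind used for foreground maps can be sketched with inverse-variance weights; the sample values below are invented.

    ```python
    # Inverse-variance weighted mean of rotation measures with individual
    # uncertainties; the numbers are made-up examples.
    import numpy as np

    rm = np.array([12.0, 15.5, 9.8, 13.2])      # rad/m^2
    sigma = np.array([2.0, 4.0, 1.5, 3.0])      # 1-sigma uncertainties

    w = 1.0 / sigma**2
    rm_mean = np.sum(w * rm) / np.sum(w)
    rm_err = 1.0 / np.sqrt(np.sum(w))
    print(f"foreground RM = {rm_mean:.1f} +/- {rm_err:.1f} rad/m^2")
    ```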

  6. Method and apparatus for improving resolution in spectrometers processing output steps from non-ideal signal sources

    DOEpatents

    Warburton, William K.; Momayezi, Michael

    2006-06-20

    A method and apparatus for processing step-like output signals (primary signals) generated by non-ideal, for example nominally single-pole ("N-1P"), devices. An exemplary method includes creating a set of secondary signals by directing the primary signal along a plurality of signal paths to a signal summation point; summing the secondary signals reaching the signal summation point after propagating along the signal paths to provide a summed signal; performing a filtering or delaying operation in at least one of said signal paths so that the secondary signals reaching said summing point have a defined time correlation with respect to one another; applying a set of weighting coefficients to the secondary signals propagating along said signal paths; and performing a capturing operation after any filtering or delaying operations so as to provide a weighted signal sum value as a measure of the integrated area QgT of the input signal.

  7. Ultra-performance liquid chromatography/tandem mass spectrometric quantification of structurally diverse drug mixtures using an ESI-APCI multimode ionization source.

    PubMed

    Yu, Kate; Di, Li; Kerns, Edward; Li, Susan Q; Alden, Peter; Plumb, Robert S

    2007-01-01

    We report in this paper an ultra-performance liquid chromatography/tandem mass spectrometric (UPLC®/MS/MS) method utilizing an ESI-APCI multimode ionization source to quantify structurally diverse analytes. Eight commercial drugs were used as test compounds. Each LC injection was completed in 1 min using a UPLC system coupled with MS/MS multiple reaction monitoring (MRM) detection. Results from three separate sets of experiments are reported. In the first set, the eight test compounds were analyzed as a single mixture, with the mass spectrometer switching rapidly among four ionization modes (ESI+, ESI-, APCI-, and APCI+) during each LC run. Approximately 8-10 data points were collected across each LC peak, which was insufficient for quantitative analysis. In the second set, four compounds were analyzed as a single mixture, again with the mass spectrometer switching rapidly among the four ionization modes during each LC run. Approximately 15 data points were obtained for each LC peak, and quantification results were obtained with a limit of detection (LOD) as low as 0.01 ng/mL. In the third set, the eight test compounds were analyzed as a batch: during each LC injection a single compound was analyzed, with the mass spectrometer operating in a single ionization mode. More than 20 data points were obtained for each LC peak, and quantification results were also obtained. This single-compound analytical method was applied to a microsomal stability test. Compared with a typical HPLC method currently used for the microsomal stability test, the injection-to-injection cycle time was reduced from 3.5 min (HPLC method) to 1.5 min (UPLC method). The microsomal stability results were comparable with those obtained by traditional HPLC/MS/MS.

  8. Multiple point statistical simulation using uncertain (soft) conditional data

    NASA Astrophysics Data System (ADS)

    Hansen, Thomas Mejer; Vu, Le Thanh; Mosegaard, Klaus; Cordua, Knud Skou

    2018-05-01

    Geostatistical simulation methods have been used to quantify spatial variability of reservoir models since the 1980s. In the last two decades, state-of-the-art simulation methods have changed from being based on covariance-based two-point statistics to multiple-point statistics (MPS), which allow simulation of more realistic Earth structures. In addition, increasing amounts of geo-information (geophysical, geological, etc.) from multiple sources are being collected. This poses the problem of integrating these different sources of information, such that decisions related to reservoir models can be taken on as informed a basis as possible. In principle, though difficult in practice, this can be achieved using computationally expensive Monte Carlo methods. Here we investigate the use of sequential-simulation-based MPS methods conditional to uncertain (soft) data as a computationally efficient alternative. First, it is demonstrated that current implementations of sequential simulation based on MPS (e.g. SNESIM, ENESIM and Direct Sampling) do not properly account for uncertain conditional information, due to a combination of using only co-located information and a random simulation path. Then, we suggest two approaches that better account for the available uncertain information. The first makes use of a preferential simulation path, where more informed model parameters are visited before less informed ones. The second approach involves using non-co-located uncertain information. For different types of available data, these approaches are demonstrated to produce simulation results similar to those obtained by the general Monte Carlo based approach. These methods allow MPS simulation to condition properly to uncertain (soft) data, and hence provide a computationally attractive approach for integrating information about a reservoir model.

  9. Modeling, control, and simulation of grid connected intelligent hybrid battery/photovoltaic system using new hybrid fuzzy-neural method.

    PubMed

    Rezvani, Alireza; Khalili, Abbas; Mazareie, Alireza; Gandomkar, Majid

    2016-07-01

    Nowadays, photovoltaic (PV) generation is growing increasingly fast as a renewable energy source. Nevertheless, the drawback of the PV system is its dependence on weather conditions. Therefore, battery energy storage (BES) can be added to provide a stable and reliable output from the PV generation system and to improve the dynamic performance of the whole generation system in grid-connected mode. In this paper, a novel topology of intelligent hybrid generation systems with PV and BES in a DC-coupled structure is presented. Each photovoltaic cell has a specific point, named the maximum power point, on its operational curve (i.e. current-voltage or power-voltage curve) at which it can generate maximum power. Irradiance and temperature changes affect these operational curves. The nonlinear dependence of the maximum power point on environmental conditions has therefore led to the development of different maximum power point tracking techniques. In order to capture the maximum power point (MPP), a hybrid fuzzy-neural maximum power point tracking (MPPT) method is applied in the PV system. The obtained results demonstrate the effectiveness and superiority of the proposed method; the average tracking efficiency of the hybrid fuzzy-neural approach is about two percentage points higher than that of the conventional methods. It has the advantages of robustness, fast response and good performance. A detailed mathematical model and a control approach for a three-phase grid-connected intelligent hybrid system have been developed using Matlab/Simulink. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
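
    The abstract does not specify the conventional baselines; perturb-and-observe is a common one, sketched here with an invented single-peak power-voltage curve rather than a real cell model.

    ```python
    # Perturb-and-observe MPPT baseline (a common conventional tracker, not
    # the paper's fuzzy-neural method). The PV curve and step size are toys.
    def pv_power(v, irradiance=1.0):
        # Toy power-voltage curve with a single maximum (not a real cell model)
        return irradiance * v * max(0.0, 1.0 - (v / 40.0) ** 8)

    def perturb_and_observe(v0=20.0, step=0.5, iters=100):
        v, p = v0, pv_power(v0)
        direction = 1.0
        for _ in range(iters):
            v_new = v + direction * step
            p_new = pv_power(v_new)
            if p_new < p:               # power dropped: reverse the perturbation
                direction = -direction
            v, p = v_new, p_new
        return v, p

    v_mpp, p_mpp = perturb_and_observe()   # oscillates around the MPP
    ```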

  10. An analytical approach to gravitational lensing by an ensemble of axisymmetric lenses

    NASA Technical Reports Server (NTRS)

    Lee, Man Hoi; Spergel, David N.

    1990-01-01

    The problem of gravitational lensing by an ensemble of identical axisymmetric lenses randomly distributed on a single lens plane is considered and a formal expression is derived for the joint probability density of finding shear and convergence at a random point on the plane. The amplification probability for a source can be accurately estimated from the distribution in shear and convergence. This method is applied to two cases: lensing by an ensemble of point masses and by an ensemble of objects with Gaussian surface mass density. There is no convergence for point masses whereas shear is negligible for wide Gaussian lenses.

  11. Method and system for controlling the position of a beam of light

    DOEpatents

    Steinkraus, Jr., Robert F.; Johnson, Gary W [Livermore, CA; Ruggiero, Anthony J [Livermore, CA

    2011-08-09

    A method and system for laser beam tracking and pointing is based on a conventional position sensing detector (PSD) or quadrant cell but with the use of amplitude-modulated light. A combination of logarithmic automatic gain control, filtering, and synchronous detection offers high angular precision with exceptional dynamic range and sensitivity, while maintaining wide bandwidth. Use of modulated light enables the tracking of multiple beams simultaneously through the use of different modulation frequencies. It also makes the system resistant to interfering light sources such as ambient light. Beam pointing is accomplished by feeding back errors in the measured beam position to a beam steering element, such as a steering mirror. Closed-loop tracking performance is superior to existing methods, especially under conditions of atmospheric scintillation.
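
    The synchronous-detection step can be sketched as quadrature demodulation followed by averaging; the modulation frequency, sample rate and interference below are placeholders.

    ```python
    # Synchronous detection: recover the amplitude of one modulated beam on
    # a detector channel in the presence of ambient light. Values are toys.
    import numpy as np

    fs, f_mod = 50_000.0, 1_000.0        # sample rate and modulation frequency, Hz
    t = np.arange(0, 0.1, 1 / fs)
    signal = 0.8 * np.sin(2 * np.pi * f_mod * t)        # modulated beam
    ambient = 0.5 + 0.3 * np.sin(2 * np.pi * 120 * t)   # interfering light
    x = signal + ambient + 0.05 * np.random.randn(t.size)

    # Multiply by quadrature references and average (low-pass) to demodulate
    i_comp = np.mean(x * np.sin(2 * np.pi * f_mod * t)) * 2
    q_comp = np.mean(x * np.cos(2 * np.pi * f_mod * t)) * 2
    amplitude = np.hypot(i_comp, q_comp)   # ~0.8, insensitive to ambient light
    ```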

  12. A Search Technique for Weak and Long-Duration Gamma-Ray Bursts from Background Model Residuals

    NASA Technical Reports Server (NTRS)

    Skelton, R. T.; Mahoney, W. A.

    1993-01-01

    We report on a planned search technique for Gamma-Ray Bursts too weak to trigger the on-board threshold. The technique is to search residuals from a physically based background model used for analysis of point sources by the Earth occultation method.

  13. Serendipitous observations of asteroids in Herschel PACS and SPIRE maps

    NASA Astrophysics Data System (ADS)

    Szakáts, R.; Kiss, Cs.; Marton, G.; Varga-Verebélyi, E.; Müller, T.; Pál, A.

    2017-09-01

    We present our methods and results in finding serendipitous solar system objects on Herschel PACS and SPIRE maps. We can use this data to supplement the Herschel PACS and SPIRE point source catalogs with flags of possible contamination and to obtain thermal infrared fluxes for these asteroids.

  14. 40 CFR 406.11 - Specialized definitions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... STANDARDS GRAIN MILLS POINT SOURCE CATEGORY Corn Wet Milling Subcategory § 406.11 Specialized definitions... and methods of analysis set forth in 40 CFR part 401 shall apply to this subpart. (b) The term corn shall mean the shelled corn delivered to a plant before processing. (c) The term standard bushel shall...

  15. SIMULATIONS OF AEROSOLS AND PHOTOCHEMICAL SPECIES WITH THE CMAQ PLUME-IN-GRID MODELING SYSTEM

    EPA Science Inventory

    A plume-in-grid (PinG) method has been an integral component of the CMAQ modeling system and has been designed in order to realistically simulate the relevant processes impacting pollutant concentrations in plumes released from major point sources. In particular, considerable di...

  16. 40 CFR 63.11466 - What are the performance test requirements for new and existing sources?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... (Appendix A-1) to select sampling port locations and the number of traverse points in each stack or duct... of the stack gas. (iii) Method 3, 3A, or 3B (Appendix A-2) to determine the dry molecular weight of...

  17. Problems in depth perception : a method of simulating objects moving in depth.

    DOT National Transportation Integrated Search

    1965-12-01

    Equations were developed for the simulation on a screen of the movement of an object or surface toward or away from an observer by the movement of a positive photographic transparency of the object or surface away or toward a point source. The genera...

  18. Arima model and exponential smoothing method: A comparison

    NASA Astrophysics Data System (ADS)

    Wan Ahmad, Wan Kamarul Ariffin; Ahmad, Sabri

    2013-04-01

    This study compares the Autoregressive Integrated Moving Average (ARIMA) model and the Exponential Smoothing Method in making predictions. The comparison focuses on the ability of both methods to make forecasts with different numbers of data sources and different lengths of forecasting period. For this purpose, data for the Price of Crude Palm Oil (RM/tonne), the Exchange Rate of the Ringgit Malaysia (RM) against the Great Britain Pound (GBP), and the Price of SMR 20 Rubber (cents/kg), with three different time series, are used in the comparison. The forecasting accuracy of each model is then measured by examining the prediction errors, using the Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE), and Mean Absolute Deviation (MAD). The study shows that the ARIMA model can produce a better prediction for long-term forecasting with limited data sources, but cannot produce a better prediction for time series with a narrow range from one point to another, as in the Exchange Rate series. On the contrary, the Exponential Smoothing Method produces better forecasts for the Exchange Rate, which has a narrow range from one point to another in its time series, while it cannot produce a better prediction for a longer forecasting period.
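
    For reference, the three accuracy measures used in the comparison can be written as plain functions (assuming nonzero actual values for MAPE):

    ```python
    # Forecast accuracy metrics used in the study.
    import numpy as np

    def mse(actual, forecast):
        return np.mean((np.asarray(actual) - np.asarray(forecast)) ** 2)

    def mape(actual, forecast):
        a, f = np.asarray(actual, float), np.asarray(forecast, float)
        return 100.0 * np.mean(np.abs((a - f) / a))   # actuals must be nonzero

    def mad(actual, forecast):
        return np.mean(np.abs(np.asarray(actual) - np.asarray(forecast)))
    ```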

  19. Regional W-Phase Source Inversion for Moderate to Large Earthquakes in China and Neighboring Areas

    NASA Astrophysics Data System (ADS)

    Zhao, Xu; Duputel, Zacharie; Yao, Zhenxing

    2017-12-01

    Earthquake source characterization has been significantly sped up in the last decade with the development of rapid inversion techniques in seismology. Among these techniques, the W-phase source inversion method quickly provides point source parameters of large earthquakes using very long period seismic waves recorded at teleseismic distances. Although the W-phase method was initially developed to work at global scale (within 20 to 30 min after the origin time), faster results can be obtained when seismological data are available at regional distances (i.e., Δ ≤ 12°). In this study, we assess the use and reliability of regional W-phase source estimates in China and neighboring areas. Our implementation uses broadband records from the Chinese network supplemented by global seismological stations installed in the region. Using this data set and minor modifications to the W-phase algorithm, we show that reliable solutions can be retrieved automatically within 4 to 7 min after the earthquake origin time. Moreover, the method yields stable results down to Mw = 5.0 events, which is well below the size of earthquakes that are rapidly characterized using W-phase inversions at teleseismic distances.

  20. Optimal simultaneous superpositioning of multiple structures with missing data.

    PubMed

    Theobald, Douglas L; Steindel, Phillip A

    2012-08-01

    Superpositioning is an essential technique in structural biology that facilitates the comparison and analysis of conformational differences among topologically similar structures. Performing a superposition requires a one-to-one correspondence, or alignment, of the point sets in the different structures. However, in practice, some points are usually 'missing' from several structures, for example, when the alignment contains gaps. Current superposition methods deal with missing data simply by superpositioning a subset of points that are shared among all the structures. This practice is inefficient, as it ignores important data, and it fails to satisfy the common least-squares criterion. In the extreme, disregarding missing positions prohibits the calculation of a superposition altogether. Here, we present a general solution for determining an optimal superposition when some of the data are missing. We use the expectation-maximization algorithm, a classic statistical technique for dealing with incomplete data, to find both maximum-likelihood solutions and the optimal least-squares solution as a special case. The methods presented here are implemented in THESEUS 2.0, a program for superpositioning macromolecular structures. ANSI C source code and selected compiled binaries for various computing platforms are freely available under the GNU open source license from http://www.theseus3d.org. dtheobald@brandeis.edu Supplementary data are available at Bioinformatics online.

  1. Dem Generation from Close-Range Photogrammetry Using Extended Python Photogrammetry Toolbox

    NASA Astrophysics Data System (ADS)

    Belmonte, A. A.; Biong, M. M. P.; Macatulad, E. G.

    2017-10-01

    Digital elevation models (DEMs) are widely used raster data for applications concerning terrain, such as flood modelling, viewshed analysis, mining, land development, and engineering design projects, to name a few. DEMs can be obtained through various methods, including topographic survey, LiDAR or photogrammetry, and internet sources. Terrestrial close-range photogrammetry is one alternative method to produce DEMs through the processing of images using photogrammetry software. Powerful photogrammetry software that can produce high-accuracy DEMs is already commercially available; however, this entails a corresponding cost. Although some of this software offers free or demo trials, the trials are limited in usable features and usage time. One alternative is the use of free and open-source software (FOSS), such as the Python Photogrammetry Toolbox (PPT), which provides an interface for performing photogrammetric processes implemented through Python scripts. For relatively small areas, such as in mining or construction excavation, a relatively inexpensive, fast and accurate method would be advantageous. In this study, PPT was used to generate 3D point cloud data from images of an open-pit excavation. PPT was extended with an algorithm converting the generated point cloud data into a usable DEM.
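
    The extension's algorithm is not described in the abstract; a generic gridding approach, binning points into cells and taking the mean elevation per cell, is sketched below with an arbitrary cell size.

    ```python
    # Generic point-cloud-to-DEM gridding sketch (not the PPT extension's
    # published algorithm): mean elevation per raster cell.
    import numpy as np

    def points_to_dem(xyz, cell=0.5):
        x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
        ix = ((x - x.min()) / cell).astype(int)
        iy = ((y - y.min()) / cell).astype(int)
        nx, ny = ix.max() + 1, iy.max() + 1
        dem = np.full((ny, nx), np.nan)
        counts = np.zeros((ny, nx))
        sums = np.zeros((ny, nx))
        np.add.at(sums, (iy, ix), z)     # accumulate elevations per cell
        np.add.at(counts, (iy, ix), 1)
        mask = counts > 0
        dem[mask] = sums[mask] / counts[mask]   # cells with no points stay NaN
        return dem
    ```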

  2. A New Method for Calculating Counts in Cells

    NASA Astrophysics Data System (ADS)

    Szapudi, István

    1998-04-01

    In the near future, a new generation of CCD-based galaxy surveys will enable high-precision determination of the N-point correlation functions. The resulting information will help to resolve the ambiguities associated with two-point correlation functions, thus constraining theories of structure formation, biasing, and Gaussianity of initial conditions independently of the value of Ω. As one of the most successful methods of extracting the amplitude of higher order correlations is based on measuring the distribution of counts in cells, this work presents an advanced way of measuring it with unprecedented accuracy. Szapudi & Colombi identified the main sources of theoretical errors in extracting counts in cells from galaxy catalogs. One of these sources, termed as measurement error, stems from the fact that conventional methods use a finite number of sampling cells to estimate counts in cells. This effect can be circumvented by using an infinite number of cells. This paper presents an algorithm, which in practice achieves this goal; that is, it is equivalent to throwing an infinite number of sampling cells in finite time. The errors associated with sampling cells are completely eliminated by this procedure, which will be essential for the accurate analysis of future surveys.

  3. Experimental Evaluation of the High-Speed Motion Vector Measurement by Combining Synthetic Aperture Array Processing with Constrained Least Square Method

    NASA Astrophysics Data System (ADS)

    Yokoyama, Ryouta; Yagi, Shin-ichi; Tamura, Kiyoshi; Sato, Masakazu

    2009-07-01

    Ultrahigh-speed dynamic elastography has promising potential for clinical diagnosis and therapy of living soft tissues. In order to realize ultrahigh-speed motion tracking at over a thousand frames per second, synthetic aperture (SA) array signal processing technology must be introduced. Furthermore, the overall system performance should withstand fine quantitative evaluation of the accuracy and variance of echo phase changes distributed across a tissue medium. For spatial evaluation of the local phase changes caused by pulsed excitation of a tissue phantom, the proposed SA system was investigated utilizing different virtual point sources, generated by an array transducer, to probe each component of the local tissue displacement vectors. The final results derived from the cross-correlation method (CCM) showed almost the same performance as the constrained least square method (LSM) extended to successive echo frames. These frames were reconstructed by SA processing after real-time acquisition triggered by the pulsed irradiation from a point source. The continuous behavior of the spatial motion vectors demonstrated the dynamic generation and traveling of the pulsed shear wave at one thousand frames per second.

  4. Computer simulation of reconstructed image for computer-generated holograms

    NASA Astrophysics Data System (ADS)

    Yasuda, Tomoki; Kitamura, Mitsuru; Watanabe, Masachika; Tsumuta, Masato; Yamaguchi, Takeshi; Yoshikawa, Hiroshi

    2009-02-01

    This report presents the results of computer-simulated images for image-type Computer-Generated Holograms (CGHs) observable under white light, fabricated with an electron beam lithography system. The simulated image is obtained by calculating the wavelength and intensity of the diffracted light traveling toward the viewing point from the CGH. Wavelength and intensity of the diffracted light are calculated using an FFT image generated from the interference fringe data. A parallax image of the CGH corresponding to the viewing point can be easily obtained using this simulation method. The simulated image from the interference fringe data was compared with the reconstructed image of a real CGH made with an electron beam (EB) lithography system. According to the results, the simulated image closely resembled the reconstructed image of the CGH in shape, parallax, coloring and shade. In addition, depending on the shape of the light sources, the simulated images varied in chroma saturation and blur under the two kinds of simulation: the several-light-sources method and the smoothing method. Furthermore, as applications of the CGH, a full-color CGH and a CGH with multiple images were simulated; the simulated images of those CGHs also closely resembled the reconstructed images of the real CGHs.

  5. Extracting the Essential Cartographic Functionality of Programs on the Web

    NASA Astrophysics Data System (ADS)

    Ledermann, Florian

    2018-05-01

    Following Aristotle, F. P. Brooks (1987) emphasizes the distinction between "essential difficulties" and "accidental difficulties" as a key challenge in software engineering. From the point of view of cartography, it would be desirable to identify the cartographic essence of a program and subject it to additional scrutiny, while its accidental properties, again from the point of view of cartography, are usually of lesser relevance to cartographic analysis. In this paper, two methods that facilitate extracting the cartographic essence of programs are presented: close reading of their source code, and automated analysis of their runtime behavior. The advantages and shortcomings of both methods are discussed, followed by an outlook on future developments and potential applications.

  6. Method for determining size of inhomogeneity localization region based on analysis of secondary wave field of second harmonic

    NASA Astrophysics Data System (ADS)

    Chernov, N. N.; Zagray, N. P.; Laguta, M. V.; Varenikova, A. Yu

    2018-05-01

    The article describes a method for localizing and determining the size of an inhomogeneity in biological tissues. The equation for an acoustic harmonic wave propagating in the positive direction is taken as the starting point. A three-dimensional expression describing the field of secondary sources at the observation point is obtained. The change in the amplitude of the vibrational velocity of the second harmonic of the acoustic wave is simulated for different coordinates of the inhomogeneity in three-dimensional space. For convenience of the mathematical calculations, the region of heterogeneity is reduced to a point.

  7. Dosimetry of 192Ir sources used for endovascular brachytherapy

    NASA Astrophysics Data System (ADS)

    Reynaert, N.; Van Eijkeren, M.; Taeymans, Y.; Thierens, H.

    2001-02-01

    An in-phantom calibration technique for 192Ir sources used for endovascular brachytherapy is presented. Three different source lengths were investigated. The calibration was performed in a solid phantom using a Farmer-type ionization chamber at source-to-detector distances ranging from 1 cm to 5 cm. The dosimetry protocol for medium-energy x-rays, extended with a volume-averaging correction factor, was used to convert the chamber reading to dose to water. The air kerma strength of the sources was determined as well. EGS4 Monte Carlo calculations were performed to determine the depth dose distribution at distances ranging from 0.6 mm to 10 cm from the source centre. In this way we were able to convert the absolute dose rate at 1 cm distance to the reference point chosen at 2 mm distance. The Monte Carlo results were confirmed by radiochromic film measurements performed with a double-exposure technique. The dwell times to deliver a dose of 14 Gy at the reference point were determined and compared with results given by the source supplier (CORDIS), who determined the dwell times from a Sievert integration technique based on the source activity. The results from both methods agreed to within 2% for the 12 sources that were evaluated. A Visual Basic routine that superimposes dose distributions, based on the Monte Carlo calculations and the in-phantom calibration, onto intravascular ultrasound images is presented. This routine can be used as an online treatment planning program.

  8. A higher order panel method for linearized supersonic flow

    NASA Technical Reports Server (NTRS)

    Ehlers, F. E.; Epton, M. A.; Johnson, F. T.; Magnus, A. E.; Rubbert, P. E.

    1979-01-01

    The basic integral equations of linearized supersonic theory for an advanced supersonic panel method are derived. Methods using only linearly varying source strength over each panel or only quadratic doublet strength over each panel gave good agreement with analytic solutions over cones and zero-thickness cambered wings. For three-dimensional bodies and wings of general shape, combined source and doublet panels with interior boundary conditions to eliminate the internal perturbations lead to a stable method providing good agreement with experiment. A panel system with all edges contiguous resulted from dividing the basic four-point non-planar panel into eight triangular subpanels, and the doublet strength was made continuous at all edges by a quadratic distribution over each subpanel. Superinclined panels were developed and tested on a simple nacelle and on an airplane model having engine inlets, with excellent results.

  9. Ghost imaging with bucket detection and point detection

    NASA Astrophysics Data System (ADS)

    Zhang, De-Jian; Yin, Rao; Wang, Tong-Biao; Liao, Qing-Hua; Li, Hong-Guo; Liao, Qinghong; Liu, Jiang-Tao

    2018-04-01

    We experimentally investigate ghost imaging with bucket detection and point detection using three types of illuminating sources: (a) a pseudo-thermal light source; (b) an amplitude-modulated true thermal light source; (c) an amplitude-modulated laser source. Experimental results show that the quality of ghost images reconstructed with true thermal light or a laser beam is insensitive to the choice of bucket or point detector; however, the quality of ghost images reconstructed with pseudo-thermal light is better in the bucket-detector case than in the point-detector case. Our theoretical analysis shows that this is due to the first-order transverse coherence of the illuminating source.
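
    The correlation reconstruction common to these ghost-imaging schemes can be sketched as the covariance between reference patterns and the detector signal; the object and speckle patterns below are synthetic.

    ```python
    # Ghost-image reconstruction as a covariance: G(x) = <I(x) S> - <I(x)><S>,
    # with synthetic speckle patterns and a toy transmissive object.
    import numpy as np

    rng = np.random.default_rng(0)
    n, shots = 32, 5000
    obj = np.zeros((n, n)); obj[10:22, 14:18] = 1.0    # toy object

    patterns = rng.random((shots, n, n))               # speckle realizations
    bucket = patterns.reshape(shots, -1) @ obj.ravel() # bucket-detector signal

    ghost = (patterns.reshape(shots, -1).T @ bucket) / shots
    ghost -= patterns.mean(axis=0).ravel() * bucket.mean()
    ghost = ghost.reshape(n, n)                        # correlates with obj
    ```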

  10. Strong ground motion simulation of the 2016 Kumamoto earthquake of April 16 using multiple point sources

    NASA Astrophysics Data System (ADS)

    Nagasaka, Yosuke; Nozu, Atsushi

    2017-02-01

    The pseudo point-source model approximates the rupture process on faults with multiple point sources for simulating strong ground motions. A simulation with this point-source model is conducted by combining a simple source spectrum following the omega-square model with a path spectrum, an empirical site amplification factor, and phase characteristics. Realistic waveforms can be synthesized using the empirical site amplification factor and phase models even though the source model is simple. The Kumamoto earthquake occurred on April 16, 2016, with MJMA 7.3. Many strong motions were recorded at stations around the source region, and some records were considered to be affected by the rupture directivity effect. This earthquake was therefore suitable for investigating the applicability of the pseudo point-source model, the current version of which does not consider the rupture directivity effect. Three subevents (point sources) were located on the fault plane, and the parameters of the simulation were determined. The simulated results were compared with the observed records at K-NET and KiK-net stations. The synthetic Fourier spectra and velocity waveforms generally explained the characteristics of the observed records, except for an underestimation in the low-frequency range. Troughs in the observed Fourier spectra were also well reproduced by placing multiple subevents near the hypocenter. The underestimation is presumably due to two factors: first, the pseudo point-source model targets subevents that generate strong ground motions and does not consider the shallow large slip; second, the current version of the model does not consider the rupture directivity effect. Consequently, strong pulses were not sufficiently reproduced at stations northeast of Subevent 3 such as KMM004, where the effect of rupture directivity was significant, while the amplitude was well reproduced at most other stations. This result indicates the need to improve the pseudo point-source model, for example by introducing an azimuth-dependent corner frequency, so that it can incorporate the effect of rupture directivity.
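
    The omega-square source amplitude spectrum assumed for each subevent can be sketched as follows; the seismic moment and corner frequency are placeholders, and no azimuth-dependent correction is included.

    ```python
    # Omega-square source amplitude spectrum: flat below the corner
    # frequency, falling as f^-2 above it. Parameter values are invented.
    import numpy as np

    def omega_square_spectrum(f, moment=1e18, fc=0.4):
        # |S(f)| proportional to M0 / (1 + (f/fc)^2)
        return moment / (1.0 + (f / fc) ** 2)

    f = np.logspace(-2, 1, 200)          # 0.01-10 Hz
    spec = omega_square_spectrum(f)
    ```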

  11. Optical Remote Sensing Method to Determine Strength of Non-point Sources

    DTIC Science & Technology

    2008-09-01

    site due to its location, which is convenient to both USEPA’s RTP campus and the ARCADIS-Durham office. The site also has appropriate NPSs to measure that are of interest to regulators. 3.2.4 Tinker Air Force Base... Existing methodology for measuring NPSs is not directly comparable to the proposed PI-ORS method because the new method provides higher quality and

  12. STATISTICS OF GAMMA-RAY POINT SOURCES BELOW THE FERMI DETECTION LIMIT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malyshev, Dmitry; Hogg, David W., E-mail: dm137@nyu.edu

    2011-09-10

    An analytic relation between the statistics of photons in pixels and the number counts of multi-photon point sources is used to constrain the distribution of gamma-ray point sources below the Fermi detection limit at energies above 1 GeV and at latitudes below and above 30 deg. The derived source-count distribution is consistent with the distribution found by the Fermi Collaboration based on the first Fermi point-source catalog. In particular, we find that the contribution of resolved and unresolved active galactic nuclei (AGNs) to the total gamma-ray flux is below 20%-25%. In the best-fit model, the AGN-like point-source fraction is 17% ± 2%. Using the fact that the Galactic emission varies across the sky while the extragalactic diffuse emission is isotropic, we put a lower limit of 51% on Galactic diffuse emission and an upper limit of 32% on the contribution from extragalactic weak sources, such as star-forming galaxies. Possible systematic uncertainties are discussed.

  13. Multi-source SO2 emission retrievals and consistency of satellite and surface measurements with reported emissions

    NASA Astrophysics Data System (ADS)

    Fioletov, Vitali; McLinden, Chris A.; Kharol, Shailesh K.; Krotkov, Nickolay A.; Li, Can; Joiner, Joanna; Moran, Michael D.; Vet, Robert; Visschedijk, Antoon J. H.; Denier van der Gon, Hugo A. C.

    2017-10-01

    Reported sulfur dioxide (SO2) emissions from US and Canadian sources have declined dramatically since the 1990s as a result of emission control measures. Observations from the Ozone Monitoring Instrument (OMI) on NASA's Aura satellite and ground-based in situ measurements are examined to verify whether the observed changes from SO2 abundance measurements are quantitatively consistent with the reported changes in emissions. To make this connection, a new method to link SO2 emissions and satellite SO2 measurements was developed. The method is based on fitting satellite SO2 vertical column densities (VCDs) to a set of functions of OMI pixel coordinates and wind speeds, where each function represents a statistical model of a plume from a single point source. The concept is first demonstrated using sources in North America and then applied to Europe. The correlation coefficient between OMI-measured VCDs (with a local bias removed) and SO2 VCDs derived here using reported emissions for 1° by 1° gridded data is 0.91 and the best-fit line has a slope near unity, confirming a very good agreement between observed SO2 VCDs and reported emissions. Having demonstrated their consistency, seasonal and annual mean SO2 VCD distributions are calculated, based on reported point-source emissions for the period 1980-2015, as would have been seen by OMI. This consistency is further substantiated as the emission-derived VCDs also show a high correlation with annual mean SO2 surface concentrations at 50 regional monitoring stations.

  14. A catalogue of AKARI FIS BSC extragalactic objects

    NASA Astrophysics Data System (ADS)

    Marton, Gabor; Toth, L. Viktor; Gyorgy Balazs, Lajos

    2015-08-01

    We combined photometric data of about 70 thousand point sources from the AKARI Far-Infrared Surveyor Bright Source Catalogue with AllWISE catalogue data to identify galaxies. We used Quadratic Discriminant Analysis (QDA) to classify our sources. The classification was based on a 6D parameter space that contained the AKARI [F65/F90], [F90/F140] and [F140/F160] and WISE W1-W2 colours, along with WISE W1 magnitudes and AKARI [F140] flux values. Sources were classified into three main object types: YSO candidates, evolved stars and galaxies. The training samples were SIMBAD entries of the input point sources wherever an associated SIMBAD object was found within a 30 arcsecond search radius. The QDA yielded more than 5000 AKARI galaxy candidate sources. The selection was tested by cross-correlating our AKARI extragalactic catalogue with the Revised IRAS-FSC Redshift Catalogue (RIFSCz); a very good match was found. A further classification attempt was also made to differentiate between extragalactic subtypes using Support Vector Machines (SVMs). The results of the various methods showed that we can confidently separate cirrus-dominated objects (type 1 of RIFSCz). Some of our “galaxy candidate” sources are associated with 2MASS extended objects and listed in the NASA Extragalactic Database, so far without clear proof of their extragalactic nature. Examples are presented in our poster. Finally, other AKARI extragalactic catalogues are also compared to our statistical selection.
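
    A QDA classifier over a 6-D feature space of this kind can be set up with scikit-learn; the feature matrix and labels below are random placeholders for the AKARI × AllWISE colours and SIMBAD-derived training classes.

    ```python
    # QDA classification into YSO candidates / evolved stars / galaxies from
    # a 6-D feature vector; X and y are synthetic stand-ins.
    import numpy as np
    from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

    rng = np.random.default_rng(42)
    X = rng.standard_normal((600, 6))   # [F65/F90], ..., W1-W2, W1, F140
    y = rng.integers(0, 3, 600)         # 0=YSO, 1=evolved star, 2=galaxy

    qda = QuadraticDiscriminantAnalysis()
    qda.fit(X, y)
    proba = qda.predict_proba(X[:5])    # class probabilities per source
    galaxy_candidates = np.where(qda.predict(X) == 2)[0]
    ```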

  15. MODELING PHOTOCHEMISTRY AND AEROSOL FORMATION IN POINT SOURCE PLUMES WITH THE CMAQ PLUME-IN-GRID

    EPA Science Inventory

    Emissions of nitrogen oxides and sulfur oxides from the tall stacks of major point sources are important precursors of a variety of photochemical oxidants and secondary aerosol species. Plumes released from point sources exhibit rather limited dimensions and their growth is gradu...

  16. X-ray Point Source Populations in Spiral and Elliptical Galaxies

    NASA Astrophysics Data System (ADS)

    Colbert, E.; Heckman, T.; Weaver, K.; Ptak, A.; Strickland, D.

    2001-12-01

    In the era of the Einstein and ASCA satellites, it was known that the total hard X-ray luminosity from non-AGN galaxies was fairly well correlated with the total blue luminosity. However, the origin of this hard component was not well understood. Possibilities considered included X-ray binaries, extended upscattered far-infrared light via the inverse-Compton process, extended hot 10^7 K gas (especially in elliptical galaxies), or even an active nucleus. Now, for the first time, we know from Chandra images that a significant amount of the total hard X-ray emission comes from individual X-ray point sources. We present here spatial and spectral analyses of Chandra data for X-ray point sources in a sample of ~40 galaxies, including both spiral galaxies (starbursts and non-starbursts) and elliptical galaxies. We discuss the relationship between the X-ray point-source population and the properties of the host galaxies. We show that the slopes of the point-source X-ray luminosity functions differ for different host galaxy types and discuss possible reasons why. We also present detailed X-ray spectral analyses of several of the most luminous X-ray point sources (i.e., IXOs, a.k.a. ULXs) and discuss various scenarios for their origin.

  17. 3D Semantic Labeling of ALS Data Based on Domain Adaption by Transferring and Fusing Random Forest Models

    NASA Astrophysics Data System (ADS)

    Wu, J.; Yao, W.; Zhang, J.; Li, Y.

    2018-04-01

    Labeling 3D point cloud data with traditional supervised learning methods requires considerable labelled samples, the collection of which is costly and time-consuming. This work adopts the domain adaption concept to transfer existing trained random forest classifiers (based on a source domain) to new data scenes (target domain), aiming to reduce the dependence of accurate 3D semantic labeling of point clouds on training samples from the new data scene. First, two random forest classifiers were trained with existing samples previously collected for other data; they differed in their decision tree construction algorithms, C4.5 with information gain ratio and CART with the Gini index. Second, four random forest classifiers adapted to the target domain were derived by transferring each tree in the source random forest models with two types of operations: structure expansion and reduction (SER) and structure transfer (STRUT). Finally, points in the target domain were labelled by fusing the four newly derived random forest classifiers using a weights-of-evidence-based fusion model (see the sketch below). To validate the method, experimental analysis was conducted using three datasets: one used as the source domain data (Vaihingen data for 3D Semantic Labelling) and two used as the target domain data from two cities in China (Jinmen city and Dunhuang city). Overall accuracies of 85.5% and 83.3% for 3D labelling were achieved for the Jinmen city and Dunhuang city data respectively, with only one-third of the newly labelled samples required in the cases without domain adaption.
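
    A hedged sketch of the final fusion step, weighting the class-probability outputs of the adapted classifiers; the paper's weights-of-evidence model is more elaborate, and the weights here are placeholders.

    ```python
    # Weighted fusion of several classifiers' class-probability outputs
    # (simplified stand-in for the weights-of-evidence fusion model).
    import numpy as np

    def fuse_predictions(prob_list, weights):
        # prob_list: list of (n_points, n_classes) probability arrays,
        # one per adapted classifier; weights: one scalar per classifier.
        w = np.asarray(weights, float)
        w = w / w.sum()
        fused = sum(wi * p for wi, p in zip(w, prob_list))
        return fused.argmax(axis=1)       # fused class label per point
    ```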

  18. A Deep XMM-Newton Survey of M33: Point-source Catalog, Source Detection, and Characterization of Overlapping Fields

    NASA Astrophysics Data System (ADS)

    Williams, Benjamin F.; Wold, Brian; Haberl, Frank; Garofali, Kristen; Blair, William P.; Gaetz, Terrance J.; Kuntz, K. D.; Long, Knox S.; Pannuti, Thomas G.; Pietsch, Wolfgang; Plucinsky, Paul P.; Winkler, P. Frank

    2015-05-01

    We have obtained a deep 8-field XMM-Newton mosaic of M33 covering the galaxy out to the D25 isophote and beyond, to a limiting 0.2-4.5 keV unabsorbed flux of 5 × 10^-16 erg cm^-2 s^-1 (L > 4 × 10^34 erg s^-1 at the distance of M33). These data allow complete coverage of the galaxy with high sensitivity to soft sources such as diffuse hot gas and supernova remnants (SNRs). Here, we describe the methods we used to identify and characterize 1296 point sources in the 8 fields. We compare our resulting source catalog to the literature, note variable sources, construct hardness ratios, classify soft sources, analyze the source density profile, and measure the X-ray luminosity function (XLF). As a result of the large effective area of XMM-Newton below 1 keV, the survey contains many new soft X-ray sources. The radial source density profile and XLF for the sources suggest that only ~15% of the 391 bright sources with L > 3.6 × 10^35 erg s^-1 are likely to be associated with M33, and more than a third of these are known SNRs. The log(N)-log(S) distribution, when corrected for background contamination, is a relatively flat power law with a differential index of 1.5, which suggests that many of the other M33 sources may be high-mass X-ray binaries. Finally, we note the discovery of an interesting new transient X-ray source, which we are unable to classify.

  19. A possible explanation of the parallel tracks in kilohertz quasi-periodic oscillations from low-mass-X-ray binaries

    NASA Astrophysics Data System (ADS)

    Shi, Chang-Sheng; Zhang, Shuang-Nan; Li, Xiang-Dong

    2018-05-01

    We recalculate the modes of the magnetohydrodynamic (MHD) waves in the MHD model (Shi, Zhang & Li 2014) of the kilohertz quasi-periodic oscillations (kHz QPOs) in neutron star low-mass X-ray binaries (NS-LMXBs), in which the compressed magnetosphere is considered. A method of point-by-point scanning over every parameter of a normal LMXB is proposed to determine the wave number in a NS-LMXB. The dependence of the twin kHz QPO frequencies on the accretion rate (Ṁ) is then obtained, with the wave number and magnetic field (B*) determined by our method. Based on the MHD model, a new explanation of the parallel tracks is presented: the slowly varying effective magnetic field leads to the shift of parallel tracks in a source. In this study, we obtain a simple power-law relation between the kHz QPO frequencies and Ṁ/B*^2 in those sources. Finally, we study the dependence of the kHz QPO frequencies on the spin, mass and radius of the neutron star. We find that the effective magnetic field together with the spin, mass and radius of a neutron star leads to the parallel tracks in different sources.

  20. Improved response functions for gamma-ray skyshine analyses

    NASA Astrophysics Data System (ADS)

    Shultis, J. K.; Faw, R. E.; Deng, X.

    1992-09-01

    A computationally simple method, based on line-beam response functions, is refined for estimating gamma skyshine dose rates. Critical to this method is the availability of an accurate approximation for the line-beam response function (LBRF). In this study, the LBRF is evaluated accurately with the point-kernel technique using recent photon interaction data. Various approximations to the LBRF are considered, and a three parameter formula is selected as the most practical approximation. By fitting the approximating formula to point-kernel results, a set of parameters is obtained that allows the LBRF to be quickly and accurately evaluated for energies between 0.01 and 15 MeV, for source-to-detector distances from 1 to 3000 m, and for beam angles from 0 to 180 degrees. This re-evaluation of the approximate LBRF gives better accuracy, especially at low energies, over a greater source-to-detector range than do previous LBRF approximations. A conical beam response function is also introduced for application to skyshine sources that are azimuthally symmetric about a vertical axis. The new response functions are then applied to three simple skyshine geometries (an open silo geometry, an infinite wall, and a rectangular four-wall building) and the results are compared to previous calculations and benchmark data.

  1. Regional Scale Simulations of Nitrate Leaching through Agricultural Soils of California

    NASA Astrophysics Data System (ADS)

    Diamantopoulos, E.; Walkinshaw, M.; O'Geen, A. T.; Harter, T.

    2016-12-01

    Nitrate is recognized as one of California's most widespread groundwater contaminants. As opposed to point sources, which are relatively easily identifiable sources of contamination, non-point sources of nitrate are diffuse and linked with widespread use of fertilizers in agricultural soils. California's agricultural regions have an incredible diversity of soils that encompass a huge range of properties. This complicates studies dealing with nitrate risk assessment, since important biological and physicochemical processes occur in the first meters of the vadose zone. The objective of this study is to evaluate all agricultural soils in California according to their potential for nitrate leaching, based on numerical simulations using the Richards equation. We conducted simulations for 6000 unique soil profiles (over 22000 soil horizons), taking into account the effects of climate, crop type, and irrigation and fertilization management scenarios. The final goal of this study is to evaluate simple management methods in terms of reduced nitrate leaching. We estimated drainage rates of water below the root zone and nitrate concentrations in the drain water at the regional scale. We present maps for all agricultural soils in California which can be used for risk assessment studies. Finally, our results indicate that adoption of simple irrigation and fertilization methods may significantly reduce nitrate leaching in vulnerable regions.

  2. The local density of optical states of a metasurface

    NASA Astrophysics Data System (ADS)

    Lunnemann, Per; Koenderink, A. Femius

    2016-02-01

    While metamaterials are often desirable for near-field functions, such as perfect lensing or cloaking, they are often quantified by their response to plane waves from the far field. Here, we present a theoretical analysis of the local density of states near lattices of discrete magnetic scatterers, i.e., the response to near-field excitation by a point source. Based on a point-dipole theory using Ewald summation and an array scanning method, we can swiftly and semi-analytically evaluate the local density of states (LDOS) for magnetoelectric point sources in front of an infinite two-dimensional (2D) lattice composed of arbitrary magnetoelectric dipole scatterers. The method takes into account radiation damping as well as all retarded electrodynamic interactions in a self-consistent manner. We show that a lattice of magnetic scatterers exhibits characteristic Drexhage oscillations. However, the oscillations are phase-shifted relative to those of an electrically scattering lattice, consistent with the difference expected for reflection off homogeneous magnetic and electric mirrors, respectively. Furthermore, we identify in which source-surface separation regimes the metasurface may be treated as a homogeneous interface, and in which homogenization fails. A strong frequency and in-plane position dependence of the LDOS close to the lattice reveals coupling to guided modes supported by the lattice.

  3. [Evaluation of environmental conditions: air, water and soil in areas of mining activity in Boyacá, Colombia].

    PubMed

    Agudelo-Calderón, Carlos A; Quiroz-Arcentales, Leonardo; García-Ubaque, Juan C; Robledo-Martínez, Rocío; García-Ubaque, Cesar A

    2016-02-01

    Objectives: To determine concentrations of PM10, mercury and lead in indoor air of homes, water sources and soil in municipalities near mining operations. Methods: Six points were evaluated in areas of influence and two in control areas. For measurements of indoor air, we used the NIOSH 600 method (PM10), NIOSH 6009 (mercury) and NIOSH 7300 (lead). For water analysis we used the IDEAM Guide for monitoring discharges. For soil analysis, we used the cold vapor technique (mercury) and atomic absorption (lead). Results: In almost all selected households, the average PM10 and mercury concentrations in indoor air exceeded applicable air quality standards. Concentrations of lead were below standard levels. In all water sources, high concentrations of lead were found, and at some points within the mining areas high levels of iron, aluminum and mercury were also found. In soil, mercury concentrations were below the detection level, and for lead, differences between the monitored points were observed. Conclusions: The results do not establish causal relationships between mining and the concentrations of these pollutants in the evaluated areas because of the multiplicity of sources in the area. However, such studies provide important information, useful to agents of the environmental health system and researchers. Installation of networks for environmental monitoring to obtain continuous reports is suggested.

  4. Improved response functions for gamma-ray skyshine analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shultis, J.K.; Faw, R.E.; Deng, X.

    1992-09-01

    A computationally simple method, based on line-beam response functions, is refined for estimating gamma skyshine dose rates. Critical to this method is the availability of an accurate approximation for the line-beam response function (LBRF). In this study the LBRF is evaluated accurately with the point-kernel technique using recent photon interaction data. Various approximations to the LBRF are considered, and a three parameter formula is selected as the most practical approximation. By fitting the approximating formula to point-kernel results, a set of parameters is obtained that allows the LBRF to be quickly and accurately evaluated for energies between 0.01 and 15 MeV, for source-to-detector distances from 1 to 3000 m, and for beam angles from 0 to 180 degrees. This reevaluation of the approximate LBRF gives better accuracy, especially at low energies, over a greater source-to-detector range than do previous LBRF approximations. A conical beam response function is also introduced for application to skyshine sources that are azimuthally symmetric about a vertical axis. The new response functions are then applied to three simple skyshine geometries (an open silo geometry, an infinite wall, and a rectangular four-wall building) and the results are compared to previous calculations and benchmark data.

  5. The fast neutron fluence and the activation detector activity calculations using the effective source method and the adjoint function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hep, J.; Konecna, A.; Krysl, V.

    2011-07-01

    This paper describes the application of the effective source method in forward calculations and of the adjoint method to the solution of fast neutron fluence and activation detector activities in the reactor pressure vessel (RPV) and RPV cavity of a VVER-440 reactor. Its objective is the demonstration of both methods on a practical task. The effective source method applies the Boltzmann transport operator to time-integrated source data in order to obtain neutron fluence and detector activities. By weighting the source data by the time-dependent decay of the detector activity, the result of the calculation is the detector activity. Alternatively, if the weighting is uniform with respect to time, the result is the fluence. The approach works because of the inherent linearity of radiation transport in non-multiplying, time-invariant media. Integrated in this way, the source data are referred to as the effective source. The effective source method thereby enables the analyst to replace numerous intensive transport calculations with a single transport calculation in which the time dependence and magnitude of the source are correctly represented. In this work, the effective source method has been expanded slightly in the following way: neutron source data were computed with a few-group calculation using the active core calculation code MOBY-DICK, and the follow-up neutron transport calculation was performed in multiple groups using the neutron transport code TORT. For comparison, an alternative method of calculation has been used, based upon adjoint functions of the Boltzmann transport equation. The three-dimensional (3-D) adjoint function for each required computational outcome has been obtained using the deterministic code TORT and the cross-section library BGL440. Adjoint functions appropriate to the required fast neutron flux density and neutron reaction rates have been calculated for several significant points within the RPV and RPV cavity of the VVER-440 reactor, located axially at the position of maximum power and at the position of the weld. Both methods (the effective source and the adjoint function) are briefly described in the present paper, and their application to the solution of fast neutron fluence and detector activities for the VVER-440 reactor is presented. (authors)

  6. Tools, information sources, and methods used in deciding on drug availability in HMOs.

    PubMed

    Barner, J C; Thomas, J

    1998-01-01

    The use and importance of specific decision-making tools, information sources, and drug-use management methods in determining drug availability and use in HMOs were studied. A questionnaire was sent to 303 randomly selected HMOs. Respondents were asked to rate their use of each of four formal decision-making tools and its relative importance, as well as the use and importance of eight information sources and 11 methods for managing drug availability and use, on a 5-point scale. The survey response rate was 28%. Approximately half of the respondents reported that their HMOs used decision analysis or multiattribute analysis in deciding on drug availability. If used, these tools were rated as very important. There were significant differences in levels of use by HMO type, membership size, and age. Journal articles and reference books were reported most often as information sources. Retrospective drug-use review was used very often and perceived to be very important in managing drug use. Other management methods were used only occasionally, but the importance placed on these tools when used ranged from moderately to very important. Older organizations used most of the management methods more often than did other HMOs. Decision analysis and multiattribute analysis were the most commonly used tools for deciding on which drugs to make available to HMO members, and reference books and journal articles were the most commonly used information sources. Retrospective and prospective drug-use reviews were the most commonly applied methods for managing HMO members' access to drugs.

  7. Determining Hypocentral Parameters for Local Earthquakes in 1-D Using a Genetic Algorithm and Two-point ray tracing

    NASA Astrophysics Data System (ADS)

    Kim, W.; Hahm, I.; Ahn, S. J.; Lim, D. H.

    2005-12-01

    This paper introduces a powerful method for determining hypocentral parameters for local earthquakes in 1-D using a genetic algorithm (GA) and two-point ray tracing. Using existing algorithms to determine hypocentral parameters is difficult, because these parameters can vary based on initial velocity models. We developed a new method to solve this problem by applying a GA to an existing algorithm, HYPO-71 (Lee and Lahr, 1975). The original HYPO-71 algorithm was modified by applying two-point ray tracing and a weighting factor with respect to the takeoff angle at the source to reduce errors from the ray path and hypocenter depth. Artificial data, without error, were generated by computer using two-point ray tracing in a true model, in which velocity structure and hypocentral parameters were known. The accuracy of the calculated results was easily determined by comparing calculated and actual values. We examined the accuracy of this method for several cases by changing the true and modeled layer numbers and thicknesses. The computational results show that this method determines nearly exact hypocentral parameters without depending on initial velocity models. Furthermore, accurate and nearly unique hypocentral parameters were obtained, although the number of modeled layers and thicknesses differed from those in the true model. Therefore, this method can be a useful tool for determining hypocentral parameters in regions where reliable local velocity values are unknown. This method also provides basic a priori information for 3-D studies. KEYWORDS: hypocentral parameters, genetic algorithm (GA), two-point ray tracing
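
    As an illustration of the search strategy, the following is a minimal sketch, not the authors' modified HYPO-71: a plain genetic algorithm minimizing RMS travel-time residuals, with straight-ray travel times in a uniform half-space standing in for two-point ray tracing. The station layout, velocity, and GA settings are invented for the example.

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy setup: surface stations, a "true" hypocenter, and straight-ray
        # travel times in a uniform half-space (stand-in for two-point ray tracing).
        stations = np.array([[0, 0], [10, 0], [20, 0], [5, 15], [15, 10]], float)  # x, y (km)
        v = 6.0                                     # km/s, assumed uniform velocity
        true_hypo = np.array([12.0, 6.0, 8.0])      # x, y, depth (km)

        def travel_times(hypo):
            dx = stations - hypo[:2]
            return np.sqrt((dx ** 2).sum(axis=1) + hypo[2] ** 2) / v

        observed = travel_times(true_hypo)

        def fitness(pop):
            # negative RMS residual: larger is better
            return -np.array([np.sqrt(np.mean((travel_times(h) - observed) ** 2))
                              for h in pop])

        # Plain GA: tournament selection, blend crossover, Gaussian mutation, elitism
        bounds = np.array([[0, 25], [0, 25], [0, 30]], float)
        pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(60, 3))
        for gen in range(200):
            f = fitness(pop)
            elite = pop[np.argmax(f)].copy()
            idx = rng.integers(0, len(pop), size=(len(pop), 2))
            parents = pop[np.where(f[idx[:, 0]] > f[idx[:, 1]], idx[:, 0], idx[:, 1])]
            children = 0.5 * (parents + parents[rng.permutation(len(parents))])
            children += rng.normal(0.0, 0.5, children.shape)   # mutation
            pop = np.clip(children, bounds[:, 0], bounds[:, 1])
            pop[0] = elite                                     # keep the best so far
        print("recovered hypocenter:", pop[np.argmax(fitness(pop))])  # near true_hypo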

  8. Discrimination between diffuse and point sources of arsenic at Zimapán, Hidalgo state, Mexico.

    PubMed

    Sracek, Ondra; Armienta, María Aurora; Rodríguez, Ramiro; Villaseñor, Guadalupe

    2010-01-01

    There are two principal sources of arsenic in Zimapán. Point sources are linked to mining and smelting activities, and especially to mine tailings. Diffuse sources are not well defined and are linked to regional flow systems in carbonate rocks. Both sources are caused by the oxidation of arsenic-rich sulfidic mineralization. Point sources are characterized by Ca-SO₄-HCO₃ type ground water and relatively enriched values of δD, δ¹⁸O, and δ³⁴S(SO₄). Diffuse sources are characterized by Ca-Na-HCO₃ type ground water and more depleted values of δD, δ¹⁸O, and δ³⁴S(SO₄). Values of δD and δ¹⁸O indicate a similar altitude of recharge for both arsenic sources and a stronger impact of evaporation for point sources in mine tailings. The two sources also have different values of δ³⁴S(SO₄), presumably due to different types of mineralization or isotopic zonality in the deposits. In Principal Component Analysis (PCA), principal component 1 (PC1), which describes the impact of sulfide oxidation and neutralization by the dissolution of carbonates, has higher values in samples from point sources. In spite of similar concentrations of As in ground water affected by diffuse and point sources (mean values 0.21 mg L⁻¹ and 0.31 mg L⁻¹, respectively, for the years 2003 to 2008), the diffuse sources have more impact on the health of the population in Zimapán. This is caused by the extraction of ground water from wells tapping the regional flow system; in contrast, wells located in the proximity of mine tailings are generally not used for water supply.

  9. Interplanetary Scintillation studies with the Murchison Wide-field Array III: Comparison of source counts and densities for radio sources and their sub-arcsecond components at 162 MHz

    NASA Astrophysics Data System (ADS)

    Chhetri, R.; Ekers, R. D.; Morgan, J.; Macquart, J.-P.; Franzen, T. M. O.

    2018-06-01

    We use Murchison Widefield Array observations of interplanetary scintillation (IPS) to determine the source counts of point (<0.3 arcsecond extent) sources and of all sources with some subarcsecond structure, at 162 MHz. We have developed the methodology to derive these counts directly from the IPS observables, while taking into account changes in sensitivity across the survey area. The counts of sources with compact structure follow the behaviour of the dominant source population above ~3 Jy, but below this they show Euclidean behaviour. We compare our counts to those predicted by simulations and find good agreement for our counts of sources with compact structure, but significant disagreement for point source counts. Using low radio frequency SEDs from the GLEAM survey, we classify point sources as Compact Steep-Spectrum (CSS), flat spectrum, or peaked. If we consider the CSS sources to be the more evolved counterparts of the peaked sources, the two categories combined comprise approximately 80% of the point source population. We calculate densities of potential calibrators brighter than 0.4 Jy at low frequencies and find 0.2 sources per square degree for point sources, rising to 0.7 sources per square degree if sources with more complex arcsecond structure are included. We extrapolate to estimate 4.6 sources per square degree at 0.04 Jy. We find that a peaked spectrum is an excellent predictor of compactness at low frequencies, increasing the number of good calibrators by a factor of three compared to the usual flat-spectrum criterion.

  10. Site correction of stochastic simulation in southwestern Taiwan

    NASA Astrophysics Data System (ADS)

    Lun Huang, Cong; Wen, Kuo Liang; Huang, Jyun Yan

    2014-05-01

    Peak ground acceleration (PGA) during a disastrous earthquake is of concern in both civil engineering and seismology. At present, ground motion prediction equations are widely used by engineers for PGA estimation. However, the local site effect is another important factor in strong motion prediction. For example, in 1985 Mexico City, 400 km from the epicenter, suffered massive damage due to seismic wave amplification in the local alluvial layers (Anderson et al., 1986). Past studies have shown that the stochastic method performs well in simulating ground motion at rock sites (Beresnev and Atkinson, 1998a; Roumelioti and Beresnev, 2003). In this study, site correction was conducted with empirical transfer functions applied to the rock site response from stochastic point-source (Boore, 2005) and finite-fault (Boore, 2009) methods. The errors between the simulated and observed Fourier spectra and PGA were calculated, and the estimated PGA was further compared to the result calculated from a ground motion prediction equation. The earthquake data used in this study were recorded by the Taiwan Strong Motion Instrumentation Program (TSMIP) from 1991 to 2012; the study area is located in south-western Taiwan. The empirical transfer function was generated by calculating the spectral ratio between the alluvial site and a rock site (Borcherdt, 1970). Due to the lack of a reference rock site station in this area, the rock site ground motion was instead generated with a stochastic point-source model. Several target events were chosen for stochastic point-source simulation to the half-space, and the empirical transfer function for each station was then multiplied by the simulated half-space response. Finally, we focused on two target events: the 1999 Chi-Chi earthquake (Mw=7.6) and the 2010 Jiashian earthquake (Mw=6.4). Since a large event may involve a complex rupture mechanism, the asperity and delay time of each sub-fault must be considered; both the stochastic point-source and the finite-fault model were therefore used to check the result of our correction.
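
    The site-correction step described above amounts to forming a smoothed spectral ratio and applying it to a simulated rock-site motion. The sketch below shows one plausible implementation under stated assumptions (signals of equal length and sampling rate, simple moving-average smoothing, reuse of the simulated phase); the variable names are illustrative, not from the study.

        import numpy as np

        # Borcherdt-style empirical transfer function: the smoothed spectral
        # ratio of a soil-site recording to a (here, simulated) rock-site
        # reference, later applied to a newly simulated rock-site motion.

        def fourier_amplitude(signal, fs):
            spec = np.abs(np.fft.rfft(signal))
            freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
            return freqs, spec

        def smooth(x, n=9):
            return np.convolve(x, np.ones(n) / n, mode="same")

        def empirical_transfer(soil_rec, rock_sim, fs):
            f, soil_spec = fourier_amplitude(soil_rec, fs)
            _, rock_spec = fourier_amplitude(rock_sim, fs)
            return f, smooth(soil_spec) / smooth(rock_spec)

        # Site correction: multiply a simulated rock-site spectrum by the
        # transfer function, then invert back to a time series using the
        # simulated phase (one simple choice among several).
        def apply_site_correction(rock_sim_new, transfer):
            spec = np.fft.rfft(rock_sim_new)
            return np.fft.irfft(spec * transfer, n=len(rock_sim_new))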

  11. A comparison of partially specular radiosity and ray tracing for room acoustics modeling

    NASA Astrophysics Data System (ADS)

    Beamer, C. Walter; Muehleisen, Ralph T.

    2005-04-01

    Partially specular (PS) radiosity is an extended form of the general radiosity method. Acoustic radiosity is a form of bulk transfer of radiant acoustic energy. This bulk transfer is accomplished through a system of energy balance equations that relate the bulk energy transfer of each surface in the system to all other surfaces in the system. Until now, acoustic radiosity has been limited to modeling only diffuse surface reflection. The new PS acoustic radiosity method can model all real surface types: diffuse, specular, and everything in between. PS acoustic radiosity also models all real source types and distributions, not just point sources. The results of the PS acoustic radiosity method are compared to those of well-known ray tracing programs. [Work supported by NSF.]

  12. Automatic Classification of Trees from Laser Scanning Point Clouds

    NASA Astrophysics Data System (ADS)

    Sirmacek, B.; Lindenbergh, R.

    2015-08-01

    Development of laser scanning technologies has promoted tree monitoring studies to a new level, as laser scanning point clouds enable accurate 3D measurements in a fast and environmentally friendly manner. In this paper, we introduce a probability-matrix-based algorithm for automatically classifying laser scanning point clouds into 'tree' and 'non-tree' classes. Our method uses the 3D coordinates of the laser scanning points as input and generates a new point cloud in which each point holds a label indicating whether it belongs to the 'tree' or 'non-tree' class. To do so, a grid surface is assigned to the lowest height level of the point cloud. The grid cells are filled with probability values calculated from the point density above each cell. Since tree trunk locations appear as very high values in the probability matrix, selecting the local maxima of the grid surface helps detect the tree trunks, and further points are assigned to a trunk if they appear in its close proximity. Since heavy mathematical computations (such as point cloud organization, detailed 3D shape detection methods, or graph network generation) are not required, the proposed algorithm works very fast compared to existing methods. The tree classification results are found to be reliable even on point clouds of cities containing many different objects. The most significant weakness is that false detection of light poles, traffic signs, and other objects close to trees cannot be prevented. Nevertheless, the experimental results on mobile and airborne laser scanning point clouds indicate the possible usage of the algorithm as an important step for tree growth observation, tree counting, and similar applications. While laser scanning point clouds make it possible to classify even very small trees, the accuracy of the results is reduced in low-point-density areas farther from the scanning location. These advantages and disadvantages of the two laser scanning point cloud sources are discussed in detail.
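
    A minimal sketch of the grid-probability step is given below; the cell size, density threshold, and 8-neighbour local-maximum test are illustrative assumptions, not the authors' exact parameters.

        import numpy as np

        # Project points onto a ground grid, score each cell by the density of
        # points above it, and take local maxima as candidate trunk locations.

        def trunk_candidates(points, cell=0.5, min_density=30):
            """points: (N, 3) array of x, y, z laser returns."""
            xy = points[:, :2]
            origin = xy.min(axis=0)
            idx = np.floor((xy - origin) / cell).astype(int)
            shape = idx.max(axis=0) + 1
            density = np.zeros(shape)
            np.add.at(density, (idx[:, 0], idx[:, 1]), 1)   # points per cell
            # local maxima: denser than all 8 neighbours and above threshold
            pad = np.pad(density, 1, constant_values=-1)
            neigh = np.stack([pad[i:i + shape[0], j:j + shape[1]]
                              for i in range(3) for j in range(3) if (i, j) != (1, 1)])
            is_max = (density > neigh.max(axis=0)) & (density >= min_density)
            cells = np.argwhere(is_max)
            return origin + (cells + 0.5) * cell             # trunk x, y estimates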

  13. A Data Cleaning Method for Big Trace Data Using Movement Consistency

    PubMed Central

    Tang, Luliang; Zhang, Xia; Li, Qingquan

    2018-01-01

    Given the popularization of GPS technologies, the massive amount of spatiotemporal GPS traces collected by vehicles is becoming a new kind of big data source for urban geographic information extraction. The growing volume of the dataset, however, creates processing and management difficulties, while its low quality generates uncertainties when investigating human activities. Based on the error distribution law and position accuracy of GPS data, we propose in this paper a data cleaning method for this kind of spatial big data using movement consistency. First, a trajectory is partitioned into a set of sub-trajectories using movement characteristic points: GPS points indicating that the motion status of the vehicle has changed from one state to another are regarded as the movement characteristic points. Then, GPS data are cleaned based on the similarities of GPS points and the movement consistency model of the sub-trajectory. The movement consistency model is built using the random sample consensus algorithm, exploiting the high spatial consistency of high-quality GPS data. The proposed method is evaluated in extensive experiments, using GPS trajectories generated by a sample of vehicles over a 7-day period in Wuhan city, China. The results show the effectiveness and efficiency of the proposed method. PMID:29522456
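
    The movement-consistency cleaning can be illustrated with a toy RANSAC pass over one sub-trajectory. The sketch below substitutes a straight-line motion model for the paper's richer consistency model, so it should be read as the flavour of the approach rather than its implementation.

        import numpy as np

        rng = np.random.default_rng(1)

        # Repeatedly fit a simple motion model (a straight line here) to random
        # minimal samples, keep the largest inlier set, and drop points farther
        # than `tol` from the best model.

        def ransac_clean(track, n_iter=200, tol=5.0):
            """track: (N, 2) array of projected GPS positions of one sub-trajectory."""
            best_inliers = np.zeros(len(track), bool)
            for _ in range(n_iter):
                i, j = rng.choice(len(track), size=2, replace=False)
                p, q = track[i], track[j]
                d = q - p
                norm = np.hypot(*d)
                if norm == 0:
                    continue
                # perpendicular distance of every point to the candidate line
                dist = np.abs(d[0] * (track[:, 1] - p[1])
                              - d[1] * (track[:, 0] - p[0])) / norm
                inliers = dist < tol
                if inliers.sum() > best_inliers.sum():
                    best_inliers = inliers
            return track[best_inliers]          # cleaned sub-trajectory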

  14. Impurity Correction Techniques Applied to Existing Doping Measurements of Impurities in Zinc

    NASA Astrophysics Data System (ADS)

    Pearce, J. V.; Sun, J. P.; Zhang, J. T.; Deng, X. L.

    2017-01-01

    Impurities represent the most significant source of uncertainty in most metal fixed points used for the realization of the International Temperature Scale of 1990 (ITS-90). There are a number of different methods for quantifying the effect of impurities on the freezing temperature of ITS-90 fixed points, many of which rely on an accurate knowledge of the liquidus slope in the limit of low concentration. A key method of determining the liquidus slope is to measure the freezing temperature of a fixed-point material as it is progressively doped with a known amount of impurity. Recently, a series of measurements of the freezing and melting temperatures of 'slim' Zn fixed-point cells doped with Ag, Fe, Ni, and Pb was presented. Here, additional measurements of the Zn-X system are presented using Ga as a dopant, and the data (Zn-Ag, Zn-Fe, Zn-Ni, Zn-Pb, and Zn-Ga) have been re-analyzed to demonstrate the use of a fitting method based on Scheil solidification, applied to both melting and freezing curves. In addition, the utility of the Sum of Individual Estimates method is explored for these systems, in the context of a recently enhanced database of liquidus slopes of impurities in Zn in the limit of low concentration.

  15. Using SPARROW to Model Total Nitrogen Sources, and Transport in Rivers and Streams of California and Adjacent States, U.S.A

    NASA Astrophysics Data System (ADS)

    Saleh, D.; Domagalski, J. L.

    2012-12-01

    Sources and factors affecting the transport of total nitrogen are being evaluated for a study area that covers most of California and some areas in Oregon and Nevada, using the SPARROW model (SPAtially Referenced Regression On Watershed attributes) developed by the U.S. Geological Survey. Mass loads of total nitrogen calculated for monitoring sites at stream gauging stations are regressed against land-use factors affecting nitrogen transport, including fertilizer use, recharge, atmospheric deposition, stream characteristics, and other factors, to understand how total nitrogen is transported under average conditions. SPARROW models have been used successfully in other parts of the country to understand how nutrients are transported and how management strategies can be formulated, such as with Total Maximum Daily Load (TMDL) assessments. Fertilizer use, atmospheric deposition, and climatic data were obtained for 2002, and loads for that year were calculated for monitored streams and point sources (mostly wastewater treatment plants). The stream loads were calculated using the adjusted maximum likelihood estimation (AMLE) method. River discharge and nitrogen concentrations were de-trended in these calculations in order to eliminate the effect of temporal changes on stream load. Effluent discharge information as well as total nitrogen concentrations from point sources were obtained from USEPA databases and from facility records. The model indicates that atmospheric deposition and fertilizer use account for a large percentage of the total nitrogen load in many of the larger watersheds throughout the study area. Point sources, on the other hand, are generally localized around large cities, are considered insignificant sources, and account for a small percentage of the total nitrogen loads throughout the study area.

  16. Efficient terrestrial laser scan segmentation exploiting data structure

    NASA Astrophysics Data System (ADS)

    Mahmoudabadi, Hamid; Olsen, Michael J.; Todorovic, Sinisa

    2016-09-01

    New technologies such as lidar enable the rapid collection of massive datasets to model a 3D scene as a point cloud. However, while hardware technology continues to advance, processing 3D point clouds into informative models remains complex and time consuming. A common approach to increase processing efficiency is to segment the point cloud into smaller sections. This paper proposes a novel approach for point cloud segmentation using computer vision algorithms to analyze panoramic representations of individual laser scans. These panoramas can be quickly created using an inherent neighborhood structure that is established during the scanning process, which scans at fixed angular increments in a cylindrical or spherical coordinate system. In the proposed approach, a selected image segmentation algorithm is applied to several input layers exploiting this angular structure, including laser intensity, range, normal vectors, and color information. The segments are then mapped back to the 3D point cloud so that modeling can be completed more efficiently. This approach does not depend on pre-defined mathematical models or on setting parameters for them. Unlike common geometrical point cloud segmentation methods, the proposed method employs colorimetric and intensity data as an additional source of information. The proposed algorithm is demonstrated on several datasets encompassing a variety of scenes and objects. Results show a very high perceptual (visual) quality of segmentation and thereby the feasibility of the proposed algorithm. The proposed method is also more efficient than Random Sample Consensus (RANSAC), a common approach for point cloud segmentation.

  17. Near real time water quality monitoring of Chivero and Manyame lakes of Zimbabwe

    NASA Astrophysics Data System (ADS)

    Muchini, Ronald; Gumindoga, Webster; Togarepi, Sydney; Pinias Masarira, Tarirai; Dube, Timothy

    2018-05-01

    Zimbabwe's water resources are under pressure from both point and non-point sources of pollution, hence the need for regular and synoptic assessment. In-situ and laboratory-based methods of water quality monitoring are point based and do not provide synoptic coverage of the lakes. This paper presents novel methods for retrieving water quality parameters in Chivero and Manyame lakes, Zimbabwe, from remotely sensed imagery, with the remotely sensed water quality parameters validated against in-situ data. It also presents an application for automated retrieval of those parameters, developed in VB6, as well as a web portal for disseminating the water quality information to relevant stakeholders. The web portal is developed using GeoServer, OpenLayers, and HTML. Results show the spatial variation of water quality and demonstrate an automated remote sensing and GIS system with a web front end for disseminating water quality information.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aartsen, M. G.; Abraham, K.; Ackermann, M.

    Observation of a point source of astrophysical neutrinos would be a “smoking gun” signature of a cosmic-ray accelerator. While IceCube has recently discovered a diffuse flux of astrophysical neutrinos, no localized point source has been observed. Previous IceCube searches for point sources in the southern sky were restricted by either an energy threshold above a few hundred TeV or poor neutrino angular resolution. Here we present a search for southern sky point sources with greatly improved sensitivities to neutrinos with energies below 100 TeV. By selecting charged-current ν_μ interactions inside the detector, we reduce the atmospheric background while retaining efficiency for astrophysical neutrino-induced events reconstructed with sub-degree angular resolution. The new event sample covers three years of detector data and leads to a factor of 10 improvement in sensitivity to point sources emitting below 100 TeV in the southern sky. No statistically significant evidence of point sources was found, and upper limits are set on neutrino emission from individual sources. A posteriori analysis of the highest-energy (∼100 TeV) starting event in the sample found that this event alone represents a 2.8σ deviation from the hypothesis that the data consist only of atmospheric background.

  19. Information entropy to measure the spatial and temporal complexity of solute transport in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Li, Weiyao; Huang, Guanhua; Xiong, Yunwu

    2016-04-01

    The complexity of the spatial structure of porous media and the randomness of groundwater recharge and discharge (rainfall, runoff, etc.) make groundwater movement complex, and the physical and chemical interactions between groundwater and porous media make solute transport in the medium more complicated still. An appropriate method to describe this complexity is essential when studying solute transport and conversion in porous media. Information entropy can measure uncertainty and disorder; we therefore used information entropy theory to investigate the complexity of solute transport in heterogeneous porous media and to explore the connection between information entropy and that complexity. Based on Markov theory, a two-dimensional stochastic field of hydraulic conductivity (K) was generated by transition probability. Flow and solute transport models were established under four conditions (instantaneous point source, continuous point source, instantaneous line source, and continuous line source). The spatial and temporal complexity of the solute transport process was characterized and evaluated using spatial moments and information entropy. Results indicated that the entropy increased as the complexity of the solute transport process increased. For the point source, the one-dimensional entropy of solute concentration first increased and then decreased along the X and Y directions. As time increased, the entropy peak value remained essentially unchanged, while the peak position migrated along the flow direction (X direction) and approximately coincided with the centroid position. With increasing time, the spatial variability and complexity of the solute concentration increase, which results in increases of the second-order spatial moment and the two-dimensional entropy. The information entropy of the line source was higher than that of the point source, and the solute entropy obtained from continuous input was higher than that from instantaneous input. As the average length of the lithofacies increased, the continuity of the media increased, the complexity of flow and solute transport weakened, and the corresponding information entropy also decreased. Longitudinal macrodispersivity declined slightly at early times and then rose. The spatial and temporal distribution of the solute had significant impacts on the information entropy, and the information entropy could reflect changes in the solute distribution. Information entropy thus appears to be a tool for characterizing the spatial and temporal complexity of solute migration, and provides a reference for future research.
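
    The entropy measure implied by the abstract can be sketched directly: normalize the concentration field into a probability distribution and take its Shannon entropy, which grows as the plume spreads. The Gaussian-plume example and its parameters below are illustrative only.

        import numpy as np

        def concentration_entropy(conc):
            """Shannon entropy of a (possibly 2-D) concentration field."""
            p = conc / conc.sum()
            p = p[p > 0]                       # 0 log 0 -> 0
            return -(p * np.log(p)).sum()

        # Example: a Gaussian plume spreading with time -> entropy grows
        x, y = np.meshgrid(np.linspace(0, 100, 200), np.linspace(0, 50, 100))
        for t in (1.0, 5.0, 25.0):
            sigma = np.sqrt(2 * 0.5 * t)       # dispersion coefficient D = 0.5
            c = np.exp(-((x - 10 - 1.0 * t) ** 2 + (y - 25) ** 2) / (2 * sigma ** 2))
            print(t, concentration_entropy(c))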

  20. Method for quick thermal tolerancing of optical systems

    NASA Astrophysics Data System (ADS)

    Werschnik, J.; Uhlendorf, K.

    2016-09-01

    Optical systems for lithography (projection lenses), inspection (micro-objectives), or laser material processing usually have tight specifications regarding focus and wave-front stability. The same is true of the field-dependent properties; projection lenses in particular have tight specifications on field curvature, magnification, and distortion. Unwanted heating, from either internal or external sources, leads to undesired changes in the above properties. In this work we show an elegant and fast method to analyze the thermal sensitivity using ZEMAX. The key point of this method is using the thermal changes of the lens data from the multi-configuration editor as the starting point for a (standard) tolerance analysis. Knowing the sensitivity, we can either define requirements on the environment or use it to systematically improve the thermal behavior of the lens. We demonstrate this method for a typical projection lens, for which we minimized the thermal field curvature.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Dong; Heidelberger, Philip; Sugawara, Yutaka

    An apparatus and method for extending the scalability and improving the partitionability of networks that contain all-to-all links for transporting packet traffic from a source endpoint to a destination endpoint with low per-endpoint (per-server) cost and a small number of hops. An all-to-all wiring in the baseline topology is decomposed into smaller all-to-all components in which each smaller all-to-all connection is replaced with a star topology by using global switches. Stacking multiple copies of the star-topology baseline network creates a multi-planed switching topology for transporting packet traffic. The point-to-point unified stacking method uses global switch wiring methods to connect multiple planes of a baseline topology through the global switches, creating a large network size with a low number of hops, i.e., low network latency. The grouped unified stacking method increases the scalability (network size) of a stacked topology.

  2. Urban sound energy reduction by means of sound barriers

    NASA Astrophysics Data System (ADS)

    Iordache, Vlad; Ionita, Mihai Vlad

    2018-02-01

    In the urban environment, the various heating, ventilation, and air conditioning appliances designed to maintain indoor comfort become vectors of urban acoustic pollution due to the sound energy they produce. Acoustic barriers are the recommended method for sound energy reduction in the urban environment, but the current method for sizing these barriers is laborious and impractical for arbitrary 3D locations of the noisy equipment and the reception point. In this study we develop, based on the same method, a new simplified tool for acoustic barrier sizing that maintains the precision of the classical method. Abacuses (design charts) for acoustic barrier sizing are built that can be used for different 3D locations of the source and reception points, for several frequencies, and for several acoustic barrier heights. The case study presented in the article confirms the rapidity and ease of use of these abacuses in the design of acoustic barriers.
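
    For orientation, the classical single-screen estimate that underlies barrier-sizing charts of this kind is Maekawa's empirical relation, Δ ≈ 10 log₁₀(3 + 20N) with Fresnel number N = 2δ/λ, where δ is the path-length difference over the barrier edge. The sketch below implements that textbook formula; it is a stand-in, not necessarily the exact method behind the study's abacuses.

        import numpy as np

        # Maekawa's empirical thin-screen attenuation; positions are 3-D
        # coordinates in metres, `barrier_top` is a point on the diffracting edge.

        def barrier_attenuation_db(source, receiver, barrier_top, freq, c=343.0):
            source, receiver, barrier_top = map(np.asarray, (source, receiver, barrier_top))
            direct = np.linalg.norm(receiver - source)
            over = (np.linalg.norm(barrier_top - source)
                    + np.linalg.norm(receiver - barrier_top))
            delta = over - direct                 # path-length difference
            N = 2.0 * delta * freq / c            # Fresnel number, N = 2*delta/lambda
            return 10.0 * np.log10(3.0 + 20.0 * N) if N > 0 else 0.0

        # Example: a 3 m high edge between a rooftop unit and a window
        print(barrier_attenuation_db((0, 0, 1), (10, 0, 1.5), (4, 0, 3), freq=500))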

  3. Grid-based precision aim system and method for disrupting suspect objects

    DOEpatents

    Gladwell, Thomas Scott; Garretson, Justin; Hobart, Clinton G.; Monda, Mark J.

    2014-06-10

    A system and method for disrupting at least one component of a suspect object is provided. The system has a source for passing radiation through the suspect object, a grid board positionable adjacent the suspect object (the grid board having a plurality of grid areas, the radiation from the source passing through the grid board), a screen for receiving the radiation passing through the suspect object and generating at least one image, a weapon for deploying a discharge, and a targeting unit for displaying the image of the suspect object and aiming the weapon according to a disruption point on the displayed image and deploying the discharge into the suspect object to disable the suspect object.

  4. A new point contact surface acoustic wave transducer for measurement of acoustoelastic effect of polymethylmethacrylate.

    PubMed

    Lee, Yung-Chun; Kuo, Shi Hoa

    2004-01-01

    A new acoustic transducer and measurement method have been developed for precise measurement of surface wave velocity. The method is used to investigate acoustoelastic effects for waves propagating on the surface of a polymethylmethacrylate (PMMA) sample. The transducer uses two miniature conical PZT elements as the acoustic wave transmitter and receiver on the sample surface; hence, it can be viewed as a point-source/point-receiver transducer. Acoustic waves are excited and detected with the PZT elements, and the wave velocity can be accurately determined with a cross-correlation waveform comparison method. The transducer and its measurement method are particularly sensitive and accurate in determining small changes in wave velocity; they are therefore applied to the measurement of acoustoelastic effects in PMMA materials. Both the surface skimming longitudinal wave and the Rayleigh surface wave can be simultaneously excited and measured. With a uniaxially loaded PMMA sample, the acoustoelastic effects for both the surface skimming longitudinal wave and the Rayleigh wave are measured, and the acoustoelastic coefficients for both types of surface wave motion are simultaneously determined. The transducer and its measurement method provide a practical way to measure surface stresses nondestructively.
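
    The cross-correlation waveform comparison reduces to finding the lag that maximizes the correlation between two recorded waveforms; a parabolic fit through the peak gives sub-sample precision. The sketch below is a generic implementation of that idea, not the authors' instrument code.

        import numpy as np

        def delay_seconds(ref, sig, fs):
            """Arrival-time shift of `sig` relative to `ref` (sampling rate fs)."""
            corr = np.correlate(sig, ref, mode="full")
            k = np.argmax(corr)
            # sub-sample refinement by fitting a parabola through the peak
            if 0 < k < len(corr) - 1:
                y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
                k = k + 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
            return (k - (len(ref) - 1)) / fs

        # With a known propagation distance d (m), velocity v = d / t, and a
        # small delay change dt maps to a velocity change dv/v ~= -dt/t.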

  5. A dose assessment method for arbitrary geometries with virtual reality in the nuclear facilities decommissioning

    NASA Astrophysics Data System (ADS)

    Chao, Nan; Liu, Yong-kuo; Xia, Hong; Ayodeji, Abiodun; Bai, Lu

    2018-03-01

    During the decommissioning of nuclear facilities, a large number of cutting and demolition activities are performed, which results in frequent changes in the structure and produces many irregular objects. In order to assess dose rates during the cutting and demolition process, a flexible dose assessment method for arbitrary geometries and radiation sources was proposed, based on virtual reality technology and the Point-Kernel method. The initial geometry is designed with three-dimensional computer-aided design tools. An approximate model is built automatically during geometric modeling via three procedures, namely space division, rough modeling of the body, and fine modeling of the surface, in combination with the collision detection of virtual reality technology. Point kernels are then generated by sampling within the approximate model, and once the material and radiometric attributes are input, dose rates can be calculated with the Point-Kernel method. To account for radiation scattering effects, buildup factors are calculated with the Geometric-Progression fitting formula. The effectiveness and accuracy of the proposed method were verified by simulations using different geometries, with the dose rate results compared against those derived from the CIDEC code, the MCNP code, and experimental measurements.
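
    A bare-bones Point-Kernel summation looks like the following; the dose-conversion constant and the linear buildup term are placeholders (real codes use nuclide-specific conversion data and Geometric-Progression buildup fits, as the abstract notes), and the sampled cube source is invented for the example.

        import numpy as np

        # Sum attenuated inverse-square contributions from sampled point
        # kernels, each scaled by a buildup factor B(mu * r).

        def point_kernel_dose(kernels, strengths, receptor, mu, k=1.0):
            """kernels: (N, 3) sample positions (m); strengths: (N,) source
            strengths (photons/s); mu: linear attenuation coefficient (1/m);
            k: flux-to-dose conversion constant (placeholder)."""
            r = np.linalg.norm(kernels - receptor, axis=1)
            buildup = 1.0 + mu * r               # crude stand-in for a G-P fit
            return k * np.sum(strengths * buildup * np.exp(-mu * r)
                              / (4 * np.pi * r ** 2))

        # Example: 1000 kernels sampled uniformly in a 1 m cube of activity
        rng = np.random.default_rng(2)
        kernels = rng.uniform(0, 1, (1000, 3))
        strengths = np.full(1000, 1e6 / 1000)    # total 1e6 photons/s, split evenly
        print(point_kernel_dose(kernels, strengths, np.array([3.0, 0.5, 0.5]), mu=0.07))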

  6. Study on super-resolution three-dimensional range-gated imaging technology

    NASA Astrophysics Data System (ADS)

    Guo, Huichao; Sun, Huayan; Wang, Shuai; Fan, Youchen; Li, Yuanmiao

    2018-04-01

    Range-gated three-dimensional imaging technology has become a research hotspot in recent years because of its high spatial resolution, high range accuracy, long range, and simultaneous capture of target reflectivity information. Based on the principle of the intensity-related method, this paper carries out theoretical analysis and experimental research. The experimental system adopts a high-power pulsed semiconductor laser as the light source and a gated ICCD as the imaging device, and allows flexible adjustment of imaging depth and distance to realize different working modes. An imaging experiment with small imaging depth was carried out on a building 500 m away, and 26 groups of images were obtained with a distance step of 1.5 m. This paper analyzes the calculation of the 3D point cloud based on the triangle method; a 15 m depth slice of the target 3D point cloud was obtained from two frames of images, with a distance precision better than 0.5 m. The influences of signal-to-noise ratio, illumination uniformity, and image brightness on distance accuracy are analyzed. Based on a comparison with the time-slicing method, a method for improving the linearity of the point cloud is proposed.

  7. Combining 3d Volume and Mesh Models for Representing Complicated Heritage Buildings

    NASA Astrophysics Data System (ADS)

    Tsai, F.; Chang, H.; Lin, Y.-W.

    2017-08-01

    This study developed a simple but effective strategy to combine 3D volume and mesh models for representing complicated heritage buildings and structures. The idea is to seamlessly integrate 3D parametric or polyhedral models and mesh-based digital surfaces to generate a hybrid 3D model that takes advantage of both modeling methods. The proposed hybrid model generation framework is separated into three phases. Firstly, after acquiring or generating 3D point clouds of the target, the 3D points are partitioned into different groups. Secondly, a parametric or polyhedral model of each group is generated based on plane- and surface-fitting algorithms to represent the basic structure of that region; a "bare-bones" model of the target can subsequently be constructed by connecting all 3D volume element models. In the third phase, the constructed bare-bones model is used as a mask to remove the points it encloses from the original point clouds. The remaining points are then connected to form 3D surface mesh patches. The boundary points of each surface patch are identified and projected onto the surfaces of the bare-bones model. Finally, new meshes are created to connect the projected points and original mesh boundaries, integrating the mesh surfaces with the 3D volume model. The proposed method was applied to an open-source point cloud data set and to point clouds of a local historical structure. Preliminary results indicated that hybrid models reconstructed using the proposed method can retain both fundamental 3D volume characteristics and accurate geometric appearance with fine details. The reconstructed hybrid models can also be used to represent targets at different levels of detail according to user and system requirements in different applications.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sen, A

    Purpose: Accuboost treatment planning uses dwell times from a nomogram designed with Monte Carlo calculations for round and D-shaped applicators. A quick dose calculation method has been developed for verification of the HDR brachytherapy dose as a second check. Methods: Accuboost breast treatment uses several round and D-shaped applicators non-invasively with an Ir-192 source from an HDR brachytherapy afterloader, after the breast is compressed in a mammographic unit for localization. The breast thickness, source activity, prescription dose, and applicator size are entered into a nomogram spreadsheet, which gives the dwell times to be manually entered into the delivery computer. Approximating the HDR Ir-192 source as a point source, and knowing the geometry of the round and D-shaped applicators, the distances from the source positions to the midpoint of the central plane are calculated. Using the exposure constant of Ir-192 and human tissue as the medium, the dose at a point is calculated as D(cGy) = 1.254 × A × t / R², where A is the activity in Ci, t is the dwell time in s, and R is the distance in cm. The dose from each dwell position is added to get the total dose. Results: Each fraction is delivered in two compressions: cranio-caudal and medial-lateral. A typical APBI treatment in 10 fractions requires 20 compressions. For a patient treated with D45 applicators and an average thickness of 5.22 cm, this calculation was 1.63% higher than the prescription. For another patient, using D53 applicators in the CC direction and 7 cm SDO applicators in the ML direction, this calculation was 1.31% lower than the prescription. Conclusion: This is a simple and quick method to double-check the dose on the central plane for Accuboost treatment.
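
    Since the abstract gives the full formula, the second check is easy to reproduce in a few lines; the dwell geometry below is illustrative, not an actual applicator layout.

        import numpy as np

        # Point-source second check following the abstract's relation
        #   D(cGy) = 1.254 * A * t / R^2   (A in Ci, t in s, R in cm).

        def central_plane_dose(dwell_xyz, dwell_times, activity_ci, point):
            """Sum point-source doses from each dwell position to `point` (cm)."""
            r2 = ((np.asarray(dwell_xyz) - np.asarray(point)) ** 2).sum(axis=1)
            return np.sum(1.254 * activity_ci * np.asarray(dwell_times) / r2)

        # Example: 8 dwells on a 4.5 cm diameter ring, 2.61 cm above the midplane point
        angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
        ring = np.c_[2.25 * np.cos(angles), 2.25 * np.sin(angles), np.full(8, 2.61)]
        print(central_plane_dose(ring, np.full(8, 12.0), activity_ci=5.0,
                                 point=(0.0, 0.0, 0.0)), "cGy")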

  9. Machine learning in infrared object classification - an all-sky selection of YSO candidates

    NASA Astrophysics Data System (ADS)

    Marton, Gabor; Zahorecz, Sarolta; Toth, L. Viktor; Magnus McGehee, Peregrine; Kun, Maria

    2015-08-01

    Object classification is a fundamental and challenging problem in the era of big data. I will discuss up-to-date methods and their application to the classification of infrared point sources. We analysed the ALLWISE catalogue, the most recent public source catalogue of the Wide-field Infrared Survey Explorer (WISE), to compile a reliable list of Young Stellar Object (YSO) candidates. We tested and compared classical as well as up-to-date statistical methods to discriminate source types such as extragalactic objects, evolved stars, main sequence stars, objects related to the interstellar medium, and YSO candidates, using their mid-IR WISE properties and associated near-IR 2MASS data. For this particular classification problem the Support Vector Machine (SVM), a class of supervised learning algorithms, turned out to be the best tool. As a result we classify Class I and II YSOs with >90% accuracy, while the fraction of contaminating extragalactic objects remains well below 1%, based on the number of known objects listed in the SIMBAD and VizieR databases. We compare our results to other classification schemes from the literature and show that the SVM outperforms methods that apply linear cuts in colour-colour and colour-magnitude space. Our homogeneous YSO candidate catalogue can serve as an excellent pathfinder for future detailed observations of individual objects, and as a starting point for statistical studies that aim to add pieces to the big picture of star formation theory.
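
    The SVM step can be sketched with standard tooling; the feature set (colour indices from WISE and 2MASS), the RBF kernel, and the random stand-in data below are assumptions for illustration, not the catalogue pipeline itself.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(3)
        X_train = rng.normal(size=(500, 6))   # e.g. W1-W2, W2-W3, J-H, ... (stand-ins)
        y_train = rng.integers(0, 2, 500)     # 1 = YSO candidate, 0 = other

        # Scale features, then fit an RBF-kernel SVM classifier
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
        clf.fit(X_train, y_train)
        X_new = rng.normal(size=(10, 6))
        print(clf.predict(X_new))             # predicted classes for new sources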

  10. General point dipole theory for periodic metasurfaces: magnetoelectric scattering lattices coupled to planar photonic structures.

    PubMed

    Chen, Yuntian; Zhang, Yan; Femius Koenderink, A

    2017-09-04

    We study semi-analytically the light emission and absorption properties of arbitrary stratified photonic structures with embedded two-dimensional magnetoelectric point scattering lattices, as used in recent plasmon-enhanced LEDs and solar cells. By employing dyadic Green's function for the layered structure in combination with the Ewald lattice summation to deal with the particle lattice, we develop an efficient method to study the coupling between planar 2D scattering lattices of plasmonic, or metamaterial point particles, coupled to layered structures. Using the 'array scanning method' we deal with localized sources. Firstly, we apply our method to light emission enhancement of dipole emitters in slab waveguides, mediated by plasmonic lattices. We benchmark the array scanning method against a reciprocity-based approach to find that the calculated radiative rate enhancement in k-space below the light cone shows excellent agreement. Secondly, we apply our method to study absorption-enhancement in thin-film solar cells mediated by periodic Ag nanoparticle arrays. Lastly, we study the emission distribution in k-space of a coupled waveguide-lattice system. In particular, we explore the dark mode excitation on the plasmonic lattice using the so-called array scanning method. Our method could be useful for simulating a broad range of complex nanophotonic structures, i.e., metasurfaces, plasmon-enhanced light emitting systems and photovoltaics.

  11. Identifying equivalent sound sources from aeroacoustic simulations using a numerical phased array

    NASA Astrophysics Data System (ADS)

    Pignier, Nicolas J.; O'Reilly, Ciarán J.; Boij, Susann

    2017-04-01

    An application of phased array methods to numerical data is presented, aimed at identifying equivalent flow sound sources from aeroacoustic simulations. Based on phased array data extracted from compressible flow simulations, sound source strengths are computed on a set of points in the source region using phased array techniques assuming monopole propagation. Two phased array techniques are used to compute the source strengths: an approach using a Moore-Penrose pseudo-inverse and a beamforming approach using dual linear programming (dual-LP) deconvolution. The first approach gives a model of correlated sources for the acoustic field generated from the flow expressed in a matrix of cross- and auto-power spectral values, whereas the second approach results in a model of uncorrelated sources expressed in a vector of auto-power spectral values. The accuracy of the equivalent source model is estimated by computing the acoustic spectrum at a far-field observer. The approach is tested first on an analytical case with known point sources. It is then applied to the example of the flow around a submerged air inlet. The far-field spectra obtained from the source models for two different flow conditions are in good agreement with the spectra obtained with a Ffowcs Williams-Hawkings integral, showing the accuracy of the source model from the observer's standpoint. Various configurations for the phased array and for the sources are used. The dual-LP beamforming approach shows better robustness to changes in the number of probes and sources than the pseudo-inverse approach. The good results obtained with this simulation case demonstrate the potential of the phased array approach as a modelling tool for aeroacoustic simulations.
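
    The pseudo-inverse variant of the method has a compact core: build a transfer matrix of free-space monopole Green's functions from candidate source points to the array probes, then recover source strengths as the least-squares solution. The sketch below demonstrates this on synthetic data; geometry and frequency are invented for the example.

        import numpy as np

        def greens_matrix(mics, sources, k):
            """G[m, n] = exp(-i k r_mn) / (4 pi r_mn), monopole propagation."""
            r = np.linalg.norm(mics[:, None, :] - sources[None, :, :], axis=2)
            return np.exp(-1j * k * r) / (4 * np.pi * r)

        rng = np.random.default_rng(4)
        mics = rng.uniform(-1, 1, (32, 3)) + np.array([0, 0, 2.0])   # array plane
        sources = np.array([[0.0, 0.0, 0.0], [0.3, 0.1, 0.0]])       # candidate points
        k = 2 * np.pi * 2000 / 343.0                                 # 2 kHz in air
        G = greens_matrix(mics, sources, k)
        s_true = np.array([1.0 + 0j, 0.5j])
        p = G @ s_true                            # simulated array pressures
        s_est = np.linalg.pinv(G) @ p             # Moore-Penrose recovery
        print(np.abs(s_est))                      # should match |s_true|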

  12. Pollutant source identification model for water pollution incidents in small straight rivers based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Shou-ping; Xin, Xiao-kang

    2017-07-01

    Identification of pollutant sources in river pollution incidents is an important and difficult task in emergency rescue, and intelligent optimization methods can effectively compensate for the weaknesses of traditional methods. An intelligent model for pollutant source identification has been established, using the basic genetic algorithm (BGA) as the optimization search tool and an analytic solution of the one-dimensional unsteady water quality equation to construct the objective function. Experimental tests show that the identification model is effective and efficient: the model can accurately determine the pollutant amounts and positions for either a single pollution source or multiple sources. In particular, when the population size of the BGA is set to 10, the computed results agree well with the analytic results for single-source amount and position identification, with relative errors of no more than 5%. For cases with multi-point sources and multiple variables, there are some errors in the computed results because many possible combinations of pollution sources exist; but, with the help of previous experience to narrow the search scope, the relative errors of the identification results are less than 5%, which proves that the established source identification model can be used to direct emergency responses.
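
    A sketch of the objective-function construction is given below: the analytic solution of the one-dimensional advection-dispersion(-decay) equation for an instantaneous point release, and a squared-residual misfit over observations that a genetic algorithm would minimize. The river parameters are illustrative, and the decay term is an assumption (the paper's exact equation form may differ).

        import numpy as np

        def c_analytic(x, t, mass, x0, u=0.5, D=5.0, k=1e-5, area=20.0):
            """Concentration at (x, t) from `mass` released at x0 at t = 0:
            1-D advection-dispersion with first-order decay k, cross-section
            `area`, velocity u, dispersion coefficient D."""
            return (mass / (area * np.sqrt(4 * np.pi * D * t))
                    * np.exp(-(x - x0 - u * t) ** 2 / (4 * D * t) - k * t))

        def objective(params, obs_x, obs_t, obs_c):
            """Sum of squared residuals for a candidate (mass, x0); this is
            the quantity a genetic algorithm would minimize."""
            mass, x0 = params
            model = c_analytic(obs_x, obs_t, mass, x0)
            return np.sum((model - obs_c) ** 2)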

  13. DISCRIMINATION OF NATURAL AND NON-POINT SOURCE EFFECTS FROM ANTHROPOGENIC EFFECTS AS REFLECTED IN BENTHIC STATE IN THREE ESTUARIES IN NEW ENGLAND

    EPA Science Inventory

    In order to protect estuarine resources, managers must be able to discern the effects of natural conditions and non-point source effects, and separate them from multiple anthropogenic point source effects. Our approach was to evaluate benthic community assemblages, riverine nitro...

  14. 40 CFR 409.13 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing... a point source where the sugar beet processing capacity of the point source does not exceed 1090 kkg... results, in whole or in part, from barometric condensing operations and any other beet sugar processing...

  15. 40 CFR 409.13 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing... a point source where the sugar beet processing capacity of the point source does not exceed 1090 kkg... results, in whole or in part, from barometric condensing operations and any other beet sugar processing...

  16. 40 CFR 409.13 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing... a point source where the sugar beet processing capacity of the point source does not exceed 1090 kkg... results, in whole or in part, from barometric condensing operations and any other beet sugar processing...

  17. 40 CFR 409.13 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing... a point source where the sugar beet processing capacity of the point source does not exceed 1090 kkg... results, in whole or in part, from barometric condensing operations and any other beet sugar processing...

  18. 40 CFR 409.13 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing... a point source where the sugar beet processing capacity of the point source does not exceed 1090 kkg... results, in whole or in part, from barometric condensing operations and any other beet sugar processing...

  19. A NEW RESULT ON THE ORIGIN OF THE EXTRAGALACTIC GAMMA-RAY BACKGROUND

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou Ming; Wang Jiancheng, E-mail: mzhou@ynao.ac.cn

    2013-06-01

    In this paper, we repeatedly use the method of image stacking to study the origin of the extragalactic gamma-ray background (EGB) at GeV bands, and find that the Faint Images of the Radio Sky at Twenty centimeters (FIRST) sources undetected by the Large Area Telescope on the Fermi Gamma-ray Space Telescope can contribute about (56 {+-} 6)% of the EGB. Because FIRST is a flux-limited sample of radio sources with incompleteness at the faint limit, we consider that point sources, including blazars, non-blazar active galactic nuclei, and starburst galaxies, could produce a much larger fraction of the EGB.

  20. Recent skyshine calculations at Jefferson Lab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Degtyarenko, P.

    1997-12-01

    New calculations of the skyshine dose distribution of neutrons and secondary photons have been performed at Jefferson Lab using the Monte Carlo method. The dependence of the dose on neutron energy, distance to the neutron source, polar angle of a source neutron, and azimuthal angle between the observation point and the momentum direction of a source neutron has been studied. The azimuthally asymmetric term in the skyshine dose distribution is shown to be important in dose calculations around high-energy accelerator facilities. A parameterization formula and a corresponding computer code have been developed which can be used for detailed calculations of skyshine dose maps.
