Sample records for source distribution functions

  1. Studies of the Intrinsic Complexities of Magnetotail Ion Distributions: Theory and Observations

    NASA Technical Reports Server (NTRS)

    Ashour-Abdalla, Maha

    1998-01-01

    This year we have studied the relationship between the structure seen in measured distribution functions and the detailed magnetospheric configuration. Results from our recent studies using time-dependent large-scale kinetic (LSK) calculations are used to infer the sources of the ions in the velocity distribution functions measured by a single spacecraft (Geotail). Our results strongly indicate that the different ion sources and acceleration mechanisms producing a measured distribution function can explain this structure. Moreover, individual structures within distribution functions were traced back to single sources. We also confirmed the fractal nature of ion distributions.

  2. Dosimetric characterizations of GZP6 60Co high dose rate brachytherapy sources: application of superimposition method

    PubMed Central

    Bahreyni Toossi, Mohammad Taghi; Ghorbani, Mahdi; Mowlavi, Ali Asghar; Meigooni, Ali Soleimani

    2012-01-01

Background: Dosimetric characteristics of a high dose rate (HDR) GZP6 Co-60 brachytherapy source have been evaluated following the American Association of Physicists in Medicine Task Group 43U1 (AAPM TG-43U1) recommendations for their clinical applications. Materials and methods: MCNP-4C and MCNPX Monte Carlo codes were utilized to calculate the dose rate constant, two-dimensional (2D) dose distribution, radial dose function and 2D anisotropy function of the source. These parameters are compared with the available data for the Ralstron 60Co and microSelectron 192Ir sources. In addition, a superimposition method was developed to extend the results obtained for GZP6 source No. 3 to the other GZP6 sources. Results: The simulated value of the dose rate constant for the GZP6 source was 1.104±0.03 cGy h-1 U-1. The graphical and tabulated radial dose function and 2D anisotropy function of this source are presented here. The results of these investigations show that the dosimetric parameters of the GZP6 source are comparable to those of the Ralstron source. While the dose rate constants for the two 60Co sources are similar to that of the microSelectron 192Ir source, there are differences between the radial dose functions and anisotropy functions. The radial dose function of the 192Ir source is less steep than those of both 60Co source models. In addition, the 60Co sources show a more isotropic dose distribution than the 192Ir source. Conclusions: The superimposition method is applicable to produce dose distributions for other source arrangements from the dose distribution of a single source. The calculated dosimetric quantities of this new source can be introduced as input data to the GZP6 treatment planning system (TPS) and used to validate the performance of the TPS. PMID:23077455
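As a concrete illustration of the TG-43U1 formalism this record follows, the sketch below evaluates the point-source approximation D(r) = S_K · Λ · (r0/r)² · g(r) from a tabulated radial dose function. The g(r) table and the unit anisotropy factor are illustrative placeholders, not measured GZP6 data.

```python
from bisect import bisect_left

def interp(xs, ys, x):
    """Piecewise-linear interpolation with flat extrapolation at the ends."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    i = bisect_left(xs, x)
    t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
    return ys[i - 1] + t * (ys[i] - ys[i - 1])

# Illustrative placeholder table, NOT measured GZP6 data.
R_TAB = [0.5, 1.0, 2.0, 3.0, 5.0]        # radial distance r (cm)
G_TAB = [1.02, 1.00, 0.96, 0.92, 0.83]   # radial dose function g(r)

def dose_rate(sk, dose_rate_const, r):
    """TG-43 point-source approximation with unit anisotropy factor:
    D(r) = S_K * Lambda * (r0 / r)**2 * g(r), with r0 = 1 cm."""
    r0 = 1.0
    return sk * dose_rate_const * (r0 / r) ** 2 * interp(R_TAB, G_TAB, r)

# At the reference point (r = 1 cm, g = 1) the dose rate is S_K * Lambda:
d_ref = dose_rate(sk=10.0, dose_rate_const=1.104, r=1.0)  # 11.04 cGy/h
```

Real TG-43 calculations additionally use the geometry function G(r, θ) and the 2D anisotropy function F(r, θ); both are set to their reference values here to keep the sketch minimal.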

  3. Statistical measurement of the gamma-ray source-count distribution as a function of energy

    NASA Astrophysics Data System (ADS)

    Zechlin, H.-S.; Cuoco, A.; Donato, F.; Fornengo, N.; Regis, M.

    2017-01-01

Photon count statistics have recently been shown to provide a sensitive observable for characterizing gamma-ray source populations and for measuring the composition of the gamma-ray sky. In this work, we generalize the use of the standard 1-point probability distribution function (1pPDF) to decompose the high-latitude gamma-ray emission observed with Fermi-LAT into: (i) point-source contributions, (ii) the Galactic foreground contribution, and (iii) a diffuse isotropic background contribution. We analyze gamma-ray data in five adjacent energy bands between 1 and 171 GeV. We measure the source-count distribution dN/dS as a function of energy, and demonstrate that our results extend current measurements from source catalogs to the regime of so-far undetected sources. Our method improves the sensitivity for resolving point-source populations by about one order of magnitude in flux. The dN/dS distribution as a function of flux is found to be compatible with a broken power law. We derive upper limits on further possible breaks as well as the angular power of unresolved sources. We discuss the composition of the gamma-ray sky and the capabilities of the 1pPDF method.
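The power-law form of dN/dS discussed above can be illustrated with a short sketch: fluxes are drawn from dN/dS ∝ S^(−β) by inverse-CDF sampling, and the log-log slope of the cumulative counts N(>S) recovers 1 − β. The flux range and β = 1.95 are illustrative values only, not a fit to Fermi-LAT data.

```python
import math
import random

random.seed(1)

def sample_power_law(beta, s_min, s_max, n):
    """Inverse-CDF sampling of fluxes S with dN/dS proportional to S**(-beta)."""
    a = 1.0 - beta
    lo, hi = s_min ** a, s_max ** a
    return [(lo + random.random() * (hi - lo)) ** (1.0 / a) for _ in range(n)]

# Illustrative values only: beta = 1.95, flux range 1e-12 to 1e-8.
fluxes = sample_power_law(beta=1.95, s_min=1e-12, s_max=1e-8, n=50000)

# The cumulative count N(>S) should scale as S**(1 - beta), i.e. a log-log
# slope of about -0.95 for this beta.
s1, s2 = 1e-11, 1e-10
n1 = sum(f > s1 for f in fluxes)
n2 = sum(f > s2 for f in fluxes)
slope = (math.log10(n2) - math.log10(n1)) / (math.log10(s2) - math.log10(s1))
```

A broken power law would simply stitch two such segments at a break flux; the 1pPDF analysis infers the same shape statistically from pixel count histograms rather than from resolved sources.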

  4. 14 CFR 23.1310 - Power source capacity and distribution.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Power source capacity and distribution. 23... Equipment General § 23.1310 Power source capacity and distribution. (a) Each installation whose functioning... power supply system, distribution system, or other utilization system. (b) In determining compliance...

  5. 14 CFR 23.1310 - Power source capacity and distribution.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Power source capacity and distribution. 23... Equipment General § 23.1310 Power source capacity and distribution. (a) Each installation whose functioning... power supply system, distribution system, or other utilization system. (b) In determining compliance...

  6. 14 CFR 23.1310 - Power source capacity and distribution.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Power source capacity and distribution. 23... Equipment General § 23.1310 Power source capacity and distribution. (a) Each installation whose functioning... power supply system, distribution system, or other utilization system. (b) In determining compliance...

  7. Towards Full-Waveform Ambient Noise Inversion

    NASA Astrophysics Data System (ADS)

    Sager, Korbinian; Ermert, Laura; Afanasiev, Michael; Boehm, Christian; Fichtner, Andreas

    2017-04-01

Noise tomography usually works under the assumption that the inter-station ambient noise correlation is equal to a scaled version of the Green function between the two receivers. This assumption, however, is only met under specific conditions, e.g. wavefield diffusivity and equipartitioning, or the isotropic distribution of both mono- and dipolar uncorrelated noise sources. These assumptions are typically not satisfied in the Earth. This inconsistency inhibits the exploitation of the full waveform information contained in noise correlations in order to constrain Earth structure and noise generation. To overcome this limitation, we attempt to develop a method that consistently accounts for the distribution of noise sources, 3D heterogeneous Earth structure and the full seismic wave propagation physics. This is intended to improve the resolution of tomographic images, to refine the noise source distribution, and thereby to contribute to a better understanding of both Earth structure and noise generation. First, we develop an inversion strategy based on a 2D finite-difference code using adjoint techniques. To enable a joint inversion for noise sources and Earth structure, we investigate the following aspects: (i) the capability of different misfit functionals to image wave speed anomalies and source distribution and (ii) possible source-structure trade-offs, especially to what extent unresolvable structure can be mapped into the inverted noise source distribution and vice versa. In anticipation of real-data applications, we present an extension of the open-source waveform modelling and inversion package Salvus (http://salvus.io). It allows us to compute correlation functions in 3D media with heterogeneous noise sources at the surface and the corresponding sensitivity kernels for the distribution of noise sources and Earth structure.
By studying the effect of noise sources on correlation functions in 3D, we validate the aforementioned inversion strategy and prepare the workflow necessary for the first application of full waveform ambient noise inversion to a global dataset, for which a model for the distribution of noise sources is already available.
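The premise these records start from, that the inter-station correlation approximates a scaled Green function, can be demonstrated with a toy example: for a single uncorrelated noise source, the cross-correlation of two receiver records peaks at the inter-station traveltime. The sample count and the 7-sample delay below are assumed toy parameters, not part of the authors' workflow.

```python
import random

random.seed(0)

def crosscorr(a, b, max_lag):
    """C(lag) = sum_t a[t] * b[t + lag] over the overlapping samples."""
    n = len(a)
    return {lag: sum(a[t] * b[t + lag] for t in range(n) if 0 <= t + lag < n)
            for lag in range(-max_lag, max_lag + 1)}

# Toy setup: receiver A records the noise wavefield, and receiver B records
# the same wavefield delayed by 7 samples (the inter-station traveltime).
n, delay = 2000, 7
src = [random.gauss(0.0, 1.0) for _ in range(n + delay)]
rec_a = src[delay:]   # A "hears" the source early
rec_b = src[:n]       # B hears it `delay` samples later

cc = crosscorr(rec_a, rec_b, max_lag=20)
best_lag = max(cc, key=cc.get)  # the peak recovers the inter-station traveltime
```

With a heterogeneous (non-isotropic) source distribution this simple peak-picking breaks down, which is exactly why the full waveform ambient noise inversion described above treats correlation functions as self-consistent observables instead.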

  8. Towards Full-Waveform Ambient Noise Inversion

    NASA Astrophysics Data System (ADS)

    Sager, K.; Ermert, L. A.; Boehm, C.; Fichtner, A.

    2016-12-01

Noise tomography usually works under the assumption that the inter-station ambient noise correlation is equal to a scaled version of the Green function between the two receivers. This assumption, however, is only met under specific conditions, e.g. wavefield diffusivity and equipartitioning, or the isotropic distribution of both mono- and dipolar uncorrelated noise sources. These assumptions are typically not satisfied in the Earth. This inconsistency inhibits the exploitation of the full waveform information contained in noise correlations in order to constrain Earth structure and noise generation. To overcome this limitation, we attempt to develop a method that consistently accounts for the distribution of noise sources, 3D heterogeneous Earth structure and the full seismic wave propagation physics. This is intended to improve the resolution of tomographic images, to refine noise source locations, and thereby to contribute to a better understanding of noise generation. We introduce an operator-based formulation for the computation of correlation functions and apply the continuous adjoint method, which allows us to compute first and second derivatives of misfit functionals with respect to source distribution and Earth structure efficiently. Based on these developments we design an inversion scheme using a 2D finite-difference code. To enable a joint inversion for noise sources and Earth structure, we investigate the following aspects: (i) the capability of different misfit functionals to image wave speed anomalies and source distribution, and (ii) possible source-structure trade-offs, especially to what extent unresolvable structure can be mapped into the inverted noise source distribution and vice versa. In anticipation of real-data applications, we present an extension of the open-source waveform modelling and inversion package Salvus, which allows us to compute correlation functions in 3D media with heterogeneous noise sources at the surface.

  9. 14 CFR 25.1310 - Power source capacity and distribution.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Power source capacity and distribution. 25... TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: TRANSPORT CATEGORY AIRPLANES Equipment General § 25.1310 Power source capacity and distribution. (a) Each installation whose functioning is required for type...

  10. 14 CFR 25.1310 - Power source capacity and distribution.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Power source capacity and distribution. 25... TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: TRANSPORT CATEGORY AIRPLANES Equipment General § 25.1310 Power source capacity and distribution. (a) Each installation whose functioning is required for type...

  11. 14 CFR 25.1310 - Power source capacity and distribution.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Power source capacity and distribution. 25... TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: TRANSPORT CATEGORY AIRPLANES Equipment General § 25.1310 Power source capacity and distribution. (a) Each installation whose functioning is required for type...

  12. Dependence of Microlensing on Source Size and Lens Mass

    NASA Astrophysics Data System (ADS)

    Congdon, A. B.; Keeton, C. R.

    2007-11-01

In gravitationally lensed quasars, the magnification of an image depends on the configuration of stars in the lensing galaxy. We study the statistics of the magnification distribution for random star fields. The width of the distribution characterizes the amount by which the observed magnification is likely to differ from models in which the mass is smoothly distributed. We use numerical simulations to explore how the width of the magnification distribution depends on the mass function of stars, and on the size of the source quasar. We then propose a semi-analytic model to describe the distribution width for different source sizes and stellar mass functions.

  13. Naima: a Python package for inference of particle distribution properties from nonthermal spectra

    NASA Astrophysics Data System (ADS)

    Zabalza, V.

    2015-07-01

The ultimate goal of the observation of nonthermal emission from astrophysical sources is to understand the underlying particle acceleration and evolution processes, yet few tools are publicly available to infer the particle distribution properties from the observed photon spectra, from X-rays to VHE gamma rays. Here I present naima, an open source Python package that provides models for nonthermal radiative emission from homogeneous distributions of relativistic electrons and protons. Contributions from synchrotron, inverse Compton, nonthermal bremsstrahlung, and neutral-pion decay can be computed for a series of functional shapes of the particle energy distributions, with the possibility of using user-defined particle distribution functions. In addition, naima provides a set of functions that allow these models to be used to fit observed nonthermal spectra through an MCMC procedure, obtaining probability distribution functions for the particle distribution parameters. Here I present the models and methods available in naima and an example of their application to the understanding of a Galactic nonthermal source. naima's documentation, including how to install the package, is available at http://naima.readthedocs.org.

  14. Standardization of Broadband UV Measurements for 365 nm LED Sources

    PubMed Central

    Eppeldauer, George P.

    2012-01-01

Broadband UV measurements are evaluated when UV-A irradiance meters measure optical radiation from 365 nm UV sources. The CIE standardized rectangular-shape UV-A function can be realized only with large spectral mismatch errors. The spectral power distribution of the 365 nm excitation source is not standardized. Accordingly, the readings made with different types of UV meters, even if they measure the same UV source, can be very different. Available UV detectors and UV meters were measured and evaluated for spectral responsivity. The spectral products of the source distribution and the meters' spectral responsivities were calculated for different combinations to estimate broadband signal-measurement errors. Standardization of both the UV source distribution and the meter spectral responsivity is recommended here to perform uniform broadband measurements with low uncertainty. It is shown which spectral responsivity function(s) new and existing UV irradiance meters need in order to perform low-uncertainty broadband 365 nm measurements. PMID:26900516
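A minimal sketch of the spectral-mismatch problem described above: the broadband reading is the wavelength integral of the source spectral power distribution (SPD) times the meter's spectral responsivity, so two meters with different responsivities read the same 365 nm LED differently. The Gaussian SPD and the two responsivity curves below are assumptions for illustration, not standardized functions.

```python
import math

def gauss(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

wl = list(range(300, 421))  # wavelength grid, 1 nm steps

# Assumed Gaussian SPD for a 365 nm LED (FWHM ~ 10 nm); not a standard curve.
spd = [gauss(w, 365.0, 10.0 / 2.3548) for w in wl]

# Two meter responsivities: a CIE-style rectangular UV-A window (315-400 nm)
# and a hypothetical meter peaked at 365 nm.
resp_rect = [1.0 if 315 <= w <= 400 else 0.0 for w in wl]
resp_peak = [gauss(w, 365.0, 15.0) for w in wl]

def reading(spd, resp):
    """Broadband signal: 1 nm Riemann sum of E(lambda) * s(lambda)."""
    return sum(e * r for e, r in zip(spd, resp))

r_rect = reading(spd, resp_rect)
r_peak = reading(spd, resp_peak)
mismatch = r_rect / r_peak  # same source, different meters -> different readings
```

Standardizing both curves, as the record recommends, pins this ratio down so that readings from different instruments become comparable.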

  15. Light source distribution and scattering phase function influence light transport in diffuse multi-layered media

    NASA Astrophysics Data System (ADS)

    Vaudelle, Fabrice; L'Huillier, Jean-Pierre; Askoura, Mohamed Lamine

    2017-06-01

Red and near-infrared light is often used as a diagnostic and imaging probe for highly scattering media such as biological tissues, fruits and vegetables. Part of the diffusively reflected light carries interesting information related to the tissue subsurface, whereas light recorded at larger distances may probe deeper into the interrogated turbid tissues. However, modelling diffusive events occurring at short source-detector distances requires considering both the distribution of the light sources and the scattering phase functions. In this report, a modified Monte Carlo model is used to compute light transport in curved and multi-layered tissue samples which are covered with a thin and highly diffusing tissue layer. Different light source distributions (ballistic, diffuse or Lambertian) are tested with specific scattering phase functions (modified and unmodified Henyey-Greenstein, Gegenbauer and Mie) to compute the amount of backscattered and transmitted light in apple and human skin structures. Comparisons between simulation results and experiments carried out with a multispectral imaging setup confirm the soundness of the theoretical strategy and may explain the role of the skin on light transport in whole and half-cut apples. Other computational results show that a Lambertian source distribution combined with a Henyey-Greenstein phase function provides a higher photon density in the stratum corneum than in the upper dermis layer. Furthermore, it is also shown that the scattering phase function may affect the shape and the magnitude of the Bidirectional Reflectance Distribution Function (BRDF) exhibited at the skin surface.
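The Henyey-Greenstein phase function mentioned above admits a closed-form inverse-CDF sampler, a standard building block of Monte Carlo light-transport codes like the one described; a quick check is that the sampled mean cosine of the scattering angle equals the anisotropy parameter g. The g = 0.9 value is a generic tissue-like assumption.

```python
import random

random.seed(42)

def sample_hg_cos(g):
    """Inverse-CDF draw of cos(theta) from the Henyey-Greenstein phase function."""
    xi = random.random()
    if abs(g) < 1e-6:
        return 2.0 * xi - 1.0  # isotropic limit
    frac = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - frac * frac) / (2.0 * g)

g = 0.9  # assumed, strongly forward-peaked (tissue-like) anisotropy
n = 100000
mean_cos = sum(sample_hg_cos(g) for _ in range(n)) / n
# For Henyey-Greenstein, <cos(theta)> equals g by construction.
```

The modified HG, Gegenbauer and Mie phase functions used in the paper change the sampling formula but plug into a Monte Carlo photon loop in exactly the same way.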

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Purwaningsih, Anik

Dosimetric data for a brachytherapy source should be known before it is used for clinical treatment. The dosimetric data of the Iridium-192 source type H01, manufactured by PRR-BATAN for brachytherapy, are not yet known. The radial dose function and the anisotropic dose distribution are among the primary dosimetric parameters of a brachytherapy source. The dose distribution for the Iridium-192 source type H01 was obtained from the dose calculation formalism recommended in the AAPM TG-43U1 report using the MCNPX 2.6.0 Monte Carlo simulation code. To assess the effect of the cavity in the Iridium-192 type H01 source introduced by the manufacturing process, calculations were also performed for the same source without the cavity. The calculated radial dose function and anisotropic dose distribution for the Iridium-192 source type H01 were compared with those of other Iridium-192 source models.

  17. Statistical Measurement of the Gamma-Ray Source-count Distribution as a Function of Energy

    NASA Astrophysics Data System (ADS)

    Zechlin, Hannes-S.; Cuoco, Alessandro; Donato, Fiorenza; Fornengo, Nicolao; Regis, Marco

    2016-08-01

Statistical properties of photon count maps have recently been proven as a new tool to study the composition of the gamma-ray sky with high precision. We employ the 1-point probability distribution function of six years of Fermi-LAT data to measure the source-count distribution dN/dS and the diffuse components of the high-latitude gamma-ray sky as a function of energy. To that aim, we analyze the gamma-ray emission in five adjacent energy bands between 1 and 171 GeV. It is demonstrated that the source-count distribution as a function of flux is compatible with a broken power law up to energies of ~50 GeV. The index below the break is between 1.95 and 2.0. For higher energies, a simple power law fits the data, with an index of 2.2 (+0.7/-0.3) in the energy band between 50 and 171 GeV. Upper limits on further possible breaks as well as the angular power of unresolved sources are derived. We find that point-source populations probed by this method can explain 83 (+7/-13)% (81 (+52/-19)%) of the extragalactic gamma-ray background between 1.04 and 1.99 GeV (50 and 171 GeV). The method has excellent capabilities for constraining the gamma-ray luminosity function and the spectra of unresolved blazars.

  18. Improved bioluminescence and fluorescence reconstruction algorithms using diffuse optical tomography, normalized data, and optimized selection of the permissible source region

    PubMed Central

    Naser, Mohamed A.; Patterson, Michael S.

    2011-01-01

Reconstruction algorithms are presented for two-step solutions of the bioluminescence tomography (BLT) and the fluorescence tomography (FT) problems. In the first step, a continuous wave (cw) diffuse optical tomography (DOT) algorithm is used to reconstruct the tissue optical properties assuming known anatomical information provided by x-ray computed tomography or other methods. Minimization problems are formed based on L1 norm objective functions, where normalized values for the light fluence rates and the corresponding Green’s functions are used. Then an iterative minimization solution shrinks the permissible regions where the sources are allowed by selecting points with higher probability to contribute to the source distribution. Throughout this process the permissible region shrinks from the entire object to just a few points. The optimum reconstructed bioluminescence and fluorescence distributions are chosen to be the results of the iteration corresponding to the permissible region where the objective function has its global minimum. This provides efficient BLT and FT reconstruction algorithms without the need for a priori information about the bioluminescence sources or the fluorophore concentration. Multiple small sources and large distributed sources can be reconstructed with good accuracy for the location and the total source power for BLT and the total number of fluorophore molecules for the FT. For non-uniform distributed sources, the size and magnitude become degenerate due to the degrees of freedom available for possible solutions. However, increasing the number of data points by increasing the number of excitation sources can improve the accuracy of reconstruction for non-uniform fluorophore distributions. PMID:21326647

  19. Full Waveform Inversion Using Student's t Distribution: a Numerical Study for Elastic Waveform Inversion and Simultaneous-Source Method

    NASA Astrophysics Data System (ADS)

    Jeong, Woodon; Kang, Minji; Kim, Shinwoong; Min, Dong-Joo; Kim, Won-Ki

    2015-06-01

Seismic full waveform inversion (FWI) has primarily been based on a least-squares optimization problem for data residuals. However, the least-squares objective function can suffer from sensitivity to noise. There have been numerous studies to enhance the robustness of FWI by using robust objective functions, such as l1-norm-based objective functions. However, the l1-norm can suffer from a singularity problem when the residual wavefield is very close to zero. Recently, Student's t distribution has been applied to acoustic FWI to give reasonable results for noisy data. Student's t distribution has an overdispersed density function compared with the normal distribution, and is thus useful for data with outliers. In this study, we investigate the feasibility of Student's t distribution for elastic FWI by comparing its basic properties with those of the l2-norm and l1-norm objective functions and by applying the three methods to noisy data. Our experiments show that the l2-norm is sensitive to noise, whereas the l1-norm and Student's t distribution objective functions give relatively stable and reasonable results for noisy data. When noise patterns are complicated, i.e., due to a combination of missing traces, unexpected outliers, and random noise, FWI based on Student's t distribution gives better results than l1- and l2-norm FWI. We also examine the application of simultaneous-source methods to acoustic FWI based on Student's t distribution. Computing the expectation of the coefficients of the gradient and crosstalk noise terms and plotting the signal-to-noise ratio with iteration, we were able to confirm that crosstalk noise is suppressed as the iteration progresses, even when simultaneous-source FWI is combined with Student's t distribution.
From our experiments, we conclude that FWI based on Student's t distribution can retrieve subsurface material properties with less distortion from noise than l1- and l2-norm FWI, and the simultaneous-source method can be adopted to improve the computational efficiency of FWI based on Student's t distribution.
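The robustness argument can be made concrete: the l2 misfit grows quadratically with a residual, while the Student's t negative log-likelihood grows only logarithmically, so a single outlier trace inflates the former far more than the latter. The residual values, ν = 2 and scale s = 1 below are arbitrary illustrative choices, not the paper's settings.

```python
import math

def l2_misfit(residuals):
    """Least-squares objective: grows quadratically with each residual."""
    return 0.5 * sum(r * r for r in residuals)

def student_t_misfit(residuals, nu=2.0, s=1.0):
    """Negative log-likelihood under a Student's t distribution (constant
    terms dropped): grows only logarithmically for large residuals."""
    return sum(0.5 * (nu + 1.0) * math.log(1.0 + (r / s) ** 2 / nu)
               for r in residuals)

clean = [0.1, -0.2, 0.15, -0.05]   # illustrative residuals
noisy = clean + [10.0]             # one outlier trace

l2_blowup = l2_misfit(noisy) / l2_misfit(clean)
t_blowup = student_t_misfit(noisy) / student_t_misfit(clean)
# The outlier inflates the l2 objective far more than the Student's t one,
# which is why the t-based FWI gradient is less distorted by bad traces.
```

The same logarithmic growth also bounds the influence (derivative) of any single residual on the gradient, which is the mechanism behind the stable inversions reported above.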

  20. Analysis and attenuation of artifacts caused by spatially and temporally correlated noise sources in Green's function estimates

    NASA Astrophysics Data System (ADS)

    Martin, E. R.; Dou, S.; Lindsey, N.; Chang, J. P.; Biondi, B. C.; Ajo Franklin, J. B.; Wagner, A. M.; Bjella, K.; Daley, T. M.; Freifeld, B. M.; Robertson, M.; Ulrich, C.; Williams, E. F.

    2016-12-01

Localized strong sources of noise in an array have been shown to cause artifacts in Green's function estimates obtained via cross-correlation. Their effect is often reduced through the use of cross-coherence. Beyond independent localized sources, temporally or spatially correlated sources of noise frequently occur in practice but violate basic assumptions of much of the theory behind ambient noise Green's function retrieval. These correlated noise sources can occur in urban environments due to transportation infrastructure, or in areas around industrial operations such as pumps running at CO2 sequestration sites or oil and gas drilling sites. Better understanding of these artifacts should help us develop and justify methods for their automatic removal from Green's function estimates. We derive the expected artifacts in cross-correlations for several distributions of correlated noise sources, including point sources that are exact time-lagged repeats of each other and sources Gaussian-distributed in space and time with exponentially decaying covariance. Assuming the noise distribution stays stationary over time, the artifacts become more coherent as more ambient noise is included in the Green's function estimates. We support our results with simple computational models. We observed these artifacts in Green's function estimates from a 2015 ambient noise study in Fairbanks, AK, where a trenched distributed acoustic sensing (DAS) array was deployed to collect ambient noise alongside a road with the goal of developing a permafrost thaw monitoring system. We found that joints in the road repeatedly being hit by cars travelling at roughly the speed limit led to artifacts similar to those expected when several points are time-lagged copies of each other. We also show test results of attenuating the effects of these sources during time-lapse monitoring of an active thaw test at the same location with noise detected by a 2D trenched DAS array.
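The artifact mechanism described above (sources that are time-lagged copies of each other, like cars hitting successive road joints) can be reproduced in a toy cross-correlation: besides the true inter-station lag d, spurious peaks appear at d ± L, where L is the repeat lag. All numbers below are assumed toy parameters, not values from the Fairbanks deployment.

```python
import random

random.seed(3)

def xcorr(a, b, max_lag):
    """Cross-correlation C(lag) = sum_t a[t] * b[t + lag]."""
    n = len(a)
    return {lag: sum(a[t] * b[t + lag] for t in range(n) if 0 <= t + lag < n)
            for lag in range(-max_lag, max_lag + 1)}

n, d, lag_rep, pad = 4000, 5, 12, 40  # toy inter-station delay d, repeat lag

s = [random.gauss(0.0, 1.0) for _ in range(n + pad)]

# Each receiver sees the source plus a time-lagged copy of it (e.g. cars
# hitting two road joints lag_rep samples apart); B is A delayed by d samples.
rec_a = [s[t + pad] + s[t + pad - lag_rep] for t in range(n)]
rec_b = [s[t + pad - d] + s[t + pad - d - lag_rep] for t in range(n)]

cc = xcorr(rec_a, rec_b, max_lag=30)
peaks = sorted(cc, key=cc.get, reverse=True)[:3]
# Strongest peak at the true lag d = 5; artifact peaks at d +/- lag_rep,
# i.e. at lags 17 and -7, which do not correspond to any physical arrival.
```

Because the artifact peaks share the coherence of the true peak, simply stacking more data does not remove them, consistent with the record's observation that they grow more coherent as more noise is included.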

  1. Towards full waveform ambient noise inversion

    NASA Astrophysics Data System (ADS)

    Sager, Korbinian; Ermert, Laura; Boehm, Christian; Fichtner, Andreas

    2018-01-01

    In this work we investigate fundamentals of a method—referred to as full waveform ambient noise inversion—that improves the resolution of tomographic images by extracting waveform information from interstation correlation functions that cannot be used without knowing the distribution of noise sources. The fundamental idea is to drop the principle of Green function retrieval and to establish correlation functions as self-consistent observables in seismology. This involves the following steps: (1) We introduce an operator-based formulation of the forward problem of computing correlation functions. It is valid for arbitrary distributions of noise sources in both space and frequency, and for any type of medium, including 3-D elastic, heterogeneous and attenuating media. In addition, the formulation allows us to keep the derivations independent of time and frequency domain and it facilitates the application of adjoint techniques, which we use to derive efficient expressions to compute first and also second derivatives. The latter are essential for a resolution analysis that accounts for intra- and interparameter trade-offs. (2) In a forward modelling study we investigate the effect of noise sources and structure on different observables. Traveltimes are hardly affected by heterogeneous noise source distributions. On the other hand, the amplitude asymmetry of correlations is at least to first order insensitive to unmodelled Earth structure. Energy and waveform differences are sensitive to both structure and the distribution of noise sources. (3) We design and implement an appropriate inversion scheme, where the extraction of waveform information is successively increased. We demonstrate that full waveform ambient noise inversion has the potential to go beyond ambient noise tomography based on Green function retrieval and to refine noise source location, which is essential for a better understanding of noise generation. 
Inherent trade-offs between source and structure are quantified using Hessian-vector products.

  2. Renormalizability of quasiparton distribution functions

    DOE PAGES

    Ishikawa, Tomomi; Ma, Yan-Qing; Qiu, Jian-Wei; ...

    2017-11-21

Quasi-parton distribution functions have received a lot of attention in both the perturbative QCD and lattice QCD communities in recent years because they not only carry good information on the parton distribution functions, but can also be evaluated by lattice QCD simulations. However, unlike the parton distribution functions, the quasi-parton distribution functions have perturbative ultraviolet power divergences because they are not defined by twist-2 operators. In this article, we identify all sources of ultraviolet divergences for the quasi-parton distribution functions in coordinate space, and demonstrate that the power divergences, as well as all logarithmic divergences, can be renormalized multiplicatively to all orders in QCD perturbation theory.

  3. Renormalizability of quasiparton distribution functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ishikawa, Tomomi; Ma, Yan-Qing; Qiu, Jian-Wei

Quasi-parton distribution functions have received a lot of attention in both the perturbative QCD and lattice QCD communities in recent years because they not only carry good information on the parton distribution functions, but can also be evaluated by lattice QCD simulations. However, unlike the parton distribution functions, the quasi-parton distribution functions have perturbative ultraviolet power divergences because they are not defined by twist-2 operators. In this article, we identify all sources of ultraviolet divergences for the quasi-parton distribution functions in coordinate space, and demonstrate that the power divergences, as well as all logarithmic divergences, can be renormalized multiplicatively to all orders in QCD perturbation theory.

  4. Statistical measurement of the gamma-ray source-count distribution as a function of energy

    DOE PAGES

    Zechlin, Hannes-S.; Cuoco, Alessandro; Donato, Fiorenza; ...

    2016-07-29

Statistical properties of photon count maps have recently been proven as a new tool to study the composition of the gamma-ray sky with high precision. Here, we employ the 1-point probability distribution function of six years of Fermi-LAT data to measure the source-count distribution dN/dS and the diffuse components of the high-latitude gamma-ray sky as a function of energy. To that aim, we analyze the gamma-ray emission in five adjacent energy bands between 1 and 171 GeV. It is demonstrated that the source-count distribution as a function of flux is compatible with a broken power law up to energies of ~50 GeV. Furthermore, the index below the break is between 1.95 and 2.0. For higher energies, a simple power law fits the data, with an index of 2.2 (+0.7/-0.3) in the energy band between 50 and 171 GeV. Upper limits on further possible breaks as well as the angular power of unresolved sources are derived. We find that point-source populations probed by this method can explain 83 (+7/-13)% (81 (+52/-19)%) of the extragalactic gamma-ray background between 1.04 and 1.99 GeV (50 and 171 GeV). Our method has excellent capabilities for constraining the gamma-ray luminosity function and the spectra of unresolved blazars.

  5. Statistical measurement of the gamma-ray source-count distribution as a function of energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zechlin, Hannes-S.; Cuoco, Alessandro; Donato, Fiorenza

Statistical properties of photon count maps have recently been proven as a new tool to study the composition of the gamma-ray sky with high precision. Here, we employ the 1-point probability distribution function of six years of Fermi-LAT data to measure the source-count distribution dN/dS and the diffuse components of the high-latitude gamma-ray sky as a function of energy. To that aim, we analyze the gamma-ray emission in five adjacent energy bands between 1 and 171 GeV. It is demonstrated that the source-count distribution as a function of flux is compatible with a broken power law up to energies of ~50 GeV. Furthermore, the index below the break is between 1.95 and 2.0. For higher energies, a simple power law fits the data, with an index of 2.2 (+0.7/-0.3) in the energy band between 50 and 171 GeV. Upper limits on further possible breaks as well as the angular power of unresolved sources are derived. We find that point-source populations probed by this method can explain 83 (+7/-13)% (81 (+52/-19)%) of the extragalactic gamma-ray background between 1.04 and 1.99 GeV (50 and 171 GeV). Our method has excellent capabilities for constraining the gamma-ray luminosity function and the spectra of unresolved blazars.

  6. Disentangling the major source areas for an intense aerosol advection in the Central Mediterranean on the basis of Potential Source Contribution Function modeling of chemical and size distribution measurements

    NASA Astrophysics Data System (ADS)

    Petroselli, Chiara; Crocchianti, Stefano; Moroni, Beatrice; Castellini, Silvia; Selvaggi, Roberta; Nava, Silvia; Calzolai, Giulia; Lucarelli, Franco; Cappelletti, David

    2018-05-01

    In this paper, we combined a Potential Source Contribution Function (PSCF) analysis of daily chemical aerosol composition data with hourly aerosol size distributions to disentangle the major source areas during a complex, rapidly modulating advection event that impacted Central Italy in 2013. The chemical data include an ample set of metals obtained by Proton Induced X-ray Emission (PIXE), the main soluble ions from ion chromatography, and elemental and organic carbon (EC, OC) obtained by thermo-optical measurements. Size distributions were recorded with an optical particle counter for eight calibrated size classes in the 0.27-10 μm range. We demonstrated the usefulness of the approach by positively identifying two very different source areas impacting during the transport event. In particular, biomass burning from Eastern Europe and desert dust from Saharan sources were discriminated on the basis of both chemistry and the time evolution of the size distribution. Hourly back-trajectory (BT) calculations provided the best results in comparison with 6 h or 24 h calculations.
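    The PSCF statistic underlying this kind of analysis is conventionally computed per grid cell as m/n: the fraction of back-trajectory endpoints falling in that cell whose arrival-time receptor concentration exceeds a chosen threshold. A minimal sketch of that ratio follows; the cell indexing and threshold are illustrative assumptions, not values from the record.

```python
from collections import defaultdict

def pscf(endpoints, threshold):
    """Potential Source Contribution Function on a grid.

    endpoints: iterable of (cell, concentration) pairs, where `cell` is a
    hashable grid index of one back-trajectory endpoint and `concentration`
    is the receptor measurement associated with that trajectory.
    Returns {cell: m/n}, with n the total endpoints in the cell and m the
    endpoints whose concentration exceeds `threshold`.
    """
    n = defaultdict(int)
    m = defaultdict(int)
    for cell, conc in endpoints:
        n[cell] += 1
        if conc > threshold:
            m[cell] += 1
    return {cell: m[cell] / n[cell] for cell in n}
```

    Cells with PSCF near 1 are candidate source areas; in practice a weighting function is usually applied to down-weight cells with few endpoints.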

  7. Distributed optimization system and method

    DOEpatents

    Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

    2003-06-10

    A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.

  8. Distributed Optimization System

    DOEpatents

    Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

    2004-11-30

    A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.

  9. Elemental composition and size distribution of particulates in Cleveland, Ohio

    NASA Technical Reports Server (NTRS)

    King, R. B.; Fordyce, J. S.; Neustadter, H. E.; Leibecki, H. F.

    1975-01-01

    Measurements were made of the elemental particle size distribution at five contrasting urban environments with different source-type distributions in Cleveland, Ohio. Air quality conditions ranged from normal to air pollution alert levels. A parallel network of high-volume cascade impactors (5-stage) was used for simultaneous sampling on glass fiber surfaces for mass determinations and on Whatman-41 surfaces for elemental analysis by neutron activation for 25 elements. The elemental data are assessed in terms of distribution functions and interrelationships and are compared between locations as a function of resultant wind direction in an attempt to relate the findings to sources.

  10. Elemental composition and size distribution of particulates in Cleveland, Ohio

    NASA Technical Reports Server (NTRS)

    Leibecki, H. F.; King, R. B.; Fordyce, J. S.; Neustadter, H. E.

    1975-01-01

    Measurements have been made of the elemental particle size distribution at five contrasting urban environments with different source-type distributions in Cleveland, Ohio. Air quality conditions ranged from normal to air pollution alert levels. A parallel network of high-volume cascade impactors (5-stage) was used for simultaneous sampling on glass fiber surfaces for mass determinations and on Whatman-41 surfaces for elemental analysis by neutron activation for 25 elements. The elemental data are assessed in terms of distribution functions and interrelationships and are compared between locations as a function of resultant wind direction in an attempt to relate the findings to sources.

  11. SU-F-19A-05: Experimental and Monte Carlo Characterization of the 1 Cm CivaString 103Pd Brachytherapy Source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reed, J; Micka, J; Culberson, W

    Purpose: To determine the in-air azimuthal anisotropy and in-water dose distribution for the 1 cm length of the CivaString 103Pd brachytherapy source through measurements and Monte Carlo (MC) simulations. American Association of Physicists in Medicine Task Group No. 43 (TG-43) dosimetry parameters were also determined for this source. Methods: The in-air azimuthal anisotropy of the source was measured with a NaI scintillation detector and simulated with the MCNP5 radiation transport code. Measured and simulated results were normalized to their respective mean values and compared. The TG-43 dose-rate constant, line-source radial dose function, and 2D anisotropy function for this source were determined from LiF:Mg,Ti thermoluminescent dosimeter (TLD) measurements and MC simulations. The impact of 103Pd well-loading variability on the in-water dose distribution was investigated using MC simulations by comparing the dose distribution for a source model with four wells of equal strength to that for a source model with strengths increased by 1% for two of the four wells. Results: NaI scintillation detector measurements and MC simulations of the in-air azimuthal anisotropy showed that ≥95% of the normalized data were within 1.2% of the mean value. TLD measurements and MC simulations of the TG-43 dose-rate constant, line-source radial dose function, and 2D anisotropy function agreed to within the experimental TLD uncertainties (k=2). MC simulations showed that a 1% variability in 103Pd well-loading resulted in changes of <0.1%, <0.1%, and <0.3% in the TG-43 dose-rate constant, radial dose distribution, and polar dose distribution, respectively. Conclusion: The CivaString source has a high degree of azimuthal symmetry, as indicated by the NaI scintillation detector measurements and MC simulations of the in-air azimuthal anisotropy. TG-43 dosimetry parameters for this source were determined from TLD measurements and MC simulations. 103Pd well-loading variability results in minimal variations in the in-water dose distribution according to MC simulations. This work was partially supported by CivaTech Oncology, Inc. through an educational grant for Joshua Reed, John Micka, Wesley Culberson, and Larry DeWerd and through research support for Mark Rivard.
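    The TG-43 quantities named in this record combine multiplicatively into a dose rate. A minimal sketch in the point-source approximation follows; the record uses the line-source geometry function, so the point-source form G_P(r) = 1/r² used here is a simplifying assumption, and the argument values in the usage example are illustrative.

```python
def dose_rate(r, theta, sk, dose_rate_const, g, F, r0=1.0):
    """TG-43 dose rate (point-source approximation), cGy/h.

    r:     radial distance from the source (cm)
    theta: polar angle (degrees), passed through to F
    sk:    air-kerma strength (U)
    dose_rate_const: dose-rate constant Lambda (cGy/h/U)
    g:     radial dose function, g(r), normalized so g(r0) = 1
    F:     2D anisotropy function, F(r, theta)
    Point-source geometry function G_P(r) = 1/r**2, normalized at r0 = 1 cm.
    """
    geometry_ratio = (r0 / r) ** 2
    return sk * dose_rate_const * geometry_ratio * g(r) * F(r, theta)
```

    With unit radial dose and anisotropy functions the formula reduces to inverse-square falloff scaled by sk and Lambda, which is a quick sanity check on any implementation.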

  12. Theory and Performance of AIMS for Active Interrogation

    NASA Astrophysics Data System (ADS)

    Walters, William J.; Royston, Katherine E. K.; Haghighat, Alireza

    2014-06-01

    A hybrid Monte Carlo and deterministic methodology has been developed for application to active interrogation systems. The methodology consists of four steps: i) determination of the neutron flux distribution due to neutron source transport and subcritical multiplication; ii) generation of the gamma source distribution from (n, γ) interactions; iii) determination of the gamma current at a detector window; iv) detection of gammas by the detector. This paper discusses the theory and results of the first three steps for the case of a cargo container with a sphere of HEU in third-density water. In the first step, a response-function formulation has been developed to calculate the subcritical multiplication and neutron flux distribution. Response coefficients are pre-calculated using the MCNP5 Monte Carlo code. The second step uses the calculated neutron flux distribution and Bugle-96 (n, γ) cross sections to find the resulting gamma source distribution. Finally, in the third step the gamma source distribution is coupled with a pre-calculated adjoint function to determine the gamma flux at a detector window. A code, AIMS (Active Interrogation for Monitoring Special-Nuclear-Materials), has been written to output the gamma current for a source-detector assembly scanning across the cargo using these pre-calculated values; it takes significantly less time than a reference MCNP5 calculation.
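    Step iii above reduces, in discretized form, to an inner product of the pre-calculated adjoint (importance) function with the gamma source distribution. A sketch under that assumption; the cell indexing and values are hypothetical, not taken from the paper.

```python
def detector_response(adjoint, gamma_source):
    """Couple a pre-computed adjoint function with a gamma source
    distribution to estimate the detector-window response (step iii).

    adjoint[c]:      importance of a gamma born in spatial cell c
                     (contribution per source particle to the detector)
    gamma_source[c]: (n, gamma) source strength in cell c
    Both are dicts keyed by cell index; the response is their inner product.
    """
    return sum(adjoint[c] * gamma_source[c] for c in gamma_source)
```

    Because the adjoint is pre-computed once, re-evaluating this sum as the source-detector assembly scans across the cargo is cheap, which is the speed advantage the abstract describes.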

  13. ALMA observations of lensed Herschel sources: testing the dark matter halo paradigm

    NASA Astrophysics Data System (ADS)

    Amvrosiadis, A.; Eales, S. A.; Negrello, M.; Marchetti, L.; Smith, M. W. L.; Bourne, N.; Clements, D. L.; De Zotti, G.; Dunne, L.; Dye, S.; Furlanetto, C.; Ivison, R. J.; Maddox, S. J.; Valiante, E.; Baes, M.; Baker, A. J.; Cooray, A.; Crawford, S. M.; Frayer, D.; Harris, A.; Michałowski, M. J.; Nayyeri, H.; Oliver, S.; Riechers, D. A.; Serjeant, S.; Vaccari, M.

    2018-04-01

    With the advent of wide-area submillimetre surveys, a large number of high-redshift gravitationally lensed dusty star-forming galaxies have been revealed. Because of the simplicity of the selection criteria for candidate lensed sources in such surveys, identified as those with S500 μm > 100 mJy, uncertainties associated with the modelling of the selection function are expunged. The combination of these attributes makes submillimetre surveys ideal for the study of strong lens statistics. We carried out a pilot study of the lensing statistics of submillimetre-selected sources by making observations with the Atacama Large Millimeter Array (ALMA) of a sample of strongly lensed sources selected from surveys carried out with the Herschel Space Observatory. We attempted to reproduce the distribution of image separations for the lensed sources using a halo mass function taken from a numerical simulation that contains both dark matter and baryons. We used three different density distributions, one based on analytical fits to the haloes formed in the EAGLE simulation and two density distributions [Singular Isothermal Sphere (SIS) and SISSA] that have been used before in lensing studies. We found that we could reproduce the observed distribution with all three density distributions, as long as we imposed an upper mass transition of ˜1013 M⊙ for the SIS and SISSA models, above which we assumed that the density distribution could be represented by a Navarro-Frenk-White profile. We show that we would need a sample of ˜500 lensed sources to distinguish between the density distributions, which is practical given the predicted number of lensed sources in the Herschel surveys.
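    For the SIS model mentioned above, the image separation has a standard closed form: twice the Einstein radius, with θ_E = 4π(σ/c)²·D_ls/D_s. The sketch below is that textbook relation, not code from the paper; the example velocity dispersion and distance ratio are illustrative.

```python
import math

C_KM_S = 299792.458  # speed of light, km/s

def sis_image_separation(sigma_v, d_ls_over_d_s):
    """Image separation (radians) for a Singular Isothermal Sphere lens.

    sigma_v:        lens velocity dispersion (km/s)
    d_ls_over_d_s:  ratio of lens-source to observer-source angular
                    diameter distances, D_ls / D_s
    Separation = 2 * theta_E, theta_E = 4*pi*(sigma_v/c)**2 * D_ls/D_s.
    """
    theta_e = 4.0 * math.pi * (sigma_v / C_KM_S) ** 2 * d_ls_over_d_s
    return 2.0 * theta_e
```

    For a σ = 200 km/s lens with D_ls/D_s = 0.5 this gives roughly 5.6e-6 rad (about 1.2 arcsec), the arcsecond scale typical of galaxy-galaxy lensing.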

  14. Temperature field determination in slabs, circular plates and spheres with saw tooth heat generating sources

    NASA Astrophysics Data System (ADS)

    Diestra Cruz, Heberth Alexander

    The Green's function integral technique is used to determine the conduction heat transfer temperature field in flat plates, circular plates, and solid spheres with saw tooth heat generating sources. In all cases the boundary temperature is specified (Dirichlet condition) and the thermal conductivity is constant. The method of images is used to find the Green's function in infinite solids, semi-infinite solids, infinite quadrants, circular plates, and solid spheres. The saw tooth heat generation source has been modeled using the Dirac delta function and the Heaviside step function. The use of Green's functions allows the temperature distribution to be obtained in the form of an integral, which avoids the convergence problems of infinite series. For the infinite solid and the sphere, the temperature distribution is three-dimensional, and in the cases of the semi-infinite solid, infinite quadrant, and circular plate the distribution is two-dimensional. The method used in this work is superior to other methods because it obtains elegant analytical or quasi-analytical solutions to complex heat conduction problems with less computational effort and more accuracy than fully numerical methods.
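    A single saw tooth source of the kind described can be written with Heaviside step functions: a linear ramp windowed between two steps, whose cutoff at the trailing edge is where the Dirac delta term arises on differentiation. A minimal sketch, with hypothetical interval and amplitude parameters:

```python
def heaviside(x):
    """Heaviside step function (H(0) taken as 1)."""
    return 1.0 if x >= 0 else 0.0

def saw_tooth(x, a, b, q_max):
    """One saw tooth heat-generation profile on [a, b].

    Rises linearly from 0 at x = a toward q_max at x = b, then drops to
    zero; the window H(x - a) - H(x - b) provides the sharp trailing
    edge whose derivative is the Dirac delta contribution.
    """
    ramp = q_max * (x - a) / (b - a)
    return ramp * (heaviside(x - a) - heaviside(x - b))
```

    A periodic saw tooth is then just a sum of shifted copies, which is convenient when the source enters the Green's function integral term by term.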

  15. The spatial coherence function in scanning transmission electron microscopy and spectroscopy.

    PubMed

    Nguyen, D T; Findlay, S D; Etheridge, J

    2014-11-01

    We investigate the implications of the form of the spatial coherence function, also referred to as the effective source distribution, for quantitative analysis in scanning transmission electron microscopy, and in particular for interpreting the spatial origin of imaging and spectroscopy signals. These questions are explored using three different source distribution models applied to a GaAs crystal case study. The shape of the effective source distribution was found to have a strong influence not only on the scanning transmission electron microscopy (STEM) image contrast, but also on the distribution of the scattered electron wavefield and hence on the spatial origin of the detected electron intensities. The implications this has for measuring structure, composition and bonding at atomic resolution via annular dark field, X-ray and electron energy loss STEM imaging are discussed. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. Method and system using power modulation and velocity modulation producing sputtered thin films with sub-angstrom thickness uniformity or custom thickness gradients

    DOEpatents

    Montcalm, Claude [Livermore, CA; Folta, James Allen [Livermore, CA; Walton, Christopher Charles [Berkeley, CA

    2003-12-23

    A method and system for determining a source flux modulation recipe for achieving a selected thickness profile of a film to be deposited (e.g., with highly uniform or highly accurate custom graded thickness) over a flat or curved substrate (such as concave or convex optics) by exposing the substrate to a vapor deposition source operated with time-varying flux distribution as a function of time. Preferably, the source is operated with time-varying power applied thereto during each sweep of the substrate to achieve the time-varying flux distribution as a function of time. Preferably, the method includes the steps of measuring the source flux distribution (using a test piece held stationary while exposed to the source with the source operated at each of a number of different applied power levels), calculating a set of predicted film thickness profiles, each film thickness profile assuming the measured flux distribution and a different one of a set of source flux modulation recipes, and determining from the predicted film thickness profiles a source flux modulation recipe which is adequate to achieve a predetermined thickness profile. Aspects of the invention include a computer-implemented method employing a graphical user interface to facilitate convenient selection of an optimal or nearly optimal source flux modulation recipe to achieve a desired thickness profile on a substrate. The method enables precise modulation of the deposition flux to which a substrate is exposed to provide a desired coating thickness distribution.
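    The prediction step described in this patent abstract is, in essence, a linear superposition: a candidate recipe weights the measured per-power flux profiles by the time spent at each power level during a sweep, and the best recipe is the one whose predicted profile is closest to the target. The sketch below is an illustrative simplification; the data layout and the least-squares selection criterion are assumptions, not details from the patent.

```python
def predicted_thickness(flux_profiles, dwell_times):
    """Predicted film thickness at each substrate position.

    flux_profiles: {power_level: [flux at each position]} (measured)
    dwell_times:   {power_level: time spent at that power} (the recipe)
    Thickness is the dwell-time-weighted sum of the flux profiles.
    """
    n = len(next(iter(flux_profiles.values())))
    thickness = [0.0] * n
    for power, t in dwell_times.items():
        profile = flux_profiles[power]
        for i in range(n):
            thickness[i] += t * profile[i]
    return thickness

def best_recipe(flux_profiles, recipes, target):
    """Pick the candidate recipe whose predicted thickness profile is
    closest (least squares) to the target profile."""
    def err(recipe):
        pred = predicted_thickness(flux_profiles, recipe)
        return sum((p - t) ** 2 for p, t in zip(pred, target))
    return min(recipes, key=err)
```

    This mirrors the patent's workflow of measuring flux per power level, predicting a profile per candidate recipe, and selecting the recipe that achieves the desired thickness distribution.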

  17. Point spread functions for earthquake source imaging: An interpretation based on seismic interferometry

    USGS Publications Warehouse

    Nakahara, Hisashi; Haney, Matt

    2015-01-01

    Recently, various methods have been proposed and applied for earthquake source imaging, and theoretical relationships among the methods have been studied. In this study, we make a follow-up theoretical study to better understand the meanings of earthquake source imaging. For imaging problems, the point spread function (PSF) is used to describe the degree of blurring and degradation in an obtained image of a target object as a response of an imaging system. In this study, we formulate PSFs for earthquake source imaging. By calculating the PSFs, we find that waveform source inversion methods remove the effect of the PSF and are free from artifacts. However, the other source imaging methods are affected by the PSF and suffer from the effect of blurring and degradation due to the restricted distribution of receivers. Consequently, careful treatment of the effect is necessary when using the source imaging methods other than waveform inversions. Moreover, the PSF for source imaging is found to have a link with seismic interferometry with the help of the source-receiver reciprocity of Green’s functions. In particular, the PSF can be related to Green’s function for cases in which receivers are distributed so as to completely surround the sources. Furthermore, the PSF acts as a low-pass filter. Given these considerations, the PSF is quite useful for understanding the physical meaning of earthquake source imaging.

  18. 2dFLenS and KiDS: determining source redshift distributions with cross-correlations

    NASA Astrophysics Data System (ADS)

    Johnson, Andrew; Blake, Chris; Amon, Alexandra; Erben, Thomas; Glazebrook, Karl; Harnois-Deraps, Joachim; Heymans, Catherine; Hildebrandt, Hendrik; Joudaki, Shahab; Klaes, Dominik; Kuijken, Konrad; Lidman, Chris; Marin, Felipe A.; McFarland, John; Morrison, Christopher B.; Parkinson, David; Poole, Gregory B.; Radovich, Mario; Wolf, Christian

    2017-03-01

    We develop a statistical estimator to infer the redshift probability distribution of a photometric sample of galaxies from its angular cross-correlation in redshift bins with an overlapping spectroscopic sample. This estimator is a minimum-variance weighted quadratic function of the data: a quadratic estimator. This extends and modifies the methodology presented by McQuinn & White. The derived source redshift distribution is degenerate with the source galaxy bias, which must be constrained via additional assumptions. We apply this estimator to constrain source galaxy redshift distributions in the Kilo-Degree imaging survey through cross-correlation with the spectroscopic 2-degree Field Lensing Survey, presenting results first as a binned step-wise distribution in the range z < 0.8, and then building a continuous distribution using a Gaussian process model. We demonstrate the robustness of our methodology using mock catalogues constructed from N-body simulations, and comparisons with other techniques for inferring the redshift distribution.
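    At its crudest, the idea above is that the photometric sample's redshift distribution tracks the amplitude of its angular cross-correlation with successive spectroscopic redshift bins. The toy sketch below ignores the quadratic-estimator machinery and the galaxy-bias degeneracy discussed in the record, and simply clips negative amplitudes and normalizes; it illustrates the concept only.

```python
def redshift_distribution(w_sp):
    """Toy cross-correlation redshift estimate.

    w_sp: list of measured angular cross-correlation amplitudes between
    the photometric sample and each spectroscopic redshift bin.
    Returns a normalized, non-negative estimate of dN/dz per bin,
    assuming (unrealistically) constant bias across bins.
    """
    clipped = [max(w, 0.0) for w in w_sp]
    total = sum(clipped)
    return [w / total for w in clipped]
```

    The real estimator in the paper is a minimum-variance weighted quadratic function of the data, and the bias degeneracy means the normalization per bin cannot be read off this simply.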

  19. Development and application of a hybrid transport methodology for active interrogation systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Royston, K.; Walters, W.; Haghighat, A.

    A hybrid Monte Carlo and deterministic methodology has been developed for application to active interrogation systems. The methodology consists of four steps: i) neutron flux distribution due to neutron source transport and subcritical multiplication; ii) generation of gamma source distribution from (n, 7) interactions; iii) determination of gamma current at a detector window; iv) detection of gammas by the detector. This paper discusses the theory and results of the first three steps for the case of a cargo container with a sphere of HEU in third-density water cargo. To complete the first step, a response-function formulation has been developed tomore » calculate the subcritical multiplication and neutron flux distribution. Response coefficients are pre-calculated using the MCNP5 Monte Carlo code. The second step uses the calculated neutron flux distribution and Bugle-96 (n, 7) cross sections to find the resulting gamma source distribution. In the third step the gamma source distribution is coupled with a pre-calculated adjoint function to determine the gamma current at a detector window. The AIMS (Active Interrogation for Monitoring Special-Nuclear-Materials) software has been written to output the gamma current for a source-detector assembly scanning across a cargo container using the pre-calculated values and taking significantly less time than a reference MCNP5 calculation. (authors)« less

  20. Aeroacoustic catastrophes: upstream cusp beaming in Lilley's equation.

    PubMed

    Stone, J T; Self, R H; Howls, C J

    2017-05-01

    The downstream propagation of high-frequency acoustic waves from a point source in a subsonic jet obeying Lilley's equation is well known to be organized around the so-called 'cone of silence', a fold catastrophe across which the amplitude may be modelled uniformly using Airy functions. Here we show that acoustic waves not only unexpectedly propagate upstream, but also are organized at constant distance from the point source around a cusp catastrophe with amplitude modelled locally by the Pearcey function. Furthermore, the cone of silence is revealed to be a cross-section of a swallowtail catastrophe. One consequence of these discoveries is that the peak acoustic field upstream is not only structurally stable but also at a similar level to the known downstream field. The fine structure of the upstream cusp is blurred out by distributions of symmetric acoustic sources, but peak upstream acoustic beaming persists when asymmetries are introduced, from either arrays of discrete point sources or perturbed continuum ring source distributions. These results may pose interesting questions for future novel jet-aircraft engine designs where asymmetric source distributions arise.

  1. Analyzing spatial coherence using a single mobile field sensor.

    PubMed

    Fridman, Peter

    2007-04-01

    According to the Van Cittert-Zernike theorem, the intensity distribution of a spatially incoherent source and the mutual coherence function of the light impinging on two wave sensors are related. In this paper, the analogous relationship is derived for a single mobile sensor moving at a given velocity relative to the source. The auto-correlation function of the electric field at the sensor contains information about the intensity distribution. This expression could be employed in aperture synthesis.

  2. Evaluation of an unsteady flamelet progress variable model for autoignition and flame development in compositionally stratified mixtures

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Saumyadip; Abraham, John

    2012-07-01

    The unsteady flamelet progress variable (UFPV) model has been proposed by Pitsch and Ihme ["An unsteady/flamelet progress variable method for LES of nonpremixed turbulent combustion," AIAA Paper No. 2005-557, 2005] for modeling the averaged/filtered chemistry source terms in Reynolds averaged simulations and large eddy simulations of reacting non-premixed combustion. In the UFPV model, a look-up table of source terms is generated as a function of mixture fraction Z, scalar dissipation rate χ, and progress variable C by solving the unsteady flamelet equations. The assumption is that the unsteady flamelet represents the evolution of the reacting mixing layer in the non-premixed flame. We assess the accuracy of the model in predicting autoignition and flame development in compositionally stratified n-heptane/air mixtures using direct numerical simulations (DNS). The focus in this work is primarily on assessing the accuracy of the probability density functions (PDFs) employed for obtaining averaged source terms. The performance of commonly employed presumed functions, such as the Dirac delta distribution function, the β distribution function, and the statistically most likely distribution (SMLD) approach, in approximating the shapes of the PDFs of the reactive and conserved scalars is evaluated. For unimodal distributions, it is observed that functions that use two-moment information, e.g., the β distribution function and the SMLD approach with two-moment closure, are able to reasonably approximate the actual PDF. As the distribution becomes multimodal, higher-moment information is required. Differences are observed between the ignition trends obtained from DNS and those predicted by the look-up table, especially for smaller gradients where the flamelet assumption becomes less applicable. The formulation assumes that the shape of the χ(Z) profile can be modeled by an error function which remains unchanged in the presence of heat release. We show that this assumption is not accurate.
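    The presumed β-PDF closure evaluated in this record can be sketched directly: match a beta distribution to the first two moments of the mixture fraction, then average the source term against it. A self-contained sketch using trapezoidal quadrature follows; the moment values and quadrature resolution are illustrative, and the chosen moments must give shape parameters > 1 so the PDF stays finite at the endpoints.

```python
import math

def beta_params(mean, var):
    """Shape parameters (a, b) of a beta PDF matched to a mean and variance."""
    k = mean * (1.0 - mean) / var - 1.0
    return mean * k, (1.0 - mean) * k

def beta_pdf(z, a, b):
    """Beta probability density on [0, 1]."""
    coef = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return coef * z ** (a - 1) * (1.0 - z) ** (b - 1)

def averaged_source(source, mean, var, n=2000):
    """Presumed-PDF closure: average a source term S(Z) over a beta PDF
    in mixture fraction Z, using trapezoidal quadrature on [0, 1]."""
    a, b = beta_params(mean, var)
    h = 1.0 / n
    total = 0.0
    for i in range(n + 1):
        z = i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * source(z) * beta_pdf(z, a, b)
    return total * h
```

    Averaging S(Z) = 1 recovers 1 (the PDF normalizes) and S(Z) = Z recovers the mean, which are the two sanity checks any presumed-PDF table generator should pass.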

  3. Maximum entropy approach to statistical inference for an ocean acoustic waveguide.

    PubMed

    Knobles, D P; Sagers, J D; Koch, R A

    2012-02-01

    A conditional probability distribution suitable for estimating the statistical properties of ocean seabed parameter values inferred from acoustic measurements is derived from a maximum entropy principle. The specification of the expectation value for an error function constrains the maximization of an entropy functional. This constraint determines the sensitivity factor (β) to the error function of the resulting probability distribution, which is a canonical form that provides a conservative estimate of the uncertainty of the parameter values. From the conditional distribution, marginal distributions for individual parameters can be determined from integration over the other parameters. The approach is an alternative to obtaining the posterior probability distribution without an intermediary determination of the likelihood function followed by an application of Bayes' rule. In this paper the expectation value that specifies the constraint is determined from the values of the error function for the model solutions obtained from a sparse number of data samples. The method is applied to ocean acoustic measurements taken on the New Jersey continental shelf. The marginal probability distribution for the values of the sound speed ratio at the surface of the seabed and the source levels of a towed source are examined for different geoacoustic model representations. © 2012 Acoustical Society of America
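    The canonical form referred to above assigns each candidate model a probability proportional to exp(−βE), with E its error-function value and β the sensitivity factor. A minimal sketch of those weights follows; β is treated as given here, rather than fixed by the expectation-value constraint as in the paper, and the error values are illustrative.

```python
import math

def canonical_weights(errors, beta):
    """Maximum-entropy (canonical) probabilities over candidate models.

    errors: error-function value E_i for each candidate model
    beta:   sensitivity factor of the distribution
    Returns p_i proportional to exp(-beta * E_i), normalized to sum to 1.
    """
    w = [math.exp(-beta * e) for e in errors]
    z = sum(w)
    return [x / z for x in w]
```

    Marginal distributions for a single parameter then follow by summing these weights over all models sharing that parameter value, which is the integration over the other parameters described in the abstract.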

  4. The source of electrostatic fluctuations in the solar-wind

    NASA Technical Reports Server (NTRS)

    Lemons, D. S.; Asbridge, J. R.; Bame, S. J.; Feldman, W. C.; Gary, S. P.; Gosling, J. T.

    1979-01-01

    Solar wind electron and ion distribution functions measured simultaneously with or close to times of intense electrostatic fluctuations are subjected to a linear Vlasov stability analysis. Although all distributions tested were found to be stable, the analysis suggests that the ion beam instability is the most likely source of the fluctuations.

  5. Improved mapping of radio sources from VLBI data by least-square fit

    NASA Technical Reports Server (NTRS)

    Rodemich, E. R.

    1985-01-01

    A method is described for producing improved maps of radio sources from Very Long Baseline Interferometry (VLBI) data. The method is more direct than existing Fourier methods, is often more accurate, and runs at least as fast. The visibility data are modeled here, as in existing methods, as a function of the unknown brightness distribution and the unknown antenna gains and phases. These unknowns are chosen so that the resulting function values are as near as possible to the observed values. If the deviation from the observed values is used to measure the closeness of this fit, one is led to the problem of minimizing a certain function of all the unknown parameters. This minimization problem cannot be solved directly, but it can be attacked by iterative methods which we show converge automatically to the minimum with no user intervention. The resulting brightness distribution will furnish the best fit to the data among all brightness distributions of given resolution.

  6. The Relation between School Leadership from a Distributed Perspective and Teachers' Organizational Commitment: Examining the Source of the Leadership Function

    ERIC Educational Resources Information Center

    Hulpia, Hester; Devos, Geert; Van Keer, Hilde

    2011-01-01

    Purpose: In this study the relationship between school leadership and teachers' organizational commitment is examined by taking into account a distributed leadership perspective. The relation between teachers' organizational commitment and contextual variables of teachers' perceptions of the quality and the source of the supportive and supervisory…

  7. The Bivariate Luminosity--HI Mass Distribution Function of Galaxies based on the NIBLES Survey

    NASA Astrophysics Data System (ADS)

    Butcher, Zhon; Schneider, Stephen E.; van Driel, Wim; Lehnert, Matt

    2016-01-01

    We use 21cm HI line observations of 2610 galaxies from the Nançay Interstellar Baryons Legacy Extragalactic Survey (NIBLES) to derive a bivariate luminosity--HI mass distribution function. Our HI survey was selected to randomly probe the local (900 < cz < 12,000 km/s) galaxy population in each 0.5 mag wide bin over the absolute z-band magnitude range -13.5 < Mz < -24, without regard to morphology or color. This targeted survey allowed more on-source integration time for weak and non-detected sources, enabling us to probe lower HI mass fractions and apply lower upper limits for non-detections than would be possible with the larger blind HI surveys. Additionally, we obtained follow-up observations at Arecibo, with a factor of four higher sensitivity, of 90 galaxies from our non-detected and marginally detected categories to quantify the underlying HI distribution of sources not detected at Nançay. Using the optical luminosity function and our higher-sensitivity follow-up observations as priors, we use a 2D stepwise maximum likelihood technique to derive the two-dimensional volume density distribution of luminosity and HI mass in each SDSS band.

  8. Towards a Full Waveform Ambient Noise Inversion

    NASA Astrophysics Data System (ADS)

    Sager, K.; Ermert, L. A.; Boehm, C.; Fichtner, A.

    2015-12-01

    Noise tomography usually works under the assumption that the inter-station ambient noise correlation is equal to a scaled version of the Green's function between the two receivers. This assumption, however, is only met under specific conditions, for instance, wavefield diffusivity and equipartitioning, zero attenuation, etc., that are typically not satisfied in the Earth. This inconsistency inhibits the exploitation of the full waveform information contained in noise correlations regarding Earth structure and noise generation. To overcome this limitation we attempt to develop a method that consistently accounts for noise distribution, 3D heterogeneous Earth structure and the full seismic wave propagation physics in order to improve the current resolution of tomographic images of the Earth. As an initial step towards a full waveform ambient noise inversion we develop a preliminary inversion scheme based on a 2D finite-difference code simulating correlation functions and on adjoint techniques. With respect to our final goal, a simultaneous inversion for noise distribution and Earth structure, we address the following two aspects: (1) the capabilities of different misfit functionals to image wave speed anomalies and source distribution and (2) possible source-structure trade-offs, especially to what extent unresolvable structure could be mapped into the inverted noise source distribution and vice versa.

  9. Distribution functions of air-scattered gamma rays above isotropic plane sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michael, J A; Lamonds, H A

    1967-06-01

    Using the moments method of Spencer and Fano and a reconstruction technique suggested by Berger, the authors have calculated energy and angular distribution functions for air-scattered gamma rays emitted from infinite-plane isotropic monoenergetic sources as functions of source energy, radiation incidence angle at the detector, and detector altitude. Incremental and total buildup factors have been calculated for both number and exposure. The results are presented in tabular form for a detector located at altitudes of 3, 50, 100, 200, 300, 400, 500, and 1000 feet above source planes of 15 discrete energies spanning the range of 0.1 to 3.0 MeV. Calculational techniques, including results of sensitivity studies, are discussed and plots of typical results are presented.
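    A buildup factor of the kind tabulated in this record multiplies the uncollided response to give the total (collided plus uncollided) response. The sketch below uses the simplest case, a point isotropic source, as an illustrative simplification of the plane-source geometry actually treated; the argument values are hypothetical.

```python
import math

def uncollided_flux(s, mu, r):
    """Uncollided flux from a point isotropic source.

    s:  source strength (photons/s)
    mu: linear attenuation coefficient of the medium (1/length)
    r:  distance from the source
    phi_unc = s * exp(-mu * r) / (4 * pi * r**2)
    """
    return s * math.exp(-mu * r) / (4.0 * math.pi * r * r)

def total_flux(s, mu, r, buildup):
    """Total flux: the uncollided flux scaled by a buildup factor B >= 1,
    which accounts for photons scattered back toward the detector."""
    return buildup * uncollided_flux(s, mu, r)
```

    With mu = 0 (vacuum) the uncollided flux reduces to pure inverse-square spreading, and B = 1 means no scattered contribution; tabulated buildup factors like those in this record supply B as a function of source energy and geometry.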

  10. Integral-moment analysis of the BATSE gamma-ray burst intensity distribution

    NASA Technical Reports Server (NTRS)

    Horack, John M.; Emslie, A. Gordon

    1994-01-01

    We have applied the technique of integral-moment analysis to the intensity distribution of the first 260 gamma-ray bursts observed by the Burst and Transient Source Experiment (BATSE) on the Compton Gamma Ray Observatory. This technique provides direct measurement of properties such as the mean, variance, and skewness of the convolved luminosity-number density distribution, as well as the associated uncertainties. Using this method, one obtains insight into the nature of the source distributions unavailable through computation of traditional single parameters such as V/V(sub max). If the luminosity function of the gamma-ray bursts is strongly peaked, giving bursts only a narrow range of luminosities, these results are direct probes of the radial distribution of sources, regardless of whether the bursts are a local phenomenon, are distributed in a galactic halo, or are at cosmological distances. Accordingly, an integral-moment analysis of the intensity distribution of the gamma-ray bursts provides the most complete analytic description of the source distribution available from the data, and offers the most comprehensive test of the compatibility of a given hypothesized distribution with observation.
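
    The first three moments named above can be sketched directly; the peak-flux values below are hypothetical, and the simple estimator ignores the detection-threshold corrections an actual integral-moment analysis must handle.

```python
def intensity_moments(samples):
    """Mean, variance and skewness of a burst-intensity sample.

    A minimal sketch of the quantities discussed in the abstract,
    computed as plain sample moments of a list of peak fluxes.
    """
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    skew = (sum((x - mean) ** 3 for x in samples) / n) / var ** 1.5
    return mean, var, skew

# Hypothetical peak fluxes (arbitrary units), not BATSE data
fluxes = [1.2, 0.8, 2.5, 0.6, 1.1, 3.0, 0.9, 1.4]
m, v, s = intensity_moments(fluxes)
```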

  11. Incorporation of a spatial source distribution and a spatial sensor sensitivity in a laser ultrasound propagation model using a streamlined Huygens' principle.

    PubMed

    Laloš, Jernej; Babnik, Aleš; Možina, Janez; Požar, Tomaž

    2016-03-01

    The near-field, surface-displacement waveforms in plates are modeled using interwoven concepts of Green's function formalism and streamlined Huygens' principle. Green's functions resemble the building blocks of the sought displacement waveform, superimposed and weighted according to the simplified distribution. The approach incorporates an arbitrary circular spatial source distribution and an arbitrary circular spatial sensitivity in the area probed by the sensor. The displacement histories for uniform, Gaussian and annular normal-force source distributions and the uniform spatial sensor sensitivity are calculated, and the corresponding weight distributions are compared. To demonstrate the applicability of the developed scheme, measurements of laser ultrasound induced solely by the radiation pressure are compared with the calculated waveforms. The ultrasound is induced by laser pulse reflection from the mirror-surface of a glass plate. The measurements show excellent agreement not only with respect to various wave-arrivals but also in the shape of each arrival. Their shape depends on the beam profile of the excitation laser pulse and its corresponding spatial normal-force distribution. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. Excitation functions of parameters extracted from three-source (net-)proton rapidity distributions in Au-Au and Pb-Pb collisions over an energy range from AGS to RHIC

    NASA Astrophysics Data System (ADS)

    Gao, Li-Na; Liu, Fu-Hu; Sun, Yan; Sun, Zhu; Lacey, Roy A.

    2017-03-01

    Experimental results on the rapidity spectra of protons and net-protons (protons minus antiprotons) emitted in gold-gold (Au-Au) and lead-lead (Pb-Pb) collisions, measured by several collaborations at the alternating gradient synchrotron (AGS), super proton synchrotron (SPS), and relativistic heavy ion collider (RHIC), are described by a three-source distribution. The values of the distribution width σC and fraction kC of the central rapidity region, and the distribution width σF and rapidity shift Δy of the forward/backward rapidity regions, are then obtained. The excitation function of σC increases generally with increasing center-of-mass energy per nucleon pair √s_NN. The excitation function of σF shows a saturation at √s_NN = 8.8 GeV. The excitation function of kC shows a minimum at √s_NN = 8.8 GeV and a saturation at √s_NN ≈ 17 GeV. The excitation function of Δy increases linearly with ln(√s_NN) in the considered energy range.
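
    A three-source rapidity distribution of the kind fitted above can be sketched as a central Gaussian plus two symmetric forward/backward Gaussians; the Gaussian shapes and the parameter values in the example are illustrative assumptions, not the paper's exact parameterisation.

```python
import math

def three_source_dndy(y, sigma_c, k_c, sigma_f, dy):
    """Sketch of a three-source rapidity distribution: a central source
    of width sigma_c carrying fraction k_c, plus symmetric forward and
    backward sources of width sigma_f shifted by +/- dy.  Normalised so
    the total integrates to 1."""
    def gauss(x, mu, s):
        return math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))
    return (k_c * gauss(y, 0.0, sigma_c)
            + 0.5 * (1 - k_c) * (gauss(y, dy, sigma_f) + gauss(y, -dy, sigma_f)))
```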

  13. Global excitation of wave phenomena in a dissipative multiconstituent medium. I - Transfer function of the earth's thermosphere. II - Impulsive perturbations in the earth's thermosphere

    NASA Technical Reports Server (NTRS)

    Mayr, H. G.; Harris, I.; Herrero, F. A.; Varosi, F.

    1984-01-01

    A transfer function approach is taken in constructing a spectral model of the acoustic-gravity wave response in a multiconstituent thermosphere. The model is then applied to describing the thermospheric response to various sources around the globe. Zonal spherical harmonics serve to model the horizontal variations in propagating waves which, when integrated with respect to height, generate a transfer function for a vertical source distribution in the thermosphere. Four wave components are characterized as resonance phenomena and are associated with magnetic activity and ionospheric disturbances. The waves are either trapped or propagating, the latter becoming significant at frequencies above 3 cycles/day. The energy input is distributed by thermospheric winds. The disturbances decay slowly, mainly due to heat conduction and diffusion. Gravity waves appear abruptly and are connected to a sudden switching on or off of a source. Turning off a source coincides with a reversal of the local atmospheric circulation.

  14. Transient difference solutions of the inhomogeneous wave equation - Simulation of the Green's function

    NASA Technical Reports Server (NTRS)

    Baumeister, K. J.

    1983-01-01

    A time-dependent finite difference formulation of the inhomogeneous wave equation is derived for plane wave propagation with harmonic noise sources. The difference equation and boundary conditions are developed along with the techniques to simulate the Dirac delta function associated with a concentrated noise source. Example calculations are presented for the Green's function and distributed noise sources. For the example considered, the desired Fourier transformed acoustic pressures are determined from the transient pressures by use of a ramping function and an integration technique, both of which eliminate the nonharmonic pressure associated with the initial transient.
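
    A minimal sketch of the idea, assuming a simple second-order scheme and crude absorbing ends (illustrative choices, not the paper's formulation): a ramped harmonic point source drives a 1-D grid, and a discrete Fourier transform of the late-time record extracts the harmonic response.

```python
import cmath
import math

# 1-D transient finite-difference solution of the wave equation with a
# ramped harmonic point source; the late-time record is Fourier
# transformed at the drive frequency to recover the steady response.
nx, c, dx = 201, 1.0, 1.0
dt = 0.5 * dx / c                          # CFL-stable time step
omega = 2 * math.pi / (40 * dt)            # drive frequency, 40 steps/period
src, rec = nx // 2, nx // 2 + 30           # source and receiver indices
u_prev = [0.0] * nx
u = [0.0] * nx
trace = []
for n in range(4000):
    t = n * dt
    ramp = min(t / (200 * dt), 1.0)        # ramp suppresses the start-up transient
    r2 = (c * dt / dx) ** 2
    u_next = [0.0] * nx
    for i in range(1, nx - 1):
        u_next[i] = 2 * u[i] - u_prev[i] + r2 * (u[i + 1] - 2 * u[i] + u[i - 1])
    u_next[src] += dt ** 2 * ramp * math.sin(omega * t)   # discrete delta source
    u_next[0], u_next[-1] = u[1], u[-2]    # crude absorbing ends
    u_prev, u = u, u_next
    trace.append(u[rec])
# DFT of the late, quasi-steady part of the record (an integer number of
# drive periods, so spectral leakage is minimal)
late = trace[2000:]
amp = abs(sum(p * cmath.exp(-1j * omega * k * dt)
              for k, p in enumerate(late))) * 2 / len(late)
```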

  15. Transient difference solutions of the inhomogeneous wave equation: Simulation of the Green's function

    NASA Technical Reports Server (NTRS)

    Baumeister, K. J.

    1983-01-01

    A time-dependent finite difference formulation of the inhomogeneous wave equation is derived for plane wave propagation with harmonic noise sources. The difference equation and boundary conditions are developed along with the techniques to simulate the Dirac delta function associated with a concentrated noise source. Example calculations are presented for the Green's function and distributed noise sources. For the example considered, the desired Fourier transformed acoustic pressures are determined from the transient pressures by use of a ramping function and an integration technique, both of which eliminate the nonharmonic pressure associated with the initial transient.

  16. Electron temperature profiles in axial field 2.45 GHz ECR ion source with a ceramic chamber

    NASA Astrophysics Data System (ADS)

    Abe, K.; Tamura, R.; Kasuya, T.; Wada, M.

    2017-08-01

    An array of electrostatic probes was arranged on the plasma electrode of a 2.45 GHz microwave driven axial magnetic filter field type negative hydrogen (H-) ion source to clarify the spatial plasma distribution near the electrode. The measured spatial distribution of electron temperature indicated a lower temperature near the extraction hole of the plasma electrode, corresponding to the effectiveness of the axial magnetic filter field geometry. When the ratio of the electron saturation current to the ion saturation current was plotted as a function of position, the obtained distribution showed a higher ratio near the hydrogen gas inlet through which ground-state hydrogen molecules are injected into the source. Though the efficiency in producing H- ions is smaller for a 2.45 GHz source than for a source operated at 14 GHz, it provides more volume in which to measure spatial distributions of various plasma parameters and thus to understand fundamental processes that influence H- production in this type of ion source.

  17. Skyshine at neutron energies less than or equal to 400 MeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alsmiller, A.G. Jr.; Barish, J.; Childs, R.L.

    1980-10-01

    The dose equivalent at an air-ground interface as a function of distance from an assumed azimuthally symmetric point source of neutrons can be calculated as a double integral. The integration is over the source strength as a function of energy and polar angle, weighted by an importance function that depends on the source variables and on the distance from the source to the field point. The neutron importance function for a source 15 m above the ground emitting only into the upper hemisphere has been calculated using the two-dimensional discrete ordinates code, DOT, and the first collision source code, GRTUNCL, in the adjoint mode. This importance function is presented for neutron energies less than or equal to 400 MeV, for source cosine intervals of 1 to 0.8, 0.8 to 0.6, 0.6 to 0.4, 0.4 to 0.2, and 0.2 to 0, and for various distances from the source to the field point. As part of the adjoint calculations a photon importance function is also obtained. This importance function for photon energies less than or equal to 14 MeV and for various source cosine intervals and source-to-field-point distances is also presented. These importance functions may be used to obtain skyshine dose equivalent estimates for any known source energy-angle distribution.

  18. Computational methods for analyzing the transmission characteristics of a beta particle magnetic analysis system

    NASA Technical Reports Server (NTRS)

    Singh, J. J.

    1979-01-01

    Computational methods were developed to study the trajectories of beta particles (positrons) through a magnetic analysis system as a function of the spatial distribution of the radionuclides in the beta source, the size and shape of the source collimator, and the strength of the analyzer magnetic field. On the basis of these methods, the particle flux, energy spectrum, and source-to-target transit times have been calculated for Na-22 positrons as a function of the analyzer magnetic field and the size and location of the target. These data are used in studies requiring parallel beams of positrons of uniform energy, such as measurement of the moisture distribution in composite materials. Computer programs for obtaining various trajectories are included.
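
    The basic trajectory calculation can be sketched for the simplest case, a uniform analyzer field; the field strength is an assumption, and the stepping is non-relativistic for brevity (Na-22 positrons, with an endpoint near 0.54 MeV, would need relativistic kinematics).

```python
import math

# A charged particle in a uniform field B (along +z) follows a circle of
# gyroradius r = m*v/(q*B).  Semi-implicit Euler stepping, for
# illustration only.
q, m, B = 1.602e-19, 9.109e-31, 0.01       # C, kg, T (field value assumed)
vx, vy = 1.0e7, 0.0                         # initial velocity, m/s
x = y = 0.0
min_y = 0.0
dt = 1e-12
for _ in range(100000):                     # ~28 gyration periods
    vx += (q / m) * vy * B * dt             # F = q v x B
    vy -= (q / m) * vx * B * dt             # uses the already-updated vx
    x += vx * dt
    y += vy * dt
    min_y = min(min_y, y)                   # orbit bottom, should be -2r
r_gyro = m * 1.0e7 / (q * B)                # analytic gyroradius, ~5.7 mm
```

The semi-implicit update keeps the orbit bounded, so the numerical circle diameter can be checked against the analytic gyroradius; this is the kind of consistency check the trajectory programs referenced above would need.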

  19. Visualization of Green's Function Anomalies for Megathrust Source in Nankai Trough by Reciprocity Method

    NASA Astrophysics Data System (ADS)

    Petukhin, A.; Miyakoshi, K.; Tsurugi, M.; Kawase, H.; Kamae, K.

    2014-12-01

    The effect of various areas (asperities or SMGAs) in the source of a megathrust subduction zone earthquake on simulated long-period ground motions is studied. For this case study we employed a source fault model proposed by HERP (2012) for a future M9-class event in the Nankai trough. The velocity structure is the 3-D JIVSM model developed for long-period ground motion simulations. The target site OSKH02 "Konohana" is located in the center of the Osaka basin. Green's functions for a large number of sub-sources (>1000) were calculated by FDM using the reciprocity approach. Depths, strike and dip angles of the sub-sources are adjusted to the shape of the upper boundary of the Philippine Sea plate. The target period range is 4-20 sec. A strongly nonuniform distribution of peak amplitudes of the Green's functions is observed (see Figure), and two areas have anomalously large amplitudes: (1) a large along-strike elongated area just south of the Kii peninsula and (2) a similar area south of the Kii peninsula but shifted toward the Nankai trough. The elongation of the first anomaly fits well the 10-15 km isolines of the depth distribution of the Philippine Sea plate, while the target site is located in the direction perpendicular to these isolines. For this reason, we preliminarily suppose that the plate shape may have a critical effect on the simulated ground motions, via a cumulative effect of sub-source radiation patterns and specific strike and dip angle distributions. Analysis of the time delay of the peak arrivals at OSKH02 demonstrates that Green's functions from the second anomaly, located in the shallow part of the plate boundary, are mostly composed of surface waves.

  20. Computation of marginal distributions of peak-heights in electropherograms for analysing single source and mixture STR DNA samples.

    PubMed

    Cowell, Robert G

    2018-05-04

    Current models for single-source and mixture samples, and the probabilistic genotyping software based on them for analysing STR electropherogram data, assume simple probability distributions, such as the gamma distribution, to model allelic peak-height variability given the initial amount of DNA prior to PCR amplification. Here we illustrate how amplicon number distributions, for a model of the process of sample DNA collection and PCR amplification, may be efficiently computed by evaluating probability generating functions using discrete Fourier transforms. Copyright © 2018 Elsevier B.V. All rights reserved.
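
    The numerical device described above, recovering a probability mass function from its generating function by evaluating it on the unit circle and inverting with a discrete Fourier transform, can be sketched as follows; the binomial "amplicon count" PGF is a hypothetical stand-in for the paper's amplification model.

```python
import cmath
import math

def pmf_from_pgf(G, nmax):
    """Recover P(X = 0..nmax-1) from a probability generating function G
    by evaluating it at the nmax-th roots of unity and applying an
    inverse DFT (plain loops here; numpy.fft would do the same faster).
    Requires nmax to exceed the largest value X can take."""
    vals = [G(cmath.exp(2j * math.pi * k / nmax)) for k in range(nmax)]
    return [abs(sum(v * cmath.exp(-2j * math.pi * k * j / nmax)
                    for k, v in enumerate(vals))) / nmax
            for j in range(nmax)]

# Hypothetical example: a binomial(8, 0.3) "amplicon count" with PGF
# G(z) = (1 - p + p z)^n.
n, p = 8, 0.3
probs = pmf_from_pgf(lambda z: (1 - p + p * z) ** n, 16)
```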

  1. The Generation, Radiation and Prediction of Supersonic Jet Noise. Volume 1

    DTIC Science & Technology

    1978-10-01

    standard, Gaussian correlation function model can yield a good noise spectrum prediction (at 90°), but the corresponding axial source distributions do not...forms for the turbulence cross-correlation function. Good agreement was obtained between measured and calculated far-field noise spectra. However, the...complementary error function profile (3.63) was found to provide a good fit to the axial velocity distribution for a wide range of Mach numbers in the initial

  2. Calculations of the Electron Energy Distribution Function in a Uranium Plasma by Analytic and Monte Carlo Techniques. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Bathke, C. G.

    1976-01-01

    Electron energy distribution functions were calculated in a U235 plasma at 1 atmosphere for various plasma temperatures and neutron fluxes. The distributions are assumed to be a summation of a high energy tail and a Maxwellian distribution. The sources of energetic electrons considered are the fission-fragment induced ionization of uranium and the electron induced ionization of uranium. The calculation of the high energy tail is reduced to an electron slowing down calculation, from the most energetic source to the energy where the electron is assumed to be incorporated into the Maxwellian distribution. The pertinent collisional processes are electron-electron scattering and electron induced ionization and excitation of uranium. Two distinct methods were employed in the calculation of the distributions. One method is based upon the assumption of continuous slowing and yields a distribution inversely proportional to the stopping power. An iteration scheme is utilized to include the secondary electron avalanche. In the other method, a governing equation is derived without assuming continuous electron slowing. This equation is solved by a Monte Carlo technique.

  3. Non-Gaussian probabilistic MEG source localisation based on kernel density estimation

    PubMed Central

    Mohseni, Hamid R.; Kringelbach, Morten L.; Woolrich, Mark W.; Baker, Adam; Aziz, Tipu Z.; Probert-Smith, Penny

    2014-01-01

    There is strong evidence to suggest that data recorded from magnetoencephalography (MEG) follows a non-Gaussian distribution. However, existing standard methods for source localisation model the data using only second order statistics, and therefore use the inherent assumption of a Gaussian distribution. In this paper, we present a new general method for non-Gaussian source estimation of stationary signals for localising brain activity from MEG data. By providing a Bayesian formulation for MEG source localisation, we show that the source probability density function (pdf), which is not necessarily Gaussian, can be estimated using multivariate kernel density estimators. In the case of Gaussian data, the solution of the method is equivalent to that of widely used linearly constrained minimum variance (LCMV) beamformer. The method is also extended to handle data with highly correlated sources using the marginal distribution of the estimated joint distribution, which, in the case of Gaussian measurements, corresponds to the null-beamformer. The proposed non-Gaussian source localisation approach is shown to give better spatial estimates than the LCMV beamformer, both in simulations incorporating non-Gaussian signals, and in real MEG measurements of auditory and visual evoked responses, where the highly correlated sources are known to be difficult to estimate. PMID:24055702
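
    The basic ingredient the paper generalises, a kernel density estimate of a possibly non-Gaussian pdf, can be sketched in one dimension; the sample values and bandwidth below are arbitrary.

```python
import math

def kde(samples, h):
    """One-dimensional Gaussian kernel density estimator: the estimated
    pdf is an average of Gaussian kernels of bandwidth h centred on the
    samples.  Returns a callable pdf."""
    n = len(samples)
    def pdf(x):
        return sum(math.exp(-0.5 * ((x - s) / h) ** 2)
                   for s in samples) / (n * h * math.sqrt(2 * math.pi))
    return pdf

# Arbitrary sample values standing in for estimated source amplitudes
est = kde([0.1, -0.3, 0.2, 0.05, -0.1], h=0.25)
```

The multivariate version used in the paper replaces the scalar kernel with a multivariate Gaussian; the bandwidth then becomes a matrix-valued smoothing choice.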

  4. Spectral characteristics of light sources for S-cone stimulation.

    PubMed

    Schlegelmilch, F; Nolte, R; Schellhorn, K; Husar, P; Henning, G; Tornow, R P

    2002-11-01

    Electrophysiological investigations of the short-wavelength-sensitive pathway of the human eye require a suitable light source as an S-cone stimulator. Different light sources and their spectral distribution properties were investigated and compared with the ideal S-cone stimulator. First, the theoretical background for calculating relative cone energy absorption from the spectral distribution function of the light source is summarized. From the results of the calculation, the photometric properties of the ideal S-cone stimulator are derived. The calculation procedure was applied to virtual light sources (computer-generated spectral distribution functions with different medium wavelengths and spectrum widths) and to real light sources (blue and green light emitting diodes, the blue phosphor of a CRT monitor, a multimedia projector, an LCD monitor and a notebook display). The calculated relative cone absorbencies are compared to the conditions of an ideal S-cone stimulator. Monochromatic light sources with wavelengths of less than 456 nm are close to the conditions of an ideal S-cone stimulator. Spectrum widths up to 21 nm do not affect S-cone activation significantly (S-cone activation change < 0.2%). Blue light emitting diodes with a peak wavelength at 448 nm and a spectrum bandwidth of 25 nm are very useful for S-cone stimulation (S-cone activation approximately 95%). A suitable display for S-cone stimulation is the Trinitron computer monitor (S-cone activation approximately 87%). The multimedia projector has an S-cone activation up to 91%, but its spectral distribution properties depend on the selected intensity. LCD monitor and notebook displays have a lower S-cone activation (≤ 74%). Carefully selecting the blue light source for S-cone stimulation can reduce the unwanted L- and M-cone activation down to 4% for M-cones and 1.5% for L-cones.
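
    The relative cone-absorption calculation summarized above amounts to weighting the source spectrum by a cone sensitivity curve and integrating over wavelength. The Gaussian "cone fundamentals" and LED spectrum below are hypothetical stand-ins for measured curves, used only to show the shape of the computation.

```python
import math

def band(peak, width, wavelengths):
    """Toy Gaussian spectral curve centred at `peak` nm."""
    return [math.exp(-0.5 * ((w - peak) / width) ** 2) for w in wavelengths]

def relative_absorption(spectrum, sensitivity, wavelengths):
    """Source spectrum weighted by a cone sensitivity curve and
    integrated over wavelength with the trapezoidal rule."""
    total = 0.0
    for i in range(len(wavelengths) - 1):
        f0 = spectrum[i] * sensitivity[i]
        f1 = spectrum[i + 1] * sensitivity[i + 1]
        total += 0.5 * (f0 + f1) * (wavelengths[i + 1] - wavelengths[i])
    return total

wl = list(range(380, 701))        # visible range, nm
led448 = band(448, 12, wl)        # blue LED near the abstract's 448 nm peak
s_cone = band(445, 30, wl)        # hypothetical S-cone sensitivity
m_cone = band(540, 40, wl)        # hypothetical M-cone sensitivity
s_abs = relative_absorption(led448, s_cone, wl)
m_abs = relative_absorption(led448, m_cone, wl)
```

With real cone fundamentals the same ratio s_abs / (s_abs + m_abs + l_abs) gives the relative activations quoted in the abstract.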

  5. Subtle Change in the Charge Distribution of Surface Residues May Affect the Secondary Functions of Cytochrome c*

    PubMed Central

    Paul, Simanta Sarani; Sil, Pallabi; Haldar, Shubhasis; Mitra, Samaresh; Chattopadhyay, Krishnananda

    2015-01-01

    Although the primary function of cytochrome c (cyt c) is electron transfer, the protein carries out an additional secondary function involving its interaction with membrane cardiolipin (CDL), its peroxidase activity, and the initiation of apoptosis. Whereas the primary function of cyt c is essentially conserved, its secondary function varies depending on the source of the protein. We report here a detailed experimental and computational study, which aims to understand, at the molecular level, the difference in the secondary functions of cyt c obtained from horse heart (mammalian) and Saccharomyces cerevisiae (yeast). The conformational landscape of cyt c has been found to be heterogeneous, consisting of an equilibrium between compact and extended conformers as well as oligomeric species. Because the relative populations of these conformers are difficult to obtain by ensemble measurements, we used fluorescence correlation spectroscopy (FCS), a method that offers single-molecule resolution. The population of the different species is found to depend on multiple factors, including the protein source, the presence of CDL and urea, and their concentrations. The complex interplay between the conformational distribution and oligomerization plays a crucial role in the variation of the pre-apoptotic regulation of cyt c observed from different sources. Finally, computational studies reveal that the variation in the charge distribution at the surface and the charge-reversal sites may be the key determinant of the conformational stability of cyt c. PMID:25873393

  6. Representations and uses of light distribution functions

    NASA Astrophysics Data System (ADS)

    Lalonde, Paul Albert

    1998-11-01

    At their lowest level, all rendering algorithms depend on models of local illumination to define the interplay of light with the surfaces being rendered. These models depend both on the representations of light scattering at a surface due to reflection and, to an equal extent, on the representation of light sources and light fields. Emission and reflection have in common that they describe how light leaves a surface as a function of direction. Reflection also depends on an incident light direction, and emission can depend on the position on the light source. We call the functions representing emission and reflection light distribution functions (LDFs). There are some difficulties in using measured light distribution functions. The data sets are very large: the size of the data grows with the fourth power of the sampling resolution. For example, a bidirectional reflectance distribution function (BRDF) sampled at five degrees angular resolution, which is arguably insufficient to capture highlights and other high-frequency effects in the reflection, can easily require one and a half million samples. Once acquired, these data require some form of interpolation to use. Any compression method used must be efficient, both in space and in the time required to evaluate the function at a point or over a range of points. This dissertation examines a wavelet representation of light distribution functions that addresses these issues. A data structure is presented that allows efficient reconstruction of LDFs for a given set of parameters, making the wavelet representation feasible for rendering tasks. Texture mapping methods that take advantage of our LDF representations are examined, as well as techniques for filtering LDFs, and methods for using wavelet-compressed bidirectional reflectance distribution functions (BRDFs) and light sources with Monte Carlo path tracing algorithms.
The wavelet representation effectively compresses BRDF and emission data while inducing only a small error in the reconstructed signal. The representation can be used to evaluate efficiently some integrals that appear in shading computations, which allows fast, accurate computation of local shading. The representation can be used to represent light fields and is used to reconstruct views of environments interactively from a precomputed set of views. The representation of the BRDF also allows the efficient generation of reflected directions for Monte Carlo ray tracing applications. The method can be integrated into many different global illumination algorithms, including ray tracers and wavelet radiosity systems.
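
    The simplest instance of the wavelet machinery described above is a 1-D orthonormal Haar transform with coefficient thresholding; the "reflectance lobe" signal and the threshold value are illustrative, and the dissertation's 4-D LDF representation is far more elaborate.

```python
import math

def haar_forward(v):
    """Full 1-D orthonormal Haar transform (length a power of two):
    repeatedly replace pairs by scaled sums (smooth part, stored first)
    and differences (details)."""
    out = list(v)
    n = len(out)
    s = 0.5 ** 0.5
    while n > 1:
        half = n // 2
        tmp = ([(out[2 * i] + out[2 * i + 1]) * s for i in range(half)]
               + [(out[2 * i] - out[2 * i + 1]) * s for i in range(half)])
        out[:n] = tmp
        n = half
    return out

def haar_inverse(v):
    """Inverse of haar_forward, undoing the levels coarsest-first."""
    out = list(v)
    n = 2
    s = 0.5 ** 0.5
    while n <= len(out):
        half = n // 2
        tmp = [0.0] * n
        for i in range(half):
            tmp[2 * i] = (out[i] + out[half + i]) * s
            tmp[2 * i + 1] = (out[i] - out[half + i]) * s
        out[:n] = tmp
        n *= 2
    return out

# Compress a smooth "reflectance lobe" by zeroing small coefficients
signal = [math.cos(math.pi * k / 16) ** 4 for k in range(16)]
coeffs = haar_forward(signal)
kept = [c if abs(c) > 0.05 else 0.0 for c in coeffs]
approx = haar_inverse(kept)
```

Because the basis is orthonormal, the reconstruction error is bounded by the energy of the discarded coefficients, which is what makes thresholded wavelet storage attractive for large LDF tables.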

  7. Using a pseudo-dynamic source inversion approach to improve earthquake source imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Song, S. G.; Dalguer, L. A.; Clinton, J. F.

    2014-12-01

    Imaging a high-resolution spatio-temporal slip distribution of an earthquake rupture is a core research goal in seismology. In general we expect to obtain a higher quality source image by improving the observational input data (e.g. using more, higher-quality near-source stations). However, recent studies show that increasing the surface station density alone does not significantly improve source inversion results (Custodio et al. 2005; Zhang et al. 2014). We introduce correlation structures between the kinematic source parameters slip, rupture velocity, and peak slip velocity (Song et al. 2009; Song and Dalguer 2013) in the non-linear source inversion. The correlation structures are physical constraints derived from rupture dynamics that effectively regularize the model space and may improve source imaging. We name this approach pseudo-dynamic source inversion. We investigate the effectiveness of this pseudo-dynamic source inversion method by inverting low-frequency velocity waveforms from a synthetic dynamic rupture model of a buried vertical strike-slip event (Mw 6.5) in a homogeneous half space. In the inversion, we use a genetic algorithm in a Bayesian framework (Monelli et al. 2008), and a dynamically consistent regularized Yoffe function (Tinti et al. 2005) is used for a single-window slip velocity function. We search for the local rupture velocity directly in the inversion, and calculate the rupture time using a ray-tracing technique. We implement both auto- and cross-correlation of slip, rupture velocity, and peak slip velocity in the prior distribution. Our results suggest that the kinematic source model estimates capture the major features of the target dynamic model. The estimated rupture velocity closely matches the target distribution from the dynamic rupture model, and the derived rupture time is smoother than the one searched for directly.
By implementing both auto- and cross-correlation of kinematic source parameters, in comparison to traditional smoothing constraints, we are in effect regularizing the model space in a more physics-based manner without losing resolution of the source image. Further investigation is needed to tune the related parameters of pseudo-dynamic source inversion and the relative weighting between the prior and the likelihood function in the Bayesian inversion.
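
    The correlated prior at the heart of the pseudo-dynamic approach can be sketched by colouring independent Gaussians with a Cholesky factor of an assumed correlation matrix for (slip, rupture velocity, peak slip velocity); the correlation values below are hypothetical, not those of Song et al.

```python
import math
import random

# Assumed cross-correlation among (slip, rupture velocity, peak slip
# velocity); illustrative values only.
rho = [[1.0, 0.5, 0.6],
       [0.5, 1.0, 0.4],
       [0.6, 0.4, 1.0]]

def cholesky(a):
    """Lower-triangular L with L @ L.T == a (a symmetric positive definite)."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(a[i][i] - s) if i == j else (a[i][j] - s) / L[j][j]
    return L

L = cholesky(rho)
random.seed(0)
draws = []
for _ in range(5000):
    z = [random.gauss(0, 1) for _ in range(3)]       # independent Gaussians
    draws.append([sum(L[i][k] * z[k] for k in range(3)) for i in range(3)])
```

In an actual inversion these standardized draws would be mapped onto physical parameter ranges; the point of the sketch is that the sampled triples carry the prescribed cross-correlation by construction.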

  8. Searching and exploitation of distributed geospatial data sources via the Naval Research Lab's Geospatial Information Database (GIDB) Portal System

    NASA Astrophysics Data System (ADS)

    McCreedy, Frank P.; Sample, John T.; Ladd, William P.; Thomas, Michael L.; Shaw, Kevin B.

    2005-05-01

    The Naval Research Laboratory's Geospatial Information Database (GIDB™) Portal System has been extended to include an extensive geospatial search functionality. The GIDB Portal System interconnects over 600 distributed geospatial data sources via the Internet with a thick client, a thin client and a PDA client. As the GIDB Portal System has rapidly grown over the last two years (adding hundreds of geospatial sources), the obvious requirement has arisen to more effectively mine the interconnected sources in near real-time. How the GIDB Search addresses this issue is the prime focus of this paper.

  9. Multi-Instance Metric Transfer Learning for Genome-Wide Protein Function Prediction.

    PubMed

    Xu, Yonghui; Min, Huaqing; Wu, Qingyao; Song, Hengjie; Ye, Bicui

    2017-02-06

    Multi-Instance (MI) learning has been proven effective for genome-wide protein function prediction problems in which each training example is associated with multiple instances. Many studies in this literature have attempted to find an appropriate Multi-Instance Learning (MIL) method for genome-wide protein function prediction under the usual assumption that the underlying distribution of the testing data (target domain, i.e., TD) is the same as that of the training data (source domain, i.e., SD). However, this assumption may be violated in real practice. To tackle this problem, we propose a Multi-Instance Metric Transfer Learning (MIMTL) approach for genome-wide protein function prediction. In MIMTL, we first transfer the source domain distribution to the target domain distribution by utilizing the bag weights. Then, we construct a distance metric learning method with the reweighted bags. Finally, we develop an alternative optimization scheme for MIMTL. Comprehensive experimental evidence on seven real-world organisms verifies the effectiveness and efficiency of the proposed MIMTL approach over several state-of-the-art methods.

  10. Properties of Noise Cross-Correlation Functions Obtained from a Distributed Acoustic Sensing Array at Garner Valley, California

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeng, Xiangfang; Lancelle, Chelsea; Thurber, Clifford

    A field test conducted at Garner Valley, California, on 11 and 12 September 2013 using distributed acoustic sensing (DAS) to sense ground vibrations provided a continuous overnight record of ambient noise. The energy of the ambient noise was concentrated between 5 and 25 Hz, which falls into the typical traffic-noise frequency band. A standard procedure (Bensen et al., 2007) was adopted to calculate noise cross-correlation functions (NCFs) for 1-min intervals. The 1-min-long NCFs were stacked using the time-frequency domain phase-weighted-stacking method, which significantly improves signal quality. The obtained NCFs were asymmetrical, a result of the nonuniformly distributed noise sources. A precursor appeared on NCFs along one segment, which was traced to a strong localized noise source or a scatterer at a nearby road intersection. The NCF for the radial component of two surface accelerometers along a DAS profile gave similar results to those from DAS channels. We calculated the phase velocity dispersion from DAS NCFs using the multichannel analysis of surface waves technique, and the result agrees with active-source results. We conclude that ambient noise sources and the high spatial sampling of DAS can provide the same subsurface information as traditional active-source methods.
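
    The elementary NCF operation, window-wise cross-correlation followed by stacking, can be sketched with synthetic traces (a linear stack here rather than the phase-weighted stack used above; the 5-sample delay stands in for the inter-channel travel time).

```python
import random

def cross_correlate(a, b, maxlag):
    """Time-domain cross-correlation of two equal-length traces for lags
    -maxlag..maxlag: sum of a[i] * b[i + lag] over the valid overlap."""
    n = len(a)
    return [sum(a[i] * b[i + lag] for i in range(max(0, -lag), min(n, n - lag)))
            for lag in range(-maxlag, maxlag + 1)]

# Stack window-by-window correlations of a common noise field recorded
# at two "channels" with a fixed 5-sample delay between them.
random.seed(1)
nwin, nsamp, maxlag = 20, 200, 30
stack = [0.0] * (2 * maxlag + 1)
for _ in range(nwin):
    noise = [random.gauss(0, 1) for _ in range(nsamp + 5)]
    a = noise[:nsamp]            # channel 1
    b = noise[5:5 + nsamp]       # channel 2: same noise, delayed 5 samples
    ncf = cross_correlate(a, b, maxlag)
    stack = [s + x for s, x in zip(stack, ncf)]
# Lag of the stacked correlation peak; recovers the inter-channel delay
peak_lag = max(range(len(stack)), key=lambda i: stack[i]) - maxlag
```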

  11. Properties of Noise Cross-Correlation Functions Obtained from a Distributed Acoustic Sensing Array at Garner Valley, California

    DOE PAGES

    Zeng, Xiangfang; Lancelle, Chelsea; Thurber, Clifford; ...

    2017-01-31

    A field test conducted at Garner Valley, California, on 11 and 12 September 2013 using distributed acoustic sensing (DAS) to sense ground vibrations provided a continuous overnight record of ambient noise. The energy of the ambient noise was concentrated between 5 and 25 Hz, which falls into the typical traffic-noise frequency band. A standard procedure (Bensen et al., 2007) was adopted to calculate noise cross-correlation functions (NCFs) for 1-min intervals. The 1-min-long NCFs were stacked using the time-frequency domain phase-weighted-stacking method, which significantly improves signal quality. The obtained NCFs were asymmetrical, a result of the nonuniformly distributed noise sources. A precursor appeared on NCFs along one segment, which was traced to a strong localized noise source or a scatterer at a nearby road intersection. The NCF for the radial component of two surface accelerometers along a DAS profile gave similar results to those from DAS channels. We calculated the phase velocity dispersion from DAS NCFs using the multichannel analysis of surface waves technique, and the result agrees with active-source results. We conclude that ambient noise sources and the high spatial sampling of DAS can provide the same subsurface information as traditional active-source methods.

  12. Fully probabilistic earthquake source inversion on teleseismic scales

    NASA Astrophysics Data System (ADS)

    Stähler, Simon; Sigloch, Karin

    2017-04-01

    Seismic source inversion is a non-linear problem in seismology in which not just the earthquake parameters but also estimates of their uncertainties are of great practical importance. We have developed a method of fully Bayesian inference for source parameters, based on measurements of waveform cross-correlation between broadband, teleseismic body-wave observations and their modelled counterparts. This approach yields not only depth and moment tensor estimates but also source time functions. These unknowns are parameterised efficiently by harnessing as prior knowledge solutions from a large number of non-Bayesian inversions. The source time function is expressed as a weighted sum of a small number of empirical orthogonal functions, which were derived from a catalogue of >1000 source time functions (STFs) by principal component analysis. We use a likelihood model based on the cross-correlation misfit between observed and predicted waveforms. The resulting ensemble of solutions provides full uncertainty and covariance information for the source parameters, and permits propagating these source uncertainties into travel time estimates used for seismic tomography. The computational effort is such that routine, global estimation of earthquake mechanisms and source time functions from teleseismic broadband waveforms is feasible. A prerequisite for Bayesian inference is the proper characterisation of the noise afflicting the measurements. We show that, for realistic broadband body-wave seismograms, the systematic error due to an incomplete physical model affects waveform misfits more strongly than random, ambient background noise. In this situation, the waveform cross-correlation coefficient CC, or rather its decorrelation D = 1 - CC, performs more robustly as a misfit criterion than the more commonly used ℓp norms, which measure misfit sample by sample as distances between individual time samples. 
From a set of over 900 user-supervised, deterministic earthquake source solutions treated as a quality-controlled reference, we derive the noise distribution on signal decorrelation D of the broadband seismogram fits between observed and modelled waveforms. The noise on D is found to approximately follow a log-normal distribution, a fortunate fact that readily accommodates the formulation of an empirical likelihood function for D for our multivariate problem. The first and second moments of this multivariate distribution are shown to depend mostly on the signal-to-noise ratio (SNR) of the CC measurements and on the back-azimuthal distances of seismic stations. References: Stähler, S. C. and Sigloch, K.: Fully probabilistic seismic source inversion - Part 1: Efficient parameterisation, Solid Earth, 5, 1055-1069, doi:10.5194/se-5-1055-2014, 2014. Stähler, S. C. and Sigloch, K.: Fully probabilistic seismic source inversion - Part 2: Modelling errors and station covariances, Solid Earth, 7, 1521-1536, doi:10.5194/se-7-1521-2016, 2016.
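
The decorrelation misfit D = 1 - CC and a log-normal noise model on it can be illustrated in a few lines. A hedged sketch with synthetic waveforms; the mu and sigma values below are placeholders, not the moments derived in the papers:

```python
import numpy as np

def decorrelation(obs, syn):
    """D = 1 - CC, the misfit used in place of sample-wise lp norms."""
    cc = np.dot(obs, syn) / (np.linalg.norm(obs) * np.linalg.norm(syn))
    return 1.0 - cc

def lognormal_likelihood(d, mu, sigma):
    """Log-normal density on the decorrelation D (empirical noise model)."""
    return np.exp(-(np.log(d) - mu) ** 2 / (2 * sigma ** 2)) / (
        d * sigma * np.sqrt(2 * np.pi))

t = np.linspace(0, 1, 200)
obs = np.sin(2 * np.pi * 5 * t)
syn = np.sin(2 * np.pi * 5 * t + 0.2)   # slightly mis-modelled phase
d = decorrelation(obs, syn)             # small but nonzero
like = lognormal_likelihood(d, mu=np.log(0.05), sigma=0.5)
```

In the actual method the (mu, sigma) of the multivariate D distribution are calibrated per station from the quality-controlled reference solutions, as functions of SNR and station geometry.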

  13. An adaptive grid scheme using the boundary element method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Munipalli, R.; Anderson, D.A.

    1996-09-01

    A technique to solve the Poisson grid generation equations by Green's function related methods has been proposed, with the source terms being purely position dependent. The use of distributed singularities in the flow domain coupled with the boundary element method (BEM) formulation is presented in this paper as a natural extension of the Green's function method. This scheme greatly simplifies the adaption process. The BEM reduces the dimensionality of the given problem by one. Internal grid-point placement can be achieved for a given boundary distribution by adding continuous and discrete source terms in the BEM formulation. A distribution of vortex doublets is suggested as a means of controlling grid-point placement and grid-line orientation. Examples for sample adaption problems are presented and discussed. 15 refs., 20 figs.

  14. The Chandra Source Catalog: X-ray Aperture Photometry

    NASA Astrophysics Data System (ADS)

    Kashyap, Vinay; Primini, F. A.; Glotfelty, K. J.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Davis, J. E.; Doe, S. M.; Evans, I. N.; Evans, J. D.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G., II; Grier, J. D.; Hain, R.; Hall, D. M.; Harbo, P. N.; He, X.; Houck, J. C.; Karovska, M.; Lauer, J.; McCollough, M. L.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Nowak, M. A.; Plummer, D. A.; Refsdal, B. L.; Rots, A. H.; Siemiginowska, A. L.; Sundheim, B. A.; Tibbetts, M. S.; van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-09-01

    The Chandra Source Catalog (CSC) represents a reanalysis of the entire set of ACIS and HRC imaging observations over the 9-year Chandra mission. We describe here the method by which fluxes are measured for detected sources. Source detection is carried out on a uniform basis, using the CIAO tool wavdetect. Source fluxes are estimated post facto using a Bayesian method that accounts for background, spatial resolution effects, and contamination from nearby sources. We use gamma-function prior distributions, which can be either non-informative or, when previous observations of the same source exist, strongly informative. The current implementation, however, is limited to non-informative priors. The resulting posterior probability density functions allow us to report the flux and a robust credible range on it.
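
For a pure Poisson counting problem the gamma prior is conjugate, so the posterior is again a gamma distribution. A minimal sketch of this core idea, ignoring the background, PSF, and contamination terms the catalog method actually models:

```python
from scipy import stats

def flux_posterior(counts, exposure, alpha=1.0, beta=0.0):
    """Gamma posterior for a Poisson source rate.

    alpha, beta encode the prior: (1, 0) is effectively non-informative,
    while previous observations of the source would supply informative
    values. A hypothetical simplification of the catalog's method.
    """
    return stats.gamma(a=alpha + counts, scale=1.0 / (beta + exposure))

post = flux_posterior(counts=42, exposure=10.0)  # e.g. 42 counts in 10 ks
flux = post.mean()                               # posterior mean rate
lo, hi = post.ppf([0.16, 0.84])                  # 68% credible range
```

The reported flux and "robust credible range" then come straight from the posterior's moments and quantiles.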

  15. Distribution of trace metals in the vicinity of a wastewater treatment plant on the Potomac River, Washington, DC, USA

    NASA Astrophysics Data System (ADS)

    Smith, J. P.; Muller, A. C.

    2013-05-01

    Predicting the fate and distribution of anthropogenic-sourced trace metals in riverine and estuarine systems is challenging due to multiple and varying source functions and dynamic physicochemical conditions. Between July 2011 and November 2012, sediment and water column samples were collected from over 20 sites in the tidal-fresh Potomac River estuary, Washington, DC near the outfall of the Blue Plains Advanced Wastewater Treatment Plant (BPWTP) for measurement of select trace metals. Field observations of water column parameters (conductivity, temperature, pH, turbidity) were also made at each sampling site. Trace metal concentrations were normalized to the "background" composition of the river determined from control sites in order to investigate the distribution of BPWTP-sourced metals in local Potomac River receiving waters. Temporal differences in the observed distribution of trace metals were attributed to changes in the relative contribution of metals from different sources (wastewater, riverine, other) coupled with differences in the physicochemical conditions of the water column. Results show that normalizing near-source concentrations to the background composition of the water body and also to key environmental parameters can aid in predicting the fate and distribution of anthropogenic-sourced trace metals in dynamic riverine and estuarine systems like the tidal-fresh Potomac River.
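
The background-normalization step reduces to a simple ratio against the control-site composition. A toy sketch; all concentrations below are invented for illustration only:

```python
# hypothetical dissolved concentrations (ug/L): near the outfall vs. the
# mean of upstream control sites ("background")
near_outfall = {"Cu": 3.2, "Zn": 18.0, "Pb": 0.9}
background = {"Cu": 1.1, "Zn": 6.0, "Pb": 0.8}

# ratio >> 1 flags a likely near-source (e.g. wastewater) contribution;
# ratio ~ 1 is indistinguishable from the riverine background
normalized = {metal: near_outfall[metal] / background[metal]
              for metal in near_outfall}
```

Here Zn (ratio 3.0) would stand out as outfall-influenced while Pb (ratio ~1.1) would not.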

  16. Code CUGEL: A code to unfold Ge(Li) spectrometer polyenergetic gamma photon experimental distributions

    NASA Technical Reports Server (NTRS)

    Steyn, J. J.; Born, U.

    1970-01-01

    A FORTRAN code was developed for the Univac 1108 digital computer to unfold polyenergetic gamma photon experimental distributions from lithium-drifted germanium semiconductor spectrometers. It was designed to analyze the combined continuous and monoenergetic gamma radiation field of radioisotope volumetric sources. The code generates the detector system response matrix function and applies it to monoenergetic spectral components discretely and to the continuum iteratively. It corrects for system drift, source decay, background, and detection efficiency. Results are presented in digital form for differential and integrated photon number and energy distributions, and for exposure dose.
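
The response-matrix unfolding idea can be sketched with a generic iterative (Richardson-Lucy style) unfolder; the 3-bin response below is invented and merely stands in for the code's Ge(Li) response matrix:

```python
import numpy as np

def unfold(measured, response, n_iter=1000):
    """Iteratively unfold a measured spectrum given a detector response
    matrix R, where measured = R @ true (multiplicative RL-style update)."""
    est = np.full(response.shape[1], measured.sum() / response.shape[1])
    for _ in range(n_iter):
        folded = response @ est
        ratio = np.where(folded > 0, measured / folded, 0.0)
        est = est * (response.T @ ratio) / response.sum(axis=0)
    return est

# toy response: each column is a true energy bin; off-diagonal terms model
# downscatter of higher-energy photons into lower measured bins
R = np.array([[1.0, 0.3, 0.1],
              [0.0, 0.7, 0.2],
              [0.0, 0.0, 0.7]])
true = np.array([100.0, 50.0, 200.0])
measured = R @ true
est = unfold(measured, R)   # recovers approximately the true bin contents
```

The multiplicative update keeps the estimate non-negative, which is why variants of this scheme are standard in spectrum unfolding.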

  17. Tomographic gamma ray apparatus and method

    DOEpatents

    Anger, Hal O.

    1976-09-07

    This invention provides a radiation detecting apparatus for imaging the distribution of radioactive substances in a three-dimensional subject such as a medical patient. Radiating substances introduced into the subject are viewed by a radiation image detector that provides an image of the distribution of radiating sources within its field of view. By viewing the area of interest from two or more positions, as by scanning the detector over the area, the radiating sources seen by the detector have relative positions that are a function of their depth in the subject. The images seen by the detector are transformed into first output signals which are combined in a readout device with second output signals that indicate the position of the detector relative to the subject. The readout device adjusts the signals and provides multiple radiation distribution readouts of the subject, each readout comprising a sharply resolved picture that shows the distribution and intensity of radiating sources lying in a selected plane in the subject, while sources lying on other planes are blurred in that particular readout.

  18. A class of ejecta transport test problems

    NASA Astrophysics Data System (ADS)

    Oro, David M.; Hammerberg, J. E.; Buttler, William T.; Mariam, Fesseha G.; Morris, Christopher L.; Rousculp, Chris; Stone, Joseph B.

    2012-03-01

    Hydro code implementations of ejecta dynamics at shocked interfaces presume a source distribution function of particulate masses and velocities, f0(m,u;t). Some properties of this source distribution function have been determined from Taylor- and supported-shockwave experiments. Such experiments measure the mass moment of f0 under vacuum conditions assuming weak particle-particle interactions and, usually, fully inelastic scattering (capture) of ejecta particles from piezoelectric diagnostic probes. Recently, planar-ejection experiments releasing W particles into vacuum, Ar, and Xe gas atmospheres have been carried out to provide benchmark transport data for transport model development and validation. We present those experimental results and compare them with modeled transport of the W-ejecta particles in Ar and Xe.

  19. Analytic solution of the Spencer-Lewis angular-spatial moments equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Filippone, W.L.

    A closed-form solution for the angular-spatial moments of the Spencer-Lewis equation is presented that is valid for infinite homogeneous media. From the moments, the electron density distribution as a function of position and path length (energy) is reconstructed for several sample problems involving plane isotropic sources of electrons in aluminium. The results are in excellent agreement with those determined numerically using the streaming ray method. The primary use of the closed-form solution will most likely be to generate accurate electron transport benchmark solutions. In principle, the electron density as a function of space, path length, and direction can be determined for planar sources of arbitrary angular distribution.

  20. Inner Magnetospheric Superthermal Electron Transport: Photoelectron and Plasma Sheet Electron Sources

    NASA Technical Reports Server (NTRS)

    Khazanov, G. V.; Liemohn, M. W.; Kozyra, J. U.; Moore, T. E.

    1998-01-01

    Two time-dependent kinetic models of superthermal electron transport are combined to conduct global calculations of the nonthermal electron distribution function throughout the inner magnetosphere. It is shown that the energy range of validity for this combined model extends down to the superthermal-thermal intersection at a few eV, allowing for the calculation of the entire distribution function and thus an accurate heating rate to the thermal plasma. Because of the linearity of the formulas, the source terms are separated to calculate the distributions from the various populations, namely photoelectrons (PEs) and plasma sheet electrons (PSEs). These distributions are discussed in detail, examining the processes responsible for their formation in the various regions of the inner magnetosphere. It is shown that convection, corotation, and Coulomb collisions are the dominant processes in the formation of the PE distribution function and that PSEs are dominated by the interplay between the drift terms. Of note is that the PEs propagate around the nightside in a narrow channel at the edge of the plasmasphere as Coulomb collisions reduce the fluxes inside of this and convection compresses the flux tubes inward. These distributions are then recombined to show the development of the total superthermal electron distribution function in the inner magnetosphere and their influence on the thermal plasma. PEs usually dominate the dayside heating, with integral energy fluxes to the ionosphere reaching 10(exp 10) eV/sq cm/s in the plasmasphere, while heating from the PSEs typically does not exceed 10(exp 8) eV/sq cm/s. On the nightside, the inner plasmasphere is usually unheated by superthermal electrons. A feature of these combined spectra is that the distribution often has upward slopes with energy, particularly at the crossover from PE to PSE dominance, indicating that instabilities are possible.

  1. Harvesting implementation for the GI-cat distributed catalog

    NASA Astrophysics Data System (ADS)

    Boldrini, Enrico; Papeschi, Fabrizio; Bigagli, Lorenzo; Mazzetti, Paolo

    2010-05-01

    The GI-cat framework implements a distributed catalog service supporting different international standards and interoperability arrangements in use by the geoscientific community. The distribution functionality in conjunction with the mediation functionality allows seamless querying of remote heterogeneous data sources, including OGC Web Services (e.g. OGC CSW, WCS, WFS and WMS), community standards such as UNIDATA THREDDS/OPeNDAP, SeaDataNet CDI (Common Data Index), GBIF (Global Biodiversity Information Facility) services and OpenSearch engines. In the GI-cat modular architecture a distributor component carries out the distribution functionality by query delegation to the mediator components (one for each different data source). Each of these mediator components is able to query a specific data source and convert back the results by mapping the foreign data model to the GI-cat internal one, based on ISO 19139. In order to cope with deployment scenarios in which local data is expected, a harvesting approach has been tested. The new strategy comes in addition to the consolidated distributed approach, allowing the user to switch between a remote and a local search at will for each federated resource; this extends GI-cat configuration possibilities. The harvesting strategy is implemented in GI-cat through a local cache component at its core, realized as a native XML database based on eXist. The different heterogeneous sources are queried for the bulk of available data; this data is then injected into the cache component after being converted to the GI-cat data model. The query and conversion steps are performed by the mediator components that are part of the GI-cat framework. Afterwards, each new query can be exercised against the local data stored in the cache component. 
Considering the advantages and shortcomings of the harvesting and query distribution approaches, it emerges that user-driven tuning is required to get the best of both. This is often related to the specific user scenarios to be implemented. GI-cat proved to be a flexible framework to address user needs. The GI-cat configurator tool was updated to make such tuning possible: each data source can be configured to enable either the harvesting or the query distribution approach; in the former case an appropriate harvesting interval can be set.

  2. Empirical Green's functions from small earthquakes: A waveform study of locally recorded aftershocks of the 1971 San Fernando earthquake

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hutchings, L.; Wu, F.

    1990-02-10

    Seismograms from 52 aftershocks of the 1971 San Fernando earthquake recorded at 25 stations distributed across the San Fernando Valley are examined to identify empirical Green's functions, and characterize the dependence of their waveforms on moment, focal mechanism, source and recording site spatial variations, recording site geology, and recorded frequency band. Recording distances ranged from 3.0 to 33.0 km, hypocentral separations ranged from 0.22 to 28.4 km, and recording site separations ranged from 0.185 to 24.2 km. The recording site geologies are diorite gneiss, marine and nonmarine sediments, and alluvium of varying thicknesses. Waveforms of events with moment below about 1.5 × 10^21 dyn cm are independent of the source-time function and are termed empirical Green's functions. Waveforms recorded at a particular station from events located within 1.0 to 3.0 km of each other, depending upon site geology, with very similar focal mechanism solutions are nearly identical for frequencies up to 10 Hz. There is no correlation to waveforms between recording sites at least 1.2 km apart, and waveforms are clearly distinctive for two sites 0.185 km apart. The geologic conditions of the recording site dominate the character of empirical Green's functions. Even for source separations of up to 20.0 km, the empirical Green's functions at a particular site are consistent in frequency content, amplification, and energy distribution. Therefore, it is shown that empirical Green's functions can be used to obtain site response functions. The observations of empirical Green's functions are used as a basis for developing the theory for using empirical Green's functions in deconvolution for source pulses and synthesis of seismograms of larger earthquakes.
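
The final step, deconvolving a larger event's record by an empirical Green's function to recover a relative source pulse, is commonly done by regularized spectral division. A minimal water-level sketch on synthetic data, not the authors' exact procedure:

```python
import numpy as np

def waterlevel_deconvolve(record, egf, level=0.01):
    """Spectral division of a large-event record by an empirical Green's
    function, with a water-level floor on the EGF power spectrum."""
    n = len(record)
    R = np.fft.rfft(record, 2 * n)
    G = np.fft.rfft(egf, 2 * n)
    power = np.abs(G) ** 2
    floor = level * power.max()          # regularizes spectral holes
    stf = np.fft.irfft(R * np.conj(G) / np.maximum(power, floor), 2 * n)
    return stf[:n]

rng = np.random.default_rng(1)
egf = rng.standard_normal(256)                   # toy small-event waveform
t = np.arange(256)
pulse = np.exp(-0.5 * ((t - 20) / 3.0) ** 2)     # "true" relative source pulse
record = np.convolve(egf, pulse)                 # large event = EGF * pulse
stf = waterlevel_deconvolve(record, egf)
peak = int(np.argmax(stf))                       # should sit near sample 20
```

The water level keeps the division stable at frequencies where the EGF spectrum is weak, at the cost of slightly smoothing the recovered pulse.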

  3. Azimuthal Dependence of the Ground Motion Variability from Scenario Modeling of the 2014 Mw6.0 South Napa, California, Earthquake Using an Advanced Kinematic Source Model

    NASA Astrophysics Data System (ADS)

    Gallovič, F.

    2017-09-01

    Strong ground motion simulations require a physically plausible earthquake source model. Here, I present the application of such a kinematic model introduced originally by Ruiz et al. (Geophys J Int 186:226-244, 2011). The model is constructed to inherently provide synthetics with the desired omega-squared spectral decay in the full frequency range. The source is composed of randomly distributed overlapping subsources with a fractal number-size distribution. The position of the subsources can be constrained by prior knowledge of major asperities (stemming, e.g., from slip inversions), or can be completely random. From the earthquake physics point of view, the model includes a positive correlation between slip and rise time, as found in dynamic source simulations. Rupture velocity and rise time follow the local S-wave velocity profile, so that the rupture slows down and rise times increase close to the surface, avoiding unrealistically strong ground motions. Rupture velocity can also have random variations, which result in an irregular rupture front while satisfying the causality principle. This advanced kinematic broadband source model is freely available and can be easily incorporated into any numerical wave propagation code, as the source is described by spatially distributed slip rate functions, not requiring any stochastic Green's functions. The source model has been previously validated against the observed data from the very shallow unilateral 2014 Mw6 South Napa, California, earthquake; the model reproduces the observed data well, including the near-fault directivity (Seism Res Lett 87:2-14, 2016). The performance of the source model is shown here on scenario simulations for the same event. In particular, synthetics are compared with existing ground motion prediction equations (GMPEs), emphasizing the azimuthal dependence of the between-event ground motion variability. 
I propose a simple model reproducing the azimuthal variations of the between-event ground motion variability, providing an insight into possible refinement of GMPEs' functional forms.
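
A fractal number-size distribution of subsources, as used in such composite models, can be sampled directly by inverting the truncated power-law CDF. A generic sketch; the exponent and bounds are illustrative, not Ruiz et al.'s values:

```python
import numpy as np

def sample_subsource_radii(n, r_min, r_max, d=2.0, rng=None):
    """Draw subsource radii from a fractal number-size distribution
    N(>r) ~ r^-d, truncated to [r_min, r_max], by inverse-CDF sampling."""
    rng = rng or np.random.default_rng()
    u = rng.random(n)
    a, b = r_min ** -d, r_max ** -d
    # CDF F(r) = (a - r^-d) / (a - b); invert for r
    return (a - u * (a - b)) ** (-1.0 / d)

radii = sample_subsource_radii(1000, r_min=0.5, r_max=10.0,
                               rng=np.random.default_rng(2))
```

Small subsources dominate in number while the few large ones carry most of the moment, which is what produces the omega-squared spectral decay of the composite source.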

  4. Fully probabilistic seismic source inversion - Part 2: Modelling errors and station covariances

    NASA Astrophysics Data System (ADS)

    Stähler, Simon C.; Sigloch, Karin

    2016-11-01

    Seismic source inversion, a central task in seismology, is concerned with the estimation of earthquake source parameters and their uncertainties. Estimating uncertainties is particularly challenging because source inversion is a non-linear problem. In a companion paper, Stähler and Sigloch (2014) developed a method of fully Bayesian inference for source parameters, based on measurements of waveform cross-correlation between broadband, teleseismic body-wave observations and their modelled counterparts. This approach yields not only depth and moment tensor estimates but also source time functions. A prerequisite for Bayesian inference is the proper characterisation of the noise afflicting the measurements, a problem we address here. We show that, for realistic broadband body-wave seismograms, the systematic error due to an incomplete physical model affects waveform misfits more strongly than random, ambient background noise. In this situation, the waveform cross-correlation coefficient CC, or rather its decorrelation D = 1 - CC, performs more robustly as a misfit criterion than the more commonly used ℓp norms, which measure misfit sample by sample as distances between individual time samples. From a set of over 900 user-supervised, deterministic earthquake source solutions treated as a quality-controlled reference, we derive the noise distribution on signal decorrelation D = 1 - CC of the broadband seismogram fits between observed and modelled waveforms. The noise on D is found to approximately follow a log-normal distribution, a fortunate fact that readily accommodates the formulation of an empirical likelihood function for D for our multivariate problem. The first and second moments of this multivariate distribution are shown to depend mostly on the signal-to-noise ratio (SNR) of the CC measurements and on the back-azimuthal distances of seismic stations. 
By identifying and quantifying this likelihood function, we make D and thus waveform cross-correlation measurements usable for fully probabilistic sampling strategies, in source inversion and related applications such as seismic tomography.

  5. INVERTING CASCADE IMPACTOR DATA FOR SIZE-RESOLVED CHARACTERIZATION OF FINE PARTICULATE SOURCE EMISSIONS

    EPA Science Inventory

    Cascade impactors are particularly useful in determining the mass size distributions of particulate and individual chemical species. The impactor raw data must be inverted to reconstruct a continuous particle size distribution. An inversion method using a lognormal function for p...

  6. Statistics of intensity in adaptive-optics images and their usefulness for detection and photometry of exoplanets.

    PubMed

    Gladysz, Szymon; Yaitskova, Natalia; Christou, Julian C

    2010-11-01

    This paper is an introduction to the problem of modeling the probability density function of adaptive-optics speckle. We show that the modified Rician distribution cannot describe the statistics of light on axis. A dual solution is proposed: the modified Rician distribution for off-axis speckle and a gamma-based distribution for the core of the point spread function. From these two distributions we derive optimal statistical discriminators between real sources and quasi-static speckles. In the second part of the paper the morphological difference between the two probability density functions is used to constrain a one-dimensional, "blind," iterative deconvolution at the position of an exoplanet. Separation of the probability density functions of signal and speckle yields accurate differential photometry in our simulations of the SPHERE planet finder instrument.
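
The modified Rician density referred to above has a standard closed form, p(I) = (1/Is) exp(-(I+Ic)/Is) I0(2*sqrt(I*Ic)/Is), with Ic the coherent (diffraction) intensity and Is the speckle intensity. A numerically stable sketch:

```python
import numpy as np
from scipy.special import i0e

def modified_rician_pdf(I, Ic, Is):
    """Modified Rician PDF of speckle intensity; i0e(x) = exp(-x) * I0(x)
    keeps the exponentially growing Bessel factor under control."""
    I = np.asarray(I, dtype=float)
    x = 2.0 * np.sqrt(I * Ic) / Is
    return np.exp(x - (I + Ic) / Is) * i0e(x) / Is

grid = np.linspace(0.0, 50.0, 20001)
pdf = modified_rician_pdf(grid, Ic=2.0, Is=1.0)
area = pdf.sum() * (grid[1] - grid[0])   # crude normalization check, ~1
```

For Ic >> Is the density is strongly peaked near Ic (coherent core), while for Ic -> 0 it reduces to the exponential speckle statistics, which is the morphological difference the discriminators exploit.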

  7. Dust temperature distributions in star-forming condensations

    NASA Technical Reports Server (NTRS)

    Xie, Taoling; Goldsmith, Paul F.; Snell, Ronald L.; Zhou, Weimin

    1993-01-01

    The FIR spectra of the central IR condensations in the dense cores of molecular clouds AFGL 2591, B335, L1551, Mon R2, and Sgr B2 are reanalyzed here in terms of the distribution of dust mass as a function of temperature. FIR spectra of these objects can be characterized reasonably well by a given functional form. The general shapes of the dust temperature distributions of these objects are similar and closely resemble the theoretical computations of de Muizon and Rouan (1985) for a sample of 'hot centered' clouds with active star formation. Specifically, the model yields a 'cutoff' temperature below which essentially no dust is needed to interpret the dust emission spectra, and most of the dust mass is distributed in a broad temperature range of a few tens of degrees above the cutoff temperature. Mass, luminosity, average temperature, and column density are obtained, and it is found that the physical quantities differ considerably from source to source in a meaningful way.

  8. Optimal operation management of fuel cell/wind/photovoltaic power sources connected to distribution networks

    NASA Astrophysics Data System (ADS)

    Niknam, Taher; Kavousifard, Abdollah; Tabatabaei, Sajad; Aghaei, Jamshid

    2011-10-01

    In this paper a new multiobjective modified honey bee mating optimization (MHBMO) algorithm is presented to investigate the distribution feeder reconfiguration (DFR) problem considering renewable energy sources (RESs) (photovoltaics, fuel cell and wind energy) connected to the distribution network. The objective functions of the problem to be minimized are the electrical active power losses, the voltage deviations, the total electrical energy costs and the total emissions of RESs and substations. During the optimization process, the proposed algorithm finds a set of non-dominated (Pareto) optimal solutions which are stored in an external memory called repository. Since the objective functions investigated are not the same, a fuzzy clustering algorithm is utilized to handle the size of the repository in the specified limits. Moreover, a fuzzy-based decision maker is adopted to select the 'best' compromised solution among the non-dominated optimal solutions of multiobjective optimization problem. In order to see the feasibility and effectiveness of the proposed algorithm, two standard distribution test systems are used as case studies.
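
The non-dominated ("Pareto") filtering at the heart of the external repository can be sketched independently of the bee-mating heuristic. All objectives are minimized, matching the loss/deviation/cost/emission set; the candidate values below are invented:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (all objectives minimized)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(solutions):
    """Keep only the non-dominated solutions (the external repository)."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# hypothetical (active power losses, voltage deviation) pairs for
# candidate feeder configurations
cands = [(3.0, 0.10), (2.5, 0.12), (2.8, 0.08), (3.5, 0.15)]
front = pareto_front(cands)   # the two trade-off solutions survive
```

A fuzzy decision maker would then rank the surviving front members to pick the single "best" compromise.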

  9. Skin dose from radionuclide contamination on clothing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, D.C.; Hussein, E.M.A.; Yuen, P.S.

    1997-06-01

    Skin dose due to radionuclide contamination on clothing is calculated by Monte Carlo simulation of electron and photon radiation transport. Contamination due to a hot particle on selected geometries of a cotton garment is simulated. The effect of backscattering in the surrounding air is taken into account. For each combination of source-clothing geometry, the dose distribution function in the skin, including the dose at tissue depths of 7 mg cm^-2 and 1000 mg cm^-2, is calculated by simulating monoenergetic photon and electron sources. Skin dose due to contamination by a radionuclide is then determined by proper weighting of the monoenergetic dose distribution functions. The results are compared with the VARSKIN point-kernel code for some radionuclides, indicating that the latter code tends to underestimate the dose for gamma and high-energy beta sources while it overestimates skin dose for low-energy beta sources. 13 refs., 4 figs., 2 tabs.
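
The weighting step, composing a nuclide's dose from pre-computed monoenergetic dose values, is a plain spectrum-weighted sum. A toy sketch with invented numbers:

```python
def radionuclide_dose(energies_keV, yields, mono_dose):
    """Weight monoenergetic dose kernels by a nuclide's emission spectrum.

    mono_dose maps emission energy (keV) to the dose per emitted particle
    at the depth of interest; values here are hypothetical stand-ins for
    pre-computed Monte Carlo results.
    """
    return sum(y * mono_dose[e] for e, y in zip(energies_keV, yields))

# toy two-line emitter: per-decay yields and per-emission doses (made up)
mono = {100: 2.0e-10, 500: 1.2e-9}      # Gy per emitted particle
dose = radionuclide_dose([100, 500], [0.8, 0.2], mono)
```

In practice the full emission spectrum (betas as continuous distributions, gammas as lines) would be folded against energy-interpolated kernels rather than two discrete entries.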

  10. Oxide vapor distribution from a high-frequency sweep e-beam system

    NASA Astrophysics Data System (ADS)

    Chow, R.; Tassano, P. L.; Tsujimoto, N.

    1995-03-01

    Oxide vapor distributions have been determined as a function of the operating parameters of a high-frequency sweep e-beam source combined with a programmable sweep controller. We will show which parameters are significant, the parameters that yield the broadest oxide deposition distribution, and the procedure used to arrive at these conclusions. A design-of-experiments strategy was used with five operating parameters: evaporation rate, sweep speed, sweep pattern (pre-programmed), phase speed (azimuthal rotation of the pattern), and profile (dwell time as a function of radial position). A design was chosen that would show which of the parameters and parameter pairs have a statistically significant effect on the vapor distribution. Witness flats were placed symmetrically across a 25-inch-diameter platen. The stationary platen was centered 24 inches above the e-gun crucible. An oxide material was evaporated under 27 different conditions. Thickness measurements were made with a stylus profilometer. The information will enable users of high-frequency e-gun systems to optimally locate the source in a vacuum system and understand which parameters have a major effect on the vapor distribution.

  11. Analytic solution of field distribution and demagnetization function of ideal hollow cylindrical field source

    NASA Astrophysics Data System (ADS)

    Xu, Xiaonong; Lu, Dingwei; Xu, Xibin; Yu, Yang; Gu, Min

    2017-09-01

    The Halbach-type hollow cylindrical permanent magnet array (HCPMA) is a volume-compact and energy-conserving field source, which has attracted intense interest in many practical applications. Here, using the complex variable integration method based on the Biot-Savart Law (including current distributions inside the body and on the surfaces of the magnet), we derive analytical field solutions for an ideal multipole HCPMA in the entire space, including the interior of the magnet. The analytic field expression inside the array material is used to construct an analytic demagnetization function, with which we can explain the origin of demagnetization phenomena in the HCPMA by taking into account an ideal magnetic hysteresis loop with finite coercivity. These analytical field expressions and demagnetization functions provide deeper insight into the nature of such permanent magnet array systems and offer guidance in designing optimized array systems.

  12. E-Standards For Mass Properties Engineering

    NASA Technical Reports Server (NTRS)

    Cerro, Jeffrey A.

    2008-01-01

    A proposal is put forth to promote the concept of a voluntary consensus standard for mass properties engineering developed by the Society of Allied Weight Engineers (SAWE). This standard would be an e-standard, and would encompass data, data manipulation, and reporting functionality. The standard would be implemented via an open-source SAWE distribution site with full SAWE member body access. Engineering societies and global standards initiatives are progressing toward modern engineering standards which become functioning deliverable data sets. These data sets, if properly standardized, will integrate easily between supplier and customer, enabling technically precise mass properties data exchange. The concepts of object-oriented programming support all of these requirements, and the use of a Java-based open-source development initiative is proposed. Results are reported for activity sponsored by the NASA Langley Research Center Innovation Institute to scope out requirements for developing a mass properties engineering e-standard. An initial software distribution is proposed. Upon completion, an open-source application programming interface will be available to SAWE members for the development of more specific programming requirements that are tailored to company and project requirements. A fully functioning application programming interface will permit code extension via company proprietary techniques, as well as through continued open-source initiatives.

  13. Pickup Ion Distributions from Three Dimensional Neutral Exospheres

    NASA Technical Reports Server (NTRS)

    Hartle, R. E.; Sarantos, M.; Sittler, E. C., Jr.

    2011-01-01

    Pickup ions formed from ionized neutral exospheres in flowing plasmas have phase space distributions that reflect their sources' spatial distributions. Phase space distributions of the ions are derived from the Vlasov equation with a delta function source using three-dimensional neutral exospheres. The E×B drift produced by plasma motion picks up the ions, while the effects of magnetic field draping, mass loading, wave-particle scattering, and Coulomb collisions near a planetary body are ignored. Previously, one-dimensional exospheres were treated, resulting in closed-form pickup ion distributions that explicitly depend on the ratio rg/H, where rg is the ion gyroradius and H is the neutral scale height at the exobase. In general, the pickup ion distributions based on three-dimensional neutral exospheres cannot be written in closed form, but can be computed numerically. They continue to reflect their sources' spatial distributions in an implicit way. These ion distributions and their moments are applied to several bodies, including He(+) and Na(+) at the Moon, H2(+) and CH4(+) at Titan, and H(+) at Venus. The best places to use these distributions are upstream of the Moon's surface, the ionopause of Titan, and the bow shock of Venus.

  14. Diagnosing the Fine Structure of Electron Energy Within the ECRIT Ion Source

    NASA Astrophysics Data System (ADS)

    Jin, Yizhou; Yang, Juan; Tang, Mingjie; Luo, Litao; Feng, Bingbing

    2016-07-01

    The ion source of the electron cyclotron resonance ion thruster (ECRIT) extracts ions from its ECR plasma to generate thrust, and offers low gas consumption (2 sccm, standard-state cubic centimeters per minute) and high durability. Because primary electrons play an indispensable role in gas discharge, it is important to experimentally clarify the electron energy structure within the ion source of the ECRIT by analyzing the electron energy distribution function (EEDF) of the plasma inside the thruster. In this article the Langmuir probe method was used to diagnose the EEDF, from which the effective electron temperature, plasma density and the electron energy probability function (EEPF) were deduced. The experimental results show that the magnetic field influences the curves of the EEDF and EEPF and makes the effective plasma parameters nonuniform. The diagnosed electron temperature and density at the sample points increased from 4 eV/2×10^16 m^-3 to 10 eV/4×10^16 m^-3 with increasing distance from both the axis and the screen grid of the ion source. Electron temperature and density peaking near the wall coincided with the discharge process. However, a double-Maxwellian electron distribution was unexpectedly observed near the axis of the ion source, about 30 mm from the screen grid. Moreover, the double-Maxwellian distribution was more likely to emerge at high power and low gas flow rate. These phenomena are believed to relate to the arrangement of the gas inlets and to the magnetic field in the region where the double-Maxwellian distribution exists. The results of this research may enhance the understanding of the plasma generation process in ion sources of this type and help to improve their performance. Supported by the National Natural Science Foundation of China (No. 11475137).
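
    The reduction from probe characteristic to EEDF/EEPF is not detailed in the abstract; the customary chain (assumed here, not necessarily the authors' exact procedure) is the Druyvesteyn relation g(ε) ∝ √ε · d²I/dV², with the EEPF g_p(ε) = g(ε)/√ε, which for a Maxwellian plasma is log-linear in energy with slope −1/T_e. A minimal sketch of that last step, recovering an effective electron temperature from a synthetic Maxwellian EEPF (the 4 eV value is illustrative, matching the low end of the reported range):

```python
import math

def effective_te_from_eepf(energies_ev, eepf):
    """Least-squares slope of ln(EEPF) vs energy; for a Maxwellian EEPF
    g_p(e) = C * exp(-e / Te) the slope is -1/Te, so Te = -1/slope (eV)."""
    xs = energies_ev
    ys = [math.log(g) for g in eepf]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return -1.0 / slope

# Synthetic Maxwellian EEPF at Te = 4 eV (arbitrary normalization):
te_true = 4.0
energies = [0.5 * i for i in range(1, 41)]          # 0.5 .. 20 eV
eepf = [math.exp(-e / te_true) for e in energies]
te_fit = effective_te_from_eepf(energies, eepf)     # ~4.0 eV
```

    A double-Maxwellian plasma, as reported near the axis, would instead show two distinct log-linear segments in the EEPF, one slope per electron population.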

  15. Open Source Service Agent (OSSA) in the intelligence community's Open Source Architecture

    NASA Technical Reports Server (NTRS)

    Fiene, Bruce F.

    1994-01-01

    The Community Open Source Program Office (COSPO) has developed an architecture for the intelligence community's new Open Source Information System (OSIS). The architecture is a multi-phased program featuring connectivity, interoperability, and functionality. OSIS is based on a distributed architecture concept. The system is designed to function as a virtual entity. OSIS will be a restricted (non-public), user-configured network employing Internet communications. Privacy and authentication will be provided through firewall protection. Connection to OSIS can be made through any server on the Internet or through dial-up modems, provided the appropriate firewall authentication system is installed on the client.

  16. Electron Energy Distribution function in a weakly magnetized expanding helicon plasma discharge

    NASA Astrophysics Data System (ADS)

    Sirse, Nishant; Harvey, Cleo; Gaman, Cezar; Ellingboe, Bert

    2016-09-01

    Helicon wave heating is well known to produce high-density plasma sources for applications in plasma thrusters, plasma processing and more. Our previous study (B. Ellingboe et al., APS Gaseous Electronics Conference 2015, abstract #KW2.005) showed observation of a helicon wave in a weakly magnetized inductively coupled plasma source excited by an m = 0 antenna at 13.56 MHz. In this paper, we investigate the electron energy distribution function (EEDF) in the same setup using an RF-compensated Langmuir probe. The ac signal superimposition technique (second-harmonic technique) is used to determine the EEDF. The EEDF is measured for 5-100 mTorr gas pressure, 100 W - 1.5 kW rf power, and at different locations in the source chamber, boundary and diffusion chamber. This paper will discuss the change in the shape of the EEDF across various heating mode transitions.

  17. Description of the SSF PMAD DC testbed control system data acquisition function

    NASA Technical Reports Server (NTRS)

    Baez, Anastacio N.; Mackin, Michael; Wright, Theodore

    1992-01-01

    The NASA LeRC in Cleveland, Ohio has completed the development and integration of a Power Management and Distribution (PMAD) DC Testbed. This testbed is a reduced scale representation of the end to end, sources to loads, Space Station Freedom Electrical Power System (SSF EPS). This unique facility is being used to demonstrate DC power generation and distribution, power management and control, and system operation techniques considered to be prime candidates for the Space Station Freedom. A key capability of the testbed is its ability to be configured to address system level issues in support of critical SSF program design milestones. Electrical power system control and operation issues like source control, source regulation, system fault protection, end-to-end system stability, health monitoring, resource allocation, and resource management are being evaluated in the testbed. The SSF EPS control functional allocation between on-board computers and ground based systems is evolving. Initially, ground based systems will perform the bulk of power system control and operation. The EPS control system is required to continuously monitor and determine the current state of the power system. The DC Testbed Control System consists of standard controllers arranged in a hierarchical and distributed architecture. These controllers provide all the monitoring and control functions for the DC Testbed Electrical Power System. Higher level controllers include the Power Management Controller, Load Management Controller, Operator Interface System, and a network of computer systems that perform some of the SSF Ground based Control Center Operation. The lower level controllers include Main Bus Switch Controllers and Photovoltaic Controllers. Power system status information is periodically provided to the higher level controllers to perform system control and operation. The data acquisition function of the control system is distributed among the various levels of the hierarchy. 
Data requirements are dictated by the control system algorithms being implemented at each level. A functional description of the various levels of the testbed control system architecture, the data acquisition function, and the status of its implementation is presented.

  18. A General Formulation of the Source Confusion Statistics and Application to Infrared Galaxy Surveys

    NASA Astrophysics Data System (ADS)

    Takeuchi, Tsutomu T.; Ishii, Takako T.

    2004-03-01

    Source confusion has been a long-standing problem in astronomical history. In previous formulations of the confusion problem, sources are assumed to be distributed homogeneously on the sky. This fundamental assumption is, however, not realistic in many applications. In this work, by making use of point field theory, we derive general analytic formulae for confusion problems with arbitrary distribution and correlation functions. As a typical example, we apply these new formulae to the source confusion of infrared galaxies. We first calculate the confusion statistics for power-law galaxy number counts as a test case. When the slope of the differential number counts, γ, is steep, the confusion limits become much brighter and the probability distribution function (PDF) of the fluctuation field is strongly distorted. We then estimate the PDF and confusion limits based on a realistic number count model for infrared galaxies. The gradual flattening of the slope of the source counts makes the clustering effect rather mild. Clustering effects result in an increase of the limiting flux density of ~10%. In this case, the peak probability of the PDF decreases by up to ~15% and its tail becomes heavier. Although the effects are relatively small, they will be strong enough to affect the estimation of galaxy evolution from number counts or fluctuation statistics. We also comment on future submillimeter observations.
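
    For the power-law test case the classical "q-sigma" confusion limit can be written in closed form, which makes it easy to verify that steeper differential counts give brighter confusion limits. A sketch under the standard homogeneous (unclustered) assumptions that the paper generalizes, with illustrative normalization, beam solid angle and q; this is the textbook estimator, not the paper's point-field formulae:

```python
def confusion_limit(k, gamma, omega_e, q=5.0):
    """Classical q-sigma confusion limit for power-law differential counts
    dN/dS = k * S**-gamma (1 < gamma < 3) and effective beam solid angle
    omega_e: solve S_c = q * sigma(S_c), where
    sigma^2(S_c) = omega_e * k * S_c**(3 - gamma) / (3 - gamma)."""
    if not 1.0 < gamma < 3.0:
        raise ValueError("closed form valid for 1 < gamma < 3")
    return (q * q * omega_e * k / (3.0 - gamma)) ** (1.0 / (gamma - 1.0))

def confusion_limit_iterative(k, gamma, omega_e, q=5.0, s0=1.0, n_iter=200):
    """Same limit by fixed-point iteration, as a cross-check."""
    s = s0
    for _ in range(n_iter):
        sigma = (omega_e * k * s ** (3.0 - gamma) / (3.0 - gamma)) ** 0.5
        s = q * sigma
    return s

s_closed = confusion_limit(k=1.0, gamma=2.0, omega_e=1e-4)
s_iter = confusion_limit_iterative(k=1.0, gamma=2.0, omega_e=1e-4)
```

    With these illustrative numbers a steeper slope (gamma = 2.5) indeed yields a brighter limit than gamma = 2.0, in line with the abstract's statement.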

  19. A decentralized mechanism for improving the functional robustness of distribution networks.

    PubMed

    Shi, Benyun; Liu, Jiming

    2012-10-01

    Most real-world distribution systems can be modeled as distribution networks, where a commodity can flow from source nodes to sink nodes through junction nodes. One of the fundamental characteristics of distribution networks is the functional robustness, which reflects the ability of maintaining its function in the face of internal or external disruptions. In view of the fact that most distribution networks do not have any centralized control mechanisms, we consider the problem of how to improve the functional robustness in a decentralized way. To achieve this goal, we study two important problems: 1) how to formally measure the functional robustness, and 2) how to improve the functional robustness of a network based on the local interaction of its nodes. First, we derive a utility function in terms of network entropy to characterize the functional robustness of a distribution network. Second, we propose a decentralized network pricing mechanism, where each node need only communicate with its distribution neighbors by sending a "price" signal to its upstream neighbors and receiving "price" signals from its downstream neighbors. By doing so, each node can determine its outflows by maximizing its own payoff function. Our mathematical analysis shows that the decentralized pricing mechanism can produce results equivalent to those of an ideal centralized maximization with complete information. Finally, to demonstrate the properties of our mechanism, we carry out a case study on the U.S. natural gas distribution network. The results validate the convergence and effectiveness of our mechanism when comparing it with an existing algorithm.
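
    The paper's entropy-based utility function is not spelled out in the abstract; as a purely illustrative sketch (an assumed form, not the authors' exact definition), one can score a distribution network by the Shannon entropy of each node's normalized outflow split, so that evenly spread flows, which leave more alternatives when an edge fails, score higher than flows funneled through a single edge:

```python
import math

def flow_entropy(outflows):
    """Shannon entropy (nats) of one node's outflow distribution.
    outflows: positive flow on each outgoing edge."""
    total = sum(outflows)
    probs = [f / total for f in outflows if f > 0]
    return -sum(p * math.log(p) for p in probs)

def network_flow_entropy(flows_by_node):
    """Sum of per-node outflow entropies -- a simple entropy-style
    robustness score (hypothetical, for illustration only)."""
    return sum(flow_entropy(f) for f in flows_by_node.values())

# Two toy networks: a source "s" and a junction "j" splitting flow.
balanced = {"s": [5.0, 5.0], "j": [2.5, 2.5, 2.5, 2.5]}
funneled = {"s": [9.0, 1.0], "j": [7.0, 1.0, 1.0, 1.0]}
# The balanced network has strictly higher entropy.
```

    In the paper's decentralized mechanism each node would adjust its own outflows from local "price" signals rather than from any such global score.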

  20. Green's function of radial inhomogeneous spheres excited by internal sources.

    PubMed

    Zouros, Grigorios P; Kokkorakis, Gerassimos C

    2011-01-01

    Green's function in the interior of penetrable bodies with inhomogeneous compressibility by sources placed inside them is evaluated through a Schwinger-Lippmann volume integral equation. In the case of a radial inhomogeneous sphere, the radial part of the unknown Green's function can be expanded in a double Dini's series, which allows analytical evaluation of the involved cumbersome integrals. The simple case treated here can be extended to more difficult situations involving inhomogeneous density as well as to the corresponding electromagnetic or elastic problem. Finally, numerical results are given for various inhomogeneous compressibility distributions.

  1. Intensity distribution of the x ray source for the AXAF VETA-I mirror test

    NASA Technical Reports Server (NTRS)

    Zhao, Ping; Kellogg, Edwin M.; Schwartz, Daniel A.; Shao, Yibo; Fulton, M. Ann

    1992-01-01

    The X-ray generator for the AXAF VETA-I mirror test is an electron impact X-ray source with various anode materials. The source sizes of the different anodes and their intensity distributions were measured with a pinhole camera before the VETA-I test. The pinhole camera consists of a 30 micrometer diameter pinhole for imaging the source and a microchannel plate imaging detector with 25 micrometer FWHM spatial resolution for detecting and recording the image. The camera has a magnification factor of 8.79, which enables measurement of the detailed spatial structure of the source. The spot size, the intensity distribution, and the flux level of each source were measured under different operating parameters. During the VETA-I test, microscope pictures were taken of each used anode immediately after it was brought out of the source chamber. The source sizes and the intensity distribution structures are clearly shown in the pictures, and they agree with the results of the pinhole camera measurements. This paper presents the results of the above measurements. The results show that under operating conditions characteristic of the VETA-I test, all the source sizes have a FWHM of less than 0.45 mm. For a source of this size at 528 meters away, the angular size seen by VETA is less than 0.17 arcsec, which is small compared to the on-ground VETA angular resolution (0.5 arcsec required, 0.22 arcsec measured). Even so, the results show that the intensity distributions of the sources have complicated structures. These results were crucial for the VETA data analysis and for obtaining the on-ground and predicted in-orbit VETA Point Response Function.
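
    The quoted angular size follows from the small-angle relation θ = d/L; a quick check with the stated 0.45 mm FWHM and 528 m source distance reproduces the ~0.17 arcsec figure to rounding:

```python
import math

ARCSEC_PER_RAD = 3600.0 * 180.0 / math.pi  # ~206264.8

def angular_size_arcsec(size_m, distance_m):
    """Small-angle apparent size of a source seen from distance_m away."""
    return size_m / distance_m * ARCSEC_PER_RAD

# 0.45 mm source FWHM at the 528 m VETA-I source distance:
theta = angular_size_arcsec(0.45e-3, 528.0)  # ~0.176 arcsec
```

    This is comfortably below the measured 0.22 arcsec on-ground resolution, consistent with the paper's conclusion that the source size did not limit the test.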

  2. Seismic interferometry by crosscorrelation and by multidimensional deconvolution: a systematic comparison

    NASA Astrophysics Data System (ADS)

    Wapenaar, Kees; van der Neut, Joost; Ruigrok, Elmer; Draganov, Deyan; Hunziker, Jürg; Slob, Evert; Thorbecke, Jan; Snieder, Roel

    2011-06-01

    Seismic interferometry, also known as Green's function retrieval by crosscorrelation, has a wide range of applications, ranging from surface-wave tomography using ambient noise, to creating virtual sources for improved reflection seismology. Despite its successful applications, the crosscorrelation approach also has its limitations. The main underlying assumptions are that the medium is lossless and that the wavefield is equipartitioned. These assumptions are in practice often violated: the medium of interest is often illuminated from one side only, the sources may be irregularly distributed, and losses may be significant. These limitations may partly be overcome by reformulating seismic interferometry as a multidimensional deconvolution (MDD) process. We present a systematic analysis of seismic interferometry by crosscorrelation and by MDD. We show that for the non-ideal situations mentioned above, the correlation function is proportional to a Green's function with a blurred source. The source blurring is quantified by a so-called interferometric point-spread function which, like the correlation function, can be derived from the observed data (i.e. without the need to know the sources and the medium). The source of the Green's function obtained by the correlation method can be deblurred by deconvolving the correlation function for the point-spread function. This is the essence of seismic interferometry by MDD. We illustrate the crosscorrelation and MDD methods for controlled-source and passive-data applications with numerical examples and discuss the advantages and limitations of both methods.

  3. Separation of the low-frequency atmospheric variability into non-Gaussian multidimensional sources by Independent Subspace Analysis

    NASA Astrophysics Data System (ADS)

    Pires, Carlos; Ribeiro, Andreia

    2016-04-01

    An efficient nonlinear method of statistical source separation of space-distributed non-Gaussian data is proposed. The method relies on the so-called Independent Subspace Analysis (ISA) and is tested on a long time series of the stream-function field of an atmospheric quasi-geostrophic 3-level model (QG3) simulating the winter monthly variability of the Northern Hemisphere. ISA generalizes Independent Component Analysis (ICA) by looking for multidimensional, minimally dependent, uncorrelated and non-Gaussian statistical sources among the rotated projections or subspaces of the multivariate probability distribution of the leading principal components of the working field, whereas ICA is restricted to scalar sources. The technique builds upon projection pursuit, which looks for data projections of enhanced interest. To accomplish the decomposition, we maximize measures of the sources' non-Gaussianity through contrast functions given by squares of nonlinear, cross-cumulant-based correlations involving the variables spanning the sources; sources are therefore sought that match certain nonlinear data structures. The maximized contrast function is built in such a way that it minimizes the mean square of the residuals of certain nonlinear regressions. The resulting residuals, after spherization, provide a new set of nonlinear variable changes that are at once uncorrelated, quasi-independent and quasi-Gaussian, an advantage with respect to the independent components (scalar sources) obtained by ICA, where the non-Gaussianity is concentrated in the non-Gaussian scalar sources. The new scalar sources obtained by this process encompass the attractor's curvature, thus providing improved nonlinear model indices of the low-frequency atmospheric variability, which is useful since large circulation indices are nonlinearly correlated.
The tested non-Gaussian sources (dyads and triads, of two and three dimensions respectively) lead to a dense data concentration along certain curves or surfaces, near which the cluster centroids of the joint probability density function tend to be located. This favors a better splitting of the QG3 model's weather regimes: the positive and negative phases of the Arctic Oscillation and the positive and negative phases of the North Atlantic Oscillation. The leading non-Gaussian dyad of the model is associated with a positive correlation between: 1) the squared anomaly of the extratropical jet stream and 2) the meridional meandering of the jet stream. Triadic sources coming from maximized third-order cross-cumulants between pairwise-uncorrelated components reveal situations of triadic wave resonance and nonlinear triadic teleconnections, only possible thanks to joint non-Gaussianity. Such triadic synergies are accounted for by an information-theoretic measure: the interaction information. The dominant triad of the model occurs between anomalies of: 1) the North Pole pressure, 2) the jet-stream intensity at the eastern North American boundary and 3) the jet-stream intensity at the eastern Asian boundary. Publication supported by project FCT UID/GEO/50019/2013 - Instituto Dom Luiz.

  4. Theoretical and experimental studies of a planar inductive coupled rf plasma source as the driver in simulator facility (ISTAPHM) of interactions of waves with the edge plasma on tokamaks

    NASA Astrophysics Data System (ADS)

    Ghanei, V.; Nasrabadi, M. N.; Chin, O.-H.; Jayapalan, K. K.

    2017-11-01

    This research aims to design and build a planar inductively coupled RF plasma source, the driver of the simulator project (ISTAPHM) for the interactions between an ICRF antenna and plasma on a tokamak, using the AMPICP model. For this purpose, a theoretical derivation of the distribution of the RF magnetic field in the plasma-filled reactor chamber is presented. An experimental investigation of the field distributions is described, and Langmuir probe measurements are analyzed numerically. A comparison of theory and experiment provides an evaluation of plasma parameters in the planar ICP reactor. The objective of this study is to characterize the plasma produced by the source alone. We present the results of the first analysis of the plasma characteristics (plasma density, electron temperature, electron-ion collision frequency, particle fluxes and their velocities, stochastic frequency, skin depth and electron energy distribution functions) as functions of the operating parameters (injected power, neutral pressure and magnetic field), as measured with fixed and movable Langmuir probes. The plasma is currently produced only by the planar ICP. The goal of these experiments is to show that the plasma produced by the external source can serve as a plasma representative of the tokamak edge.

  5. Real-time realizations of the Bayesian Infrasonic Source Localization Method

    NASA Astrophysics Data System (ADS)

    Pinsky, V.; Arrowsmith, S.; Hofstetter, A.; Nippress, A.

    2015-12-01

    The Bayesian Infrasonic Source Localization method (BISL), introduced by Modrak et al. (2010) and upgraded by Marcillo et al. (2014), is intended for accurate estimation of atmospheric event origins at local, regional and global scales by seismic and infrasonic networks and arrays. BISL is based on probabilistic models of the source-to-station infrasonic signal propagation time, picking time and azimuth estimate, merged with prior knowledge about the celerity distribution. It requires, at each hypothetical source location, integration of the product of the corresponding source-station likelihood functions multiplied by a prior probability density function of celerity over the multivariate parameter space. The present BISL realization is a generally time-consuming procedure based on numerical integration. The computational scheme proposed here simplifies the target function so that the integrals are taken exactly and are represented via standard functions. This makes the procedure much faster and realizable in real time without practical loss of accuracy. The procedure, implemented as Python/Fortran code, demonstrates high performance on a set of model and real data.
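
    The closed-form simplification itself is not given in the abstract; for orientation, the brute-force procedure it accelerates looks like the following grid search (a hedged toy sketch: Gaussian pick errors, a single fixed celerity instead of a celerity prior, origin time profiled out, and all station/source numbers hypothetical):

```python
import math

def locate(stations, t_obs, celerity, sigma, grid):
    """Grid-search maximum-likelihood source location.  For each candidate
    (x, y) the unknown origin time is profiled out by centring the travel-
    time residuals; Gaussian pick-error model with std `sigma` (s)."""
    best, best_ll = None, -math.inf
    for (x, y) in grid:
        t_pred = [math.hypot(x - sx, y - sy) / celerity for sx, sy in stations]
        resid = [to - tp for to, tp in zip(t_obs, t_pred)]
        mean = sum(resid) / len(resid)           # implicit origin time
        ll = -sum((r - mean) ** 2 for r in resid) / (2.0 * sigma ** 2)
        if ll > best_ll:
            best, best_ll = (x, y), ll
    return best

stations = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (100.0, 100.0)]  # km
true_src, celerity = (40.0, 70.0), 0.30                              # km, km/s
t_obs = [math.hypot(true_src[0] - sx, true_src[1] - sy) / celerity
         for sx, sy in stations]                                     # noise-free
grid = [(x, y) for x in range(0, 101, 2) for y in range(0, 101, 2)]
est = locate(stations, t_obs, celerity, sigma=1.0, grid=grid)
```

    The cost is one likelihood evaluation per grid cell; replacing the numerical integration inside each evaluation with standard functions, as the abstract describes, is what makes the real-time realization feasible.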

  6. Transport and solubility of Hetero-disperse dry deposition particulate matter subject to urban source area rainfall-runoff processes

    NASA Astrophysics Data System (ADS)

    Ying, G.; Sansalone, J.

    2010-03-01

    With respect to hydrologic processes, the impervious pavement interface significantly alters relationships between rainfall and runoff. Commensurate with this alteration of hydrologic processes, the pavement also facilitates transport and solubility of dry-deposition particulate matter (PM) in runoff. This study examines dry depositional flux rates, granulometric modification by runoff transport, and the generation of total dissolved solids (TDS), alkalinity and conductivity in source area runoff resulting from PM solubility. PM was collected from a paved source area transportation corridor (I-10) in Baton Rouge, Louisiana encompassing 17 dry deposition and 8 runoff events. The mass-based granulometric particle size distribution (PSD) is measured and modeled with a cumulative gamma function, while PM surface area distributions across the PSD follow a log-normal distribution. Dry deposition flux rates are modeled as separate first-order exponential functions of previous dry hours (PDH) for PM and for the suspended, settleable and sediment fractions. When translocated from dry deposition into runoff, PSDs are modified, with the d50m decreasing from 331 to 14 μm after transport and 60 min of settling. Solubility experiments as a function of pH, contact time and particle size using source area rainfall generate constitutive models to reproduce pH, alkalinity and TDS for historical events. Equilibrium pH, alkalinity and TDS are strongly influenced by particle size and contact time. The constitutive leaching models are combined with measured PSDs from a series of rainfall-runoff events to demonstrate that the model results replicate alkalinity and TDS in runoff from the subject watershed. Results illustrate the granulometry of dry deposition PM, the modification of PSDs along the drainage pathway, and the role of PM solubility in the generation of TDS, alkalinity and conductivity in urban source area rainfall-runoff.
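
    A cumulative gamma PSD model means the mass fraction finer than diameter d is the gamma CDF F(d; k, θ). The fitted shape and scale are not given in the abstract, so the values below are hypothetical; the CDF itself is evaluated with the standard series expansion of the regularized lower incomplete gamma function:

```python
import math

def gamma_cdf(x, shape, scale=1.0):
    """CDF of the gamma distribution, P(shape, x/scale), via the standard
    series for the regularized lower incomplete gamma function."""
    if x <= 0:
        return 0.0
    t = x / scale
    term = 1.0 / shape
    total = term
    n = 0
    while True:
        n += 1
        term *= t / (shape + n)
        total += term
        if term < total * 1e-15 or n > 10_000:
            break
    log_p = shape * math.log(t) - t - math.lgamma(shape) + math.log(total)
    return math.exp(log_p)

# Hypothetical cumulative-gamma PSD (shape/scale NOT the paper's fit):
def psd_mass_fraction(d_um, shape=0.8, scale=200.0):
    """Mass fraction of PM finer than diameter d_um (micrometers)."""
    return gamma_cdf(d_um, shape, scale)
```

    A sanity check is the shape = 1 special case, where the gamma CDF reduces to 1 - exp(-x); the fitted d50m values quoted in the abstract (331 and 14 μm) would correspond to the 0.5 crossing of curves like this one.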

  7. Electrophysiological Source Imaging: A Noninvasive Window to Brain Dynamics.

    PubMed

    He, Bin; Sohrabpour, Abbas; Brown, Emery; Liu, Zhongming

    2018-06-04

    Brain activity and connectivity are distributed in the three-dimensional space and evolve in time. It is important to image brain dynamics with high spatial and temporal resolution. Electroencephalography (EEG) and magnetoencephalography (MEG) are noninvasive measurements associated with complex neural activations and interactions that encode brain functions. Electrophysiological source imaging estimates the underlying brain electrical sources from EEG and MEG measurements. It offers increasingly improved spatial resolution and intrinsically high temporal resolution for imaging large-scale brain activity and connectivity on a wide range of timescales. Integration of electrophysiological source imaging and functional magnetic resonance imaging could further enhance spatiotemporal resolution and specificity to an extent that is not attainable with either technique alone. We review methodological developments in electrophysiological source imaging over the past three decades and envision its future advancement into a powerful functional neuroimaging technology for basic and clinical neuroscience applications.

  8. The Chandra Source Catalog: X-ray Aperture Photometry

    NASA Astrophysics Data System (ADS)

    Kashyap, Vinay; Primini, F. A.; Glotfelty, K. J.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Davis, J. E.; Doe, S. M.; Evans, I. N.; Evans, J. D.; Fabbiano, G.; Galle, E.; Gibbs, D. G.; Grier, J. D.; Hain, R.; Hall, D. M.; Harbo, P. N.; He, X.; Houck, J. C.; Karovska, M.; Lauer, J.; McCollough, M. L.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Nowak, M. A.; Plummer, D. A.; Refsdal, B. L.; Rots, A. H.; Siemiginowska, A. L.; Sundheim, B. A.; Tibbetts, M. S.; Van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-01-01

    The Chandra Source Catalog represents a reanalysis of the entire ACIS and HRC imaging observations over the 9-year Chandra mission. Source detection is carried out on a uniform basis, using the CIAO tool wavdetect, and source fluxes are estimated post-facto using a Bayesian method that accounts for background, spatial resolution effects, and contamination from nearby sources. We use gamma-function prior distributions, which could be either non-informative, or in case there exist previous observations of the same source, strongly informative. The resulting posterior probability density functions allow us to report the flux and a robust credible range on it. We also determine limiting sensitivities at arbitrary locations in the field using the same formulation. This work was supported by CXC NASA contracts NAS8-39073 (VK) and NAS8-03060 (CSC).
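
    The gamma prior is conjugate to the Poisson likelihood, which is what makes such post-facto flux estimation cheap: with N observed counts in an effective exposure E, a Gamma(α, β) prior on the source intensity updates in closed form to Gamma(α + N, β + E). A minimal sketch of the core conjugacy only (the actual pipeline additionally models background, spatial resolution effects and contamination from nearby sources; the numbers below are hypothetical, not CSC defaults):

```python
def gamma_poisson_posterior(alpha, beta, counts, exposure):
    """Conjugate update: Poisson counts with mean s * exposure and a
    Gamma(alpha, beta) prior on the intensity s (shape/rate convention)
    give a Gamma(alpha + counts, beta + exposure) posterior.  Returns
    posterior shape, rate, mean and standard deviation of s."""
    a, b = alpha + counts, beta + exposure
    mean = a / b
    std = mean / a ** 0.5          # sqrt(a) / b
    return a, b, mean, std

# Weakly informative prior, 12 counts in 3 units of exposure:
a, b, mean, std = gamma_poisson_posterior(alpha=0.5, beta=0.0,
                                          counts=12, exposure=3.0)
```

    A strongly informative prior from a previous observation of the same source, as the abstract mentions, is obtained simply by feeding that observation's posterior shape and rate back in as (alpha, beta).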

  9. Lensing corrections to features in the angular two-point correlation function and power spectrum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LoVerde, Marilena; Department of Physics, Columbia University, New York, New York 10027; Hui, Lam

    2008-01-15

    It is well known that magnification bias, the modulation of galaxy or quasar source counts by gravitational lensing, can change the observed angular correlation function. We investigate magnification-induced changes to the shape of the observed correlation function w(θ) and the angular power spectrum C_ℓ, paying special attention to the matter-radiation equality peak and the baryon wiggles. Lensing effectively mixes the correlation function of the source galaxies with the matter correlation at the lower redshifts of the lenses, distorting the observed correlation function. We quantify how the lensing corrections depend on the width of the selection function, the galaxy bias b, and the number count slope s. The lensing correction increases with redshift, and larger corrections are present for sources with steep number count slopes and/or broad redshift distributions. The most drastic changes to C_ℓ occur for measurements at high redshifts (z ≳ 1.5) and low multipole moment (ℓ ≲ 100). For the source distributions we consider, magnification bias can shift the location of the matter-radiation equality scale by 1%-6% at z ~ 1.5, and by z ~ 3.5 the shift can be as large as 30%. The baryon bump in θ²w(θ) is shifted by ≲ 1% and the width is typically increased by ~10%. Shifts of ≳ 0.5% and broadening ≳ 20% occur only for very broad selection functions and/or galaxies with (5s-2)/b ≳ 2. However, near the baryon bump the magnification correction is not constant but is a gently varying function which depends on the source population. Depending on how the w(θ) data are fitted, this correction may need to be accounted for when using the baryon acoustic scale for precision cosmology.

  10. The effect of the charge exchange source on the velocity and 'temperature' distributions and their anisotropies in the earth's exosphere

    NASA Technical Reports Server (NTRS)

    Hodges, R. R., Jr.; Rohrbaugh, R. P.; Tinsley, B. A.

    1981-01-01

    The velocity distribution of atomic hydrogen in the earth's exosphere is calculated as a function of altitude and direction, taking into account both the classic exobase source and the higher-altitude plasmaspheric charge exchange source. Calculations are performed with a Monte Carlo technique in which random ballistic trajectories of individual atoms are traced through a three-dimensional grid of audit zones, at which relative concentrations and momentum or energy fluxes are obtained. In the case of the classical exobase source alone, the slope of the velocity distribution is constant only for the upward radial velocity component and increases dramatically with altitude for the incoming radial and transverse velocity components, resulting in a temperature decrease. The charge exchange source, which produces the satellite hydrogen component and the hot ballistic and escape components of the exosphere, is found to enhance the wings of the velocity distributions; however, this effect is not sufficient to overcome the temperature decreases at altitudes above one earth radius. The resulting global model of the hydrogen exosphere may be used as a realistic basis for radiative transfer calculations.

  11. Functional Brain Networks Are Dominated by Stable Group and Individual Factors, Not Cognitive or Daily Variation.

    PubMed

    Gratton, Caterina; Laumann, Timothy O; Nielsen, Ashley N; Greene, Deanna J; Gordon, Evan M; Gilmore, Adrian W; Nelson, Steven M; Coalson, Rebecca S; Snyder, Abraham Z; Schlaggar, Bradley L; Dosenbach, Nico U F; Petersen, Steven E

    2018-04-18

    The organization of human brain networks can be measured by capturing correlated brain activity with fMRI. There is considerable interest in understanding how brain networks vary across individuals or neuropsychiatric populations or are altered during the performance of specific behaviors. However, the plausibility and validity of such measurements is dependent on the extent to which functional networks are stable over time or are state dependent. We analyzed data from nine high-quality, highly sampled individuals to parse the magnitude and anatomical distribution of network variability across subjects, sessions, and tasks. Critically, we find that functional networks are dominated by common organizational principles and stable individual features, with substantially more modest contributions from task-state and day-to-day variability. Sources of variation were differentially distributed across the brain and differentially linked to intrinsic and task-evoked sources. We conclude that functional networks are suited to measuring stable individual characteristics, suggesting utility in personalized medicine.

  12. Deriving the Contribution of Blazars to the Fermi-LAT Extragalactic γ-ray Background at E > 10 GeV with Efficiency Corrections and Photon Statistics

    NASA Astrophysics Data System (ADS)

    Di Mauro, M.; Manconi, S.; Zechlin, H.-S.; Ajello, M.; Charles, E.; Donato, F.

    2018-04-01

The Fermi Large Area Telescope (LAT) Collaboration has recently released the Third Catalog of Hard Fermi-LAT Sources (3FHL), which contains 1556 sources detected above 10 GeV with seven years of Pass 8 data. Building upon the 3FHL results, we investigate the flux distribution of sources at high Galactic latitudes (|b| > 20°), which are mostly blazars. We use two complementary techniques: (1) a source-detection efficiency correction method and (2) an analysis of pixel photon count statistics with the one-point probability distribution function (1pPDF). With the first method, using realistic Monte Carlo simulations of the γ-ray sky, we calculate the efficiency of the LAT to detect point sources. This enables us to find the intrinsic source-count distribution at photon fluxes down to 7.5 × 10⁻¹² ph cm⁻² s⁻¹. With this method, we detect a flux break at (3.5 ± 0.4) × 10⁻¹¹ ph cm⁻² s⁻¹ with a significance of at least 5.4σ. The power-law indices of the source-count distribution above and below the break are 2.09 ± 0.04 and 1.07 ± 0.27, respectively. This result is confirmed with the 1pPDF method, which has a sensitivity reach of ∼10⁻¹¹ ph cm⁻² s⁻¹. Integrating the derived source-count distribution above the sensitivity of our analysis, we find that (42 ± 8)% of the extragalactic γ-ray background originates from blazars.
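The broken power-law fit quoted in this record lends itself to a quick numerical illustration. The sketch below is schematic only: the break flux and indices are taken from the abstract, but the normalization `A` and the bright-end cut are arbitrary placeholders, not the paper's values.

```python
import numpy as np

# Broken power-law differential source counts dN/dS with the indices and
# break flux quoted in the record; the normalization A is arbitrary here.
S_b = 3.5e-11            # break flux, ph cm^-2 s^-1
g1, g2 = 2.09, 1.07      # power-law indices above / below the break
A = 1.0                  # arbitrary normalization (illustration only)

def dN_dS(S):
    S = np.asarray(S, dtype=float)
    return np.where(S >= S_b,
                    A * (S / S_b) ** (-g1),
                    A * (S / S_b) ** (-g2))

# Flux contributed by sources between the analysis sensitivity and an
# assumed bright-end cut: integral of S * dN/dS (trapezoid on a log grid).
S = np.logspace(np.log10(7.5e-12), np.log10(1.0e-8), 2000)
y = S * dN_dS(S)
total_flux = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(S))
```

Because the faint-end index is below 2, the integral converges at the low-flux end, which is what allows a finite blazar contribution to be quoted.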

  13. Gaussian Mixture Models of Between-Source Variation for Likelihood Ratio Computation from Multivariate Data

    PubMed Central

    Franco-Pedroso, Javier; Ramos, Daniel; Gonzalez-Rodriguez, Joaquin

    2016-01-01

In forensic science, trace evidence found at a crime scene and on a suspect has to be evaluated from the measurements performed on it, usually in the form of multivariate data (for example, several chemical compounds or physical characteristics). In order to assess the strength of that evidence, the likelihood ratio framework is being increasingly adopted. Several methods have been derived to obtain likelihood ratios directly from univariate or multivariate data by modelling both the variation appearing between observations (or features) coming from the same source (within-source variation) and that appearing between observations coming from different sources (between-source variation). In the widely used multivariate kernel likelihood ratio, the within-source distribution is assumed to be normally distributed and constant among different sources, and the between-source variation is modelled through a kernel density function (KDF). In order to better fit the observed distribution of the between-source variation, this paper presents a different approach in which a Gaussian mixture model (GMM) is used instead of a KDF. As will be shown, this approach provides better-calibrated likelihood ratios as measured by the log-likelihood-ratio cost (Cllr) in experiments performed on freely available forensic datasets involving different types of trace evidence: inks, glass fragments and car paints. PMID:26901680
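The GMM-versus-KDF idea can be sketched in one dimension. The following is a minimal, hypothetical illustration (not the authors' code): the between-source distribution is a hand-specified two-component Gaussian mixture, the within-source distribution a narrow normal, and a simplified score-based likelihood ratio compares a recovered trace against reference material. All numbers are invented.

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def gmm_pdf(x, weights, means, sigmas):
    # Between-source density modelled as a mixture of Gaussians (the GMM idea).
    return sum(w * normal_pdf(x, m, s) for w, m, s in zip(weights, means, sigmas))

def likelihood_ratio(y, x_ref, sigma_within, gmm):
    # Simplified score-based LR: the numerator assumes y and the reference
    # share a source (narrow within-source normal); the denominator evaluates
    # y under the between-source GMM.
    return normal_pdf(y, x_ref, sigma_within) / gmm_pdf(y, *gmm)

# Toy between-source model: two populations of sources.
gmm = ([0.6, 0.4], [0.0, 5.0], [1.5, 1.0])
lr_same = likelihood_ratio(0.1, 0.0, 0.2, gmm)   # trace close to reference
lr_diff = likelihood_ratio(5.0, 0.0, 0.2, gmm)   # trace far from reference
```

A trace near the reference yields LR > 1 (supports the same-source hypothesis); a distant trace yields LR < 1. The paper's multivariate version integrates over the source mean rather than conditioning on it, which this sketch deliberately omits.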

  14. Discontinuous model with semi analytical sheath interface for radio frequency plasma

    NASA Astrophysics Data System (ADS)

    Miyashita, Masaru

    2016-09-01

Sumitomo Heavy Industries, Ltd. provides many products utilizing plasma. In this study, we focus on a radio frequency (RF) plasma source driven by an interior antenna. This plasma source is expected to provide high density and low metal contamination. However, sputtering of the antenna cover by high-energy ions accelerated through the sheath voltage has remained problematic. We have developed a new model that can calculate the sheath voltage waveform in the RF plasma source within a realistic calculation time. The model is discontinuous: the electron fluid equations in the plasma are connected to the usual Poisson equation in the antenna cover and chamber through a semi-analytical sheath interface. We estimate the sputtering distribution from the sheath voltage waveform calculated by this model, a sputtering yield model, and an ion energy distribution function (IEDF) model. The estimated sputtering distribution reproduces the trend of the experimental results.

  15. Method and system using power modulation for maskless vapor deposition of spatially graded thin film and multilayer coatings with atomic-level precision and accuracy

    DOEpatents

    Montcalm, Claude [Livermore, CA; Folta, James Allen [Livermore, CA; Tan, Swie-In [San Jose, CA; Reiss, Ira [New City, NY

    2002-07-30

A method and system for producing a film (preferably a thin film with highly uniform or highly accurate custom graded thickness) on a flat or graded substrate (such as concave or convex optics), by sweeping the substrate across a vapor deposition source operated with a time-varying flux distribution. In preferred embodiments, time-varying power is applied to the source during each sweep of the substrate to achieve the time-varying flux distribution. A user selects a source flux modulation recipe for achieving a predetermined desired thickness profile of the deposited film. The method relies on precise modulation of the deposition flux to which a substrate is exposed to provide a desired coating thickness distribution.

  16. [Applications of GIS in biomass energy source research].

    PubMed

    Su, Xian-Ming; Wang, Wu-Kui; Li, Yi-Wei; Sun, Wen-Xiang; Shi, Hai; Zhang, Da-Hong

    2010-03-01

Biomass resources are characterized by widespread but dispersed distribution, closely related to environment, climate, soil, and land use. Geographic information systems (GIS) provide spatial analysis functions and the flexibility to integrate with other application models and algorithms, making them well suited to biomass energy research. This paper summarized the research on GIS applications in biomass energy, with a focus on feasibility studies of bioenergy development, assessment of the amount and distribution of biomass resources, layout of biomass exploitation and utilization, evaluation of gaseous emissions from biomass burning, and biomass energy information systems. Three perspectives for GIS applications in biomass energy research were proposed: to enrich the data sources, to improve data processing and decision-support capacity, and to generate online proposals.

  17. Observations of a free-energy source for intense electrostatic waves. [in upper atmosphere near upper hybrid resonance frequency

    NASA Technical Reports Server (NTRS)

    Kurth, W. S.; Frank, L. A.; Gurnett, D. A.; Burek, B. G.; Ashour-Abdalla, M.

    1980-01-01

Significant progress has been made in understanding intense electrostatic waves near the upper hybrid resonance frequency in terms of the theory of multiharmonic cyclotron emission using a classical loss-cone distribution function as a model. Recent observations by Hawkeye 1 and GEOS 1 have verified the existence of loss-cone distributions in association with the intense electrostatic wave events; however, other observations by Hawkeye and ISEE have indicated that loss cones are not always observable during the wave events, and in fact other forms of free energy may also be responsible for the instability. Now, for the first time, a positively sloped feature in the perpendicular distribution function has been uniquely identified with intense electrostatic wave activity. Correspondingly, we suggest that the theory is flexible under substantial modifications of the model distribution function.

  18. Connecting source aggregating areas with distributive regions via Optimal Transportation theory.

    NASA Astrophysics Data System (ADS)

    Lanzoni, S.; Putti, M.

    2016-12-01

We study the application of Optimal Transport (OT) theory to the transfer of water and sediments from a distributed aggregating source to a distributing area connected by an erodible hillslope. Starting from the Monge-Kantorovich equations, we derive a global energy functional that nonlinearly combines the cost of constructing the drainage network over the entire domain and the cost of water and sediment transportation through the network. It can be shown that the minimization of this functional is equivalent to the infinite-time solution of a system of diffusion partial differential equations coupled with transient ordinary differential equations, which closely resemble the classical conservation laws for water and sediment mass and momentum. We present several numerical simulations applied to realistic test cases. For example, the solution of the proposed model forms network configurations that share strong similarities with rill channels formed on a hillslope. At a larger scale, we obtain promising results in simulating the network patterns that ensure a progressive and continuous transition from a drainage area to a distributive receiving region.

  19. Fieldable computer system for determining gamma-ray pulse-height distributions, flux spectra, and dose rates from Little Boy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moss, C.E.; Lucas, M.C.; Tisinger, E.W.

    1984-01-01

Our system consists of a LeCroy 3500 data acquisition system with a built-in CAMAC crate and eight bismuth-germanate detectors 7.62 cm in diameter and 7.62 cm long. Gamma-ray pulse-height distributions are acquired simultaneously for up to eight positions. The system was very carefully calibrated and characterized from 0.1 to 8.3 MeV using gamma-ray spectra from a variety of radioactive sources. By fitting the pulse-height distributions from the sources with a function containing 17 parameters, we determined theoretical response functions. We use these response functions to unfold the distributions to obtain flux spectra. A flux-to-dose-rate conversion curve based on the work of Dimbylow and Francis is then used to obtain dose rates. Direct use of measured spectra and flux-to-dose-rate curves to obtain dose rates avoids the errors that can arise from spectrum dependence in simple gamma-ray dosimeter instruments. We present some gamma-ray doses for the Little Boy assembly operated at low power. These results can be used to determine the exposures of the Hiroshima survivors and thus aid in the establishment of radiation exposure limits for the nuclear industry.
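The unfold-then-convert chain described in this record can be shown schematically. In the sketch below, the 3×3 response matrix and the flux-to-dose factors are invented placeholders (not the calibrated 17-parameter response functions of the record): a measured pulse-height distribution is unfolded by least squares and the recovered flux spectrum is weighted into a dose rate.

```python
import numpy as np

# Illustrative (invented) response matrix R: column j is the pulse-height
# distribution produced by a unit flux in energy bin j.
R = np.array([[0.80, 0.10, 0.05],
              [0.15, 0.70, 0.15],
              [0.05, 0.20, 0.80]])

true_flux = np.array([100.0, 50.0, 20.0])   # hypothetical incident spectrum
measured = R @ true_flux                    # forward-folded pulse heights

# Unfold: solve R @ flux = measured in the least-squares sense.
flux, *_ = np.linalg.lstsq(R, measured, rcond=None)

# A flux-to-dose-rate conversion curve (invented numbers) then weights
# each energy bin of the unfolded spectrum into a dose rate.
dose_factors = np.array([0.01, 0.05, 0.12])
dose_rate = flux @ dose_factors
```

With a well-conditioned response matrix the unfolded spectrum reproduces the incident one; in practice regularization is needed because measured distributions are noisy and the response matrix is much larger.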

  20. Characterization of continuously distributed cortical water diffusion rates with a stretched-exponential model.

    PubMed

    Bennett, Kevin M; Schmainda, Kathleen M; Bennett, Raoqiong Tong; Rowe, Daniel B; Lu, Hanbing; Hyde, James S

    2003-10-01

Experience with diffusion-weighted imaging (DWI) shows that signal attenuation is consistent with a multicompartmental theory of water diffusion in the brain. The source of this so-called nonexponential behavior is a topic of debate, because the cerebral cortex contains considerable microscopic heterogeneity and is therefore difficult to model. To account for this heterogeneity and understand its implications for current models of diffusion, a stretched-exponential function was developed to describe diffusion-related signal decay as a continuous distribution of sources decaying at different rates, with no assumptions made about the number of participating sources. DWI experiments were performed using a spin-echo diffusion-weighted pulse sequence with b-values of 500-6500 s/mm² in six rats. Signal attenuation curves were fit to a stretched-exponential function, and 20% of the voxels were better fit to the stretched-exponential model than to a biexponential model, even though the latter model had one more adjustable parameter. Based on the calculated intravoxel heterogeneity measure, the cerebral cortex contains considerable heterogeneity in diffusion. The use of a distributed diffusion coefficient (DDC) is suggested to measure mean intravoxel diffusion rates in the presence of such heterogeneity. Copyright 2003 Wiley-Liss, Inc.
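The stretched-exponential decay S(b) = S0·exp(−(b·DDC)^α) can be fit without a nonlinear optimizer by linearizing twice in logarithms: ln(−ln(S/S0)) = α·ln(b) + α·ln(DDC). A minimal sketch on noise-free synthetic data (the parameter values are assumed for illustration, not the paper's fits):

```python
import numpy as np

def stretched_exp(b, s0, ddc, alpha):
    # Signal decay from a continuous distribution of diffusion rates:
    # S(b) = S0 * exp(-(b * DDC)**alpha), with 0 < alpha <= 1.
    return s0 * np.exp(-(b * ddc) ** alpha)

# Synthetic DWI signal on b-values like those in the record (s/mm^2).
b = np.linspace(500.0, 6500.0, 13)
s0, ddc, alpha = 1.0, 8e-4, 0.7          # assumed ground truth
signal = stretched_exp(b, s0, ddc, alpha)

# Linearize and fit a straight line: slope = alpha, intercept = alpha*ln(DDC).
y = np.log(-np.log(signal / s0))
slope, intercept = np.polyfit(np.log(b), y, 1)
alpha_fit = slope
ddc_fit = np.exp(intercept / slope)
```

On real data, noise makes the double-logarithm unstable at small attenuation, so a nonlinear fit of the original form is usually preferred; the linearization is shown only to make the model's structure explicit.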

  1. Automated Power Systems Management (APSM)

    NASA Technical Reports Server (NTRS)

    Bridgeforth, A. O.

    1981-01-01

    A breadboard power system incorporating autonomous functions of monitoring, fault detection and recovery, command and control was developed, tested and evaluated to demonstrate technology feasibility. Autonomous functions including switching of redundant power processing elements, individual load fault removal, and battery charge/discharge control were implemented by means of a distributed microcomputer system within the power subsystem. Three local microcomputers provide the monitoring, control and command function interfaces between the central power subsystem microcomputer and the power sources, power processing and power distribution elements. The central microcomputer is the interface between the local microcomputers and the spacecraft central computer or ground test equipment.

  2. THE ENVIRONMENT AND DISTRIBUTION OF EMITTING ELECTRONS AS A FUNCTION OF SOURCE ACTIVITY IN MARKARIAN 421

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mankuzhiyil, Nijil; Ansoldi, Stefano; Persic, Massimo

    2011-05-20

For the high-frequency-peaked BL Lac object Mrk 421, we study the variation of the spectral energy distribution (SED) as a function of source activity, from quiescent to active. We use a fully automated χ²-minimization procedure, instead of the 'eyeball' procedure more commonly used in the literature, to model nine SED data sets with a one-zone synchrotron self-Compton (SSC) model and examine how the model parameters vary with source activity. The latter issue can finally be addressed now, because simultaneous broadband SEDs (spanning from optical to very-high-energy photons) have finally become available. Our results suggest that in Mrk 421 the magnetic field (B) decreases with source activity, whereas the electron spectrum's break energy (γ_br) and the Doppler factor (δ) increase; the other SSC parameters turn out to be uncorrelated with source activity. In the SSC framework, these results are interpreted in a picture where the synchrotron power and peak frequency remain constant with varying source activity, through a combination of decreasing magnetic field and increasing number density of γ ≤ γ_br electrons: since this leads to an increased electron-photon scattering efficiency, the resulting Compton power increases, and so does the total (= synchrotron plus Compton) emission.

  3. "WWW.MDTF.ORG": a World Wide Web forum for developing open-architecture, freely distributed, digital teaching file software by participant consensus.

    PubMed

    Katzman, G L; Morris, D; Lauman, J; Cochella, C; Goede, P; Harnsberger, H R

    2001-06-01

To foster a community-supported evaluation process for open-source digital teaching file (DTF) development and maintenance. The mechanisms used to support this process include standard web browsers, web servers, forum software, and custom additions to the forum software to potentially enable a mediated voting protocol. The web server will also serve as a focal point for beta and release software distribution, which is the desired end goal of this process. We foresee that www.mdtf.org will provide for widespread distribution of open-source DTF software that will include function and interface design decisions from community participation in the website forums.

  4. Influence of the spectral power distribution of a LED on the illuminance responsivity of a photometer

    NASA Astrophysics Data System (ADS)

    Sametoglu, Ferhat

    2008-09-01

The accuracy of photometric quantities measured with a photometer head is determined by the value of the spectral mismatch correction factor (c(St, Ss)), which is defined as a function of the spectral power distributions of the light sources, as well as by the illuminance responsivity of the photometer head used. This factor is especially important when measuring photometric quantities of light-emitting diode (LED) sources, which radiate within relatively narrow spectral bands compared with other optical sources. Variations of the illuminance responsivities of various V(λ)-adapted photometer heads are discussed. High-power colored LEDs manufactured by Lumileds Lighting Co. were used as light sources, and their relative spectral power distributions (RSPDs) were measured using a spectrometer-based optical setup. The dependence of the c(St, Ss) factors of three types of photometer heads (f1' = 1.4%, f1' = 0.8% and f1' = 0.5%) on wavelength, and the influence of these factors on the illuminance responsivities of the photometer heads, are presented.
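The spectral mismatch correction factor has the standard form c(St, Ss) = [∫St·V dλ · ∫Ss·s_rel dλ] / [∫St·s_rel dλ · ∫Ss·V dλ], where s_rel is the photometer's relative spectral responsivity. The sketch below computes it with invented Gaussian stand-ins for all spectra (not the measured RSPDs or responsivities of this paper):

```python
import numpy as np

lam = np.arange(380.0, 781.0, 1.0)        # wavelength grid, nm (1 nm steps)

def gaussian(x, center, width):
    return np.exp(-0.5 * ((x - center) / width) ** 2)

def integral(f, lam):
    # rectangle rule on the uniform wavelength grid
    return np.sum(f) * (lam[1] - lam[0])

# Invented spectral stand-ins (illustration only):
V = gaussian(lam, 555.0, 45.0)            # stand-in for the V(lambda) curve
s_rel = gaussian(lam, 560.0, 50.0)        # photometer relative responsivity
S_cal = np.ones_like(lam)                 # broadband calibration source Ss
S_led = gaussian(lam, 470.0, 12.0)        # narrow-band blue LED St

def smcf(S_test, S_ref, V, s_rel, lam):
    # c(St, Ss) = [int St*V * int Ss*s_rel] / [int St*s_rel * int Ss*V]
    num = integral(S_test * V, lam) * integral(S_ref * s_rel, lam)
    den = integral(S_test * s_rel, lam) * integral(S_ref * V, lam)
    return num / den

c = smcf(S_led, S_cal, V, s_rel, lam)
```

By construction c = 1 when the test source equals the calibration source; the further the photometer's responsivity departs from V(λ) in the band where the LED emits, the further c departs from 1, which is why narrow-band LEDs are the demanding case.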

  5. Distributed least-squares estimation of a remote chemical source via convex combination in wireless sensor networks.

    PubMed

    Cao, Meng-Li; Meng, Qing-Hao; Zeng, Ming; Sun, Biao; Li, Wei; Ding, Cheng-Jun

    2014-06-27

    This paper investigates the problem of locating a continuous chemical source using the concentration measurements provided by a wireless sensor network (WSN). Such a problem exists in various applications: eliminating explosives or drugs, detecting the leakage of noxious chemicals, etc. The limited power and bandwidth of WSNs have motivated collaborative in-network processing which is the focus of this paper. We propose a novel distributed least-squares estimation (DLSE) method to solve the chemical source localization (CSL) problem using a WSN. The DLSE method is realized by iteratively conducting convex combination of the locally estimated chemical source locations in a distributed manner. Performance assessments of our method are conducted using both simulations and real experiments. In the experiments, we propose a fitting method to identify both the release rate and the eddy diffusivity. The results show that the proposed DLSE method can overcome the negative interference of local minima and saddle points of the objective function, which would hinder the convergence of local search methods, especially in the case of locating a remote chemical source.
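The convex-combination step at the heart of the DLSE idea can be shown in isolation. In this toy sketch (invented network and noise; the local least-squares stage is replaced by pre-computed noisy location estimates), nodes on a ring repeatedly form convex combinations of their own and their neighbors' estimates and reach consensus:

```python
import numpy as np

rng = np.random.default_rng(0)
true_source = np.array([3.0, -2.0])       # hypothetical source location

# Each node's local estimate (noisy copies, standing in for the minimizer
# of its local concentration-residual least-squares cost).
local_estimates = true_source + rng.normal(0.0, 0.5, size=(8, 2))

# Ring topology: each node convexly combines its own estimate with its two
# neighbors' (weights are nonnegative and sum to 1), iterated to consensus.
x = local_estimates.copy()
for _ in range(200):
    x = 0.5 * x + 0.25 * np.roll(x, 1, axis=0) + 0.25 * np.roll(x, -1, axis=0)

consensus = x[0]                          # all rows converge to the same point
```

Because the combination weights are symmetric and doubly stochastic, the iteration preserves the mean, so every node converges to the average of the local estimates; this averaging is what lets the network escape the local minima that would trap a single local search.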

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohanty, Soumya D.; Nayak, Rajesh K.

The space based gravitational wave detector LISA (Laser Interferometer Space Antenna) is expected to observe a large population of Galactic white dwarf binaries whose collective signal is likely to dominate instrumental noise at observational frequencies in the range 10⁻⁴ to 10⁻³ Hz. The motion of LISA modulates the signal of each binary in both frequency and amplitude, the exact modulation depending on the source direction and frequency. Starting with the observed response of one LISA interferometer and assuming only Doppler modulation due to the orbital motion of LISA, we show how the distribution of the entire binary population in frequency and sky position can be reconstructed using a tomographic approach. The method is linear, and the reconstruction of a delta-function distribution, corresponding to an isolated binary, yields a point spread function (psf). An arbitrary distribution and its reconstruction are related via smoothing with this psf. Exploratory results are reported demonstrating the recovery of binary sources in the presence of white Gaussian noise.

  7. NOTE: Development of modified voxel phantoms for the numerical dosimetric reconstruction of radiological accidents involving external sources: implementation in SESAME tool

    NASA Astrophysics Data System (ADS)

    Courageot, Estelle; Sayah, Rima; Huet, Christelle

    2010-05-01

    Estimating the dose distribution in a victim's body is a relevant indicator in assessing biological damage from exposure in the event of a radiological accident caused by an external source. When the dose distribution is evaluated with a numerical anthropomorphic model, the posture and morphology of the victim have to be reproduced as realistically as possible. Several years ago, IRSN developed a specific software application, called the simulation of external source accident with medical images (SESAME), for the dosimetric reconstruction of radiological accidents by numerical simulation. This tool combines voxel geometry and the MCNP(X) Monte Carlo computer code for radiation-material interaction. This note presents a new functionality in this software that enables the modelling of a victim's posture and morphology based on non-uniform rational B-spline (NURBS) surfaces. The procedure for constructing the modified voxel phantoms is described, along with a numerical validation of this new functionality using a voxel phantom of the RANDO tissue-equivalent physical model.

  8. Development of modified voxel phantoms for the numerical dosimetric reconstruction of radiological accidents involving external sources: implementation in SESAME tool.

    PubMed

    Courageot, Estelle; Sayah, Rima; Huet, Christelle

    2010-05-07

    Estimating the dose distribution in a victim's body is a relevant indicator in assessing biological damage from exposure in the event of a radiological accident caused by an external source. When the dose distribution is evaluated with a numerical anthropomorphic model, the posture and morphology of the victim have to be reproduced as realistically as possible. Several years ago, IRSN developed a specific software application, called the simulation of external source accident with medical images (SESAME), for the dosimetric reconstruction of radiological accidents by numerical simulation. This tool combines voxel geometry and the MCNP(X) Monte Carlo computer code for radiation-material interaction. This note presents a new functionality in this software that enables the modelling of a victim's posture and morphology based on non-uniform rational B-spline (NURBS) surfaces. The procedure for constructing the modified voxel phantoms is described, along with a numerical validation of this new functionality using a voxel phantom of the RANDO tissue-equivalent physical model.

  9. A new treatment planning formalism for catheter-based beta sources used in intravascular brachytherapy.

    PubMed

    Patel, N S; Chiu-Tsao, S T; Tsao, H S; Harrison, L B

    2001-01-01

Intravascular brachytherapy (IVBT) is an emerging modality for the treatment of atherosclerotic lesions in the artery. As part of the refinement of this rapidly evolving modality, the current simplistic dosimetry approach based on a fixed-point prescription must be superseded by a rigorous dosimetry method employing image-based three-dimensional (3D) treatment planning. The goals of 3D IVBT treatment planning calculations include (1) achieving high accuracy in a slim cylindrical region of interest, (2) accounting for the edge effect around the source ends, and (3) supporting multiple dwell positions. The formalism recommended by Task Group 60 (TG-60) of the American Association of Physicists in Medicine (AAPM) is applicable to gamma sources, as well as short beta sources with lengths less than twice the beta particle range. However, for elongated beta sources and/or seed trains with lengths greater than twice the beta range, a new formalism is required to handle their distinctly different dose characteristics. Specifically, these characteristics consist of (a) flat isodose curves in the central region, (b) a steep dose gradient at the source ends, and (c) exponential dose fall-off in the radial direction. In this paper, we present a novel formalism that evolved from TG-60 in maintaining the dose rate as a product of four key quantities. We propose to employ cylindrical coordinates (R, Z, phi), which are more natural and suitable to the slim cylindrical shape of the volume of interest, as opposed to the spherical coordinate system (r, theta, phi) used in the TG-60 formalism. The four quantities used in this formalism are (1) the distribution factor, H(R, Z); (2) the modulation function, M(R, Z); (3) the transverse dose function, h(R); and (4) the reference dose rate at 2 mm along the perpendicular bisector, D(R0 = 2 mm, Z0 = 0). The first three are counterparts of the geometry factor, the anisotropy function and the radial dose function in the TG-60 formalism, respectively. The reference dose rate is identical to that recommended by TG-60. The distribution factor is intended to resemble the dose profile due to the spatial distribution of activity in the elongated beta source, and is a modified Fermi-Dirac function in mathematical form. A further advantage of this formalism is the slowly varying nature of the modulation function, which allows more accurate treatment planning calculations based on interpolation. The transverse dose function describes the exponential fall-off of the dose in the radial direction and can be fit by an exponential or a polynomial. At the same time, the decoupled nature of these dose-related quantities facilitates image-based 3D treatment planning calculations for long beta sources used in IVBT. The new formalism also supports the dosimetry involving multiple dwell positions required for lesions longer than the source length. An example of the utilization of this formalism is illustrated for a 90Y coil source in a carbon dioxide-filled balloon. The pertinent dosimetric parameters were generated and tabulated for future use.
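The product structure of such a formalism can be written down directly. The sketch below uses invented parameter values (source length, attenuation constant, Fermi-Dirac edge width) purely to illustrate the shape dose rate = reference dose rate × distribution factor × modulation function × transverse dose function; it is not the paper's tabulated data.

```python
import math

# Illustrative parameter values (assumed, not the paper's fitted data).
R0, Z0 = 2.0, 0.0        # reference point: 2 mm on the perpendicular bisector
L = 30.0                 # active source length, mm (assumed)
MU = 1.1                 # effective radial attenuation, mm^-1 (assumed)

def H(R, Z, half_len=L / 2, edge=0.6):
    # Distribution factor: modified Fermi-Dirac profile along Z, flat in the
    # centre and rolling off steeply at the source ends (R-independent here).
    return 1.0 / (1.0 + math.exp((abs(Z) - half_len) / edge))

def M(R, Z):
    # Modulation function: slowly varying residual; identity in this sketch.
    return 1.0

def h(R):
    # Transverse dose function: exponential radial fall-off, normalized at R0.
    return math.exp(-MU * (R - R0))

def dose_rate(R, Z, d_ref=1.0):
    # Dose rate as the product of the four quantities of the formalism.
    return d_ref * (H(R, Z) / H(R0, Z0)) * M(R, Z) * h(R)
```

At the reference point the product reduces to the reference dose rate; along Z the Fermi-Dirac factor reproduces the flat central isodoses and the steep edge, and along R the exponential reproduces the radial fall-off, which is exactly the decoupling the formalism exploits for interpolation.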

  10. Study of dust particle charging in weakly ionized inert gases taking into account the nonlocality of the electron energy distribution function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Filippov, A. V., E-mail: fav@triniti.ru; Dyatko, N. A.; Kostenko, A. S.

    2014-11-15

The charging of dust particles in weakly ionized inert gases at atmospheric pressure has been investigated. The conditions under which the gas is ionized by an external source, a beam of fast electrons, are considered. The electron energy distribution function in argon, krypton, and xenon has been calculated for three rates of gas ionization by fast electrons: 10¹³, 10¹⁴, and 10¹⁵ cm⁻¹. A model of dust particle charging with allowance for the nonlocal formation of the electron energy distribution function in the region of strong plasma quasi-neutrality violation around the dust particle is described. The nonlocality is taken into account in an approximation where the distribution function is a function of only the total electron energy. Comparative calculations of the dust particle charge with and without allowance for the nonlocality of the electron energy distribution function have been performed. Allowance for the nonlocality is shown to lead to a noticeable increase in the dust particle charge due to the influence of the group of hot electrons from the tail of the distribution function. It has been established that the screening constant virtually coincides with the smallest screening constant determined according to the asymptotic theory of screening with the electron transport and recombination coefficients in an unperturbed plasma.

  11. Deriving the Contribution of Blazars to the Fermi-LAT Extragalactic γ-ray Background at E > 10 GeV with Efficiency Corrections and Photon Statistics

    DOE PAGES

    Di Mauro, M.; Manconi, S.; Zechlin, H. -S.; ...

    2018-03-29

Here, the Fermi Large Area Telescope (LAT) Collaboration has recently released the Third Catalog of Hard Fermi-LAT Sources (3FHL), which contains 1556 sources detected above 10 GeV with seven years of Pass 8 data. Building upon the 3FHL results, we investigate the flux distribution of sources at high Galactic latitudes (|b| > 20°), which are mostly blazars. We use two complementary techniques: (1) a source-detection efficiency correction method and (2) an analysis of pixel photon count statistics with the one-point probability distribution function (1pPDF). With the first method, using realistic Monte Carlo simulations of the γ-ray sky, we calculate the efficiency of the LAT to detect point sources. This enables us to find the intrinsic source-count distribution at photon fluxes down to 7.5 × 10⁻¹² ph cm⁻² s⁻¹. With this method, we detect a flux break at (3.5 ± 0.4) × 10⁻¹¹ ph cm⁻² s⁻¹ with a significance of at least 5.4σ. The power-law indices of the source-count distribution above and below the break are 2.09 ± 0.04 and 1.07 ± 0.27, respectively. This result is confirmed with the 1pPDF method, which has a sensitivity reach of ~10⁻¹¹ ph cm⁻² s⁻¹. Integrating the derived source-count distribution above the sensitivity of our analysis, we find that (42 ± 8)% of the extragalactic γ-ray background originates from blazars.

  12. Deriving the Contribution of Blazars to the Fermi-LAT Extragalactic γ-ray Background at E > 10 GeV with Efficiency Corrections and Photon Statistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di Mauro, M.; Manconi, S.; Zechlin, H. -S.

Here, the Fermi Large Area Telescope (LAT) Collaboration has recently released the Third Catalog of Hard Fermi-LAT Sources (3FHL), which contains 1556 sources detected above 10 GeV with seven years of Pass 8 data. Building upon the 3FHL results, we investigate the flux distribution of sources at high Galactic latitudes (|b| > 20°), which are mostly blazars. We use two complementary techniques: (1) a source-detection efficiency correction method and (2) an analysis of pixel photon count statistics with the one-point probability distribution function (1pPDF). With the first method, using realistic Monte Carlo simulations of the γ-ray sky, we calculate the efficiency of the LAT to detect point sources. This enables us to find the intrinsic source-count distribution at photon fluxes down to 7.5 × 10⁻¹² ph cm⁻² s⁻¹. With this method, we detect a flux break at (3.5 ± 0.4) × 10⁻¹¹ ph cm⁻² s⁻¹ with a significance of at least 5.4σ. The power-law indices of the source-count distribution above and below the break are 2.09 ± 0.04 and 1.07 ± 0.27, respectively. This result is confirmed with the 1pPDF method, which has a sensitivity reach of ~10⁻¹¹ ph cm⁻² s⁻¹. Integrating the derived source-count distribution above the sensitivity of our analysis, we find that (42 ± 8)% of the extragalactic γ-ray background originates from blazars.

  13. Resources | Division of Cancer Prevention

    Cancer.gov

Manual of Operations Version 3, 12/13/2012 (PDF, 162KB) Database Sources Consortium for Functional Glycomics databases Design Studies Related to the Development of Distributed, Web-based European Carbohydrate Databases (EUROCarbDB)

  14. The pressure distribution for biharmonic transmitting array: theoretical study

    NASA Astrophysics Data System (ADS)

    Baranowska, A.

    2005-03-01

The aim of this paper is a theoretical analysis of the finite-amplitude wave interaction problem for a biharmonic transmitting array. We assume that the array consists of 16 circular pistons of the same dimensions, grouped into two sections. Two different arrangements of the radiating elements were considered. In this situation the radiating surface is non-continuous and without axial symmetry. The mathematical model was built on the basis of the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation. To solve the problem, the finite-difference method was applied. The on-axis pressure amplitude of waves of different frequencies as a function of distance from the source, the transverse pressure distribution of these waves at fixed distances from the source, and the pressure amplitude distribution at fixed planes were examined. In particular, changes in the normalized pressure amplitude of the difference-frequency wave were studied. The paper presents the mathematical model and some results of theoretical investigations obtained for different values of the source parameters.

  15. Calculated occultation profiles of Io and the hot spots

    NASA Technical Reports Server (NTRS)

    Mcewen, A. S.; Soderblom, L. A.; Matson, D. L.; Johnson, T. V.; Lunine, J. I.

    1986-01-01

    Occultations of Io by other Galilean satellites in 1985 provide a means to locate volcanic hot spots and to model their temperatures. The expected time variations in the integral reflected and emitted radiation during the occultations are computed as a function of wavelength (visual to 8.7 microns). The best current ephemerides were used to calculate the geometry of each event as viewed from earth. Visual reflectances were modeled from global mosaics of Io. Thermal emission from the hot spots was calculated from Voyager 1 IRIS observations and, for regions unobserved by IRIS, from a model based on the distribution of low-albedo features. The occultations may help determine (1) the location and temperature distribution of Loki; (2) the source(s) of excess emission in the region from longitude 50 deg to 200 deg; and (3) the distribution of small, high-temperature sources.

  16. Description of small-scale fluctuations in the diffuse X-ray background.

    NASA Technical Reports Server (NTRS)

    Cavaliere, A.; Friedland, A.; Gursky, H.; Spada, G.

    1973-01-01

    An analytical study of the fluctuations on a small angular scale expected in the diffuse X-ray background in the presence of unresolved sources is presented. The source population is described by a function N(S), giving the number of sources per unit solid angle and unit apparent flux S. The distribution of observed flux, s, in each angular resolution element of a complete sky survey is represented by a function Q(s). The analytical relation between the successive, higher-order moments of N(S) and Q(s) is described. The goal of reconstructing the source population from the study of the moments of Q(s) of order higher than the second (i.e., the rms fluctuations) is discussed.
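    The lowest-order version of the moment relation described here can be checked numerically. For a Poisson number of unresolved sources per resolution element, with fluxes drawn independently from N(S), the compound-Poisson identities give ⟨s⟩ = n⟨S⟩ and Var(s) = n⟨S²⟩. The flux distribution below (a Pareto law) is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_Q(n_mean, n_pixels, draw_flux):
    """Total flux s per resolution element: a Poisson number of sources,
    each contributing an independent flux drawn from draw_flux."""
    counts = rng.poisson(n_mean, size=n_pixels)
    return np.array([draw_flux(c).sum() for c in counts])

# Illustrative power-law flux distribution: Pareto with x_min = 1, alpha = 4.5.
draw = lambda c: rng.pareto(4.5, size=c) + 1.0

s = simulate_Q(n_mean=5.0, n_pixels=50_000, draw_flux=draw)

S = draw(1_000_000)                 # large sample of the flux distribution itself
mean_th = 5.0 * S.mean()            # compound-Poisson mean:     n * <S>
var_th = 5.0 * (S ** 2).mean()      # compound-Poisson variance: n * <S^2>
```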

  17. Probe measurements of the electron velocity distribution function in beams: Low-voltage beam discharge in helium

    NASA Astrophysics Data System (ADS)

    Sukhomlinov, V.; Mustafaev, A.; Timofeev, N.

    2018-04-01

    Previously developed methods based on the single-sided probe technique are altered and applied to measure the anisotropic angular spread and narrow energy distribution functions of charged particle (electron and ion) beams. The conventional method is not suitable for some configurations, such as low-voltage beam discharges, electron beams accelerated in near-wall and near-electrode layers, and vacuum electron beam sources. To determine the range of applicability of the proposed method, simple algebraic relationships between the charged particle energies and their angular distribution are obtained. The method is verified for the case of the collisionless mode of a low-voltage He beam discharge, where the traditional method for finding the electron distribution function with the help of a Legendre polynomial expansion is not applicable. This leads to the development of a physical model of the formation of the electron distribution function in a collisionless low-voltage He beam discharge. The results of a numerical calculation based on Monte Carlo simulations are in good agreement with the experimental data obtained using the new method.

  18. Debiased estimates for NEO orbits, absolute magnitudes, and source regions

    NASA Astrophysics Data System (ADS)

    Granvik, Mikael; Morbidelli, Alessandro; Jedicke, Robert; Bolin, Bryce T.; Bottke, William; Beshore, Edward C.; Vokrouhlicky, David; Nesvorny, David; Michel, Patrick

    2017-10-01

    The debiased absolute-magnitude and orbit distributions as well as source regions for near-Earth objects (NEOs) provide a fundamental frame of reference for studies on individual NEOs as well as on more complex population-level questions. We present a new four-dimensional model of the NEO population that describes debiased steady-state distributions of semimajor axis (a), eccentricity (e), inclination (i), and absolute magnitude (H). We calibrate the model using NEO detections by the 703 and G96 stations of the Catalina Sky Survey (CSS) during 2005-2012 corresponding to objects with 17

  19. Stability metrics for multi-source biomedical data based on simplicial projections from probability distribution distances.

    PubMed

    Sáez, Carlos; Robles, Montserrat; García-Gómez, Juan M

    2017-02-01

    Biomedical data may be composed of individuals generated from distinct, meaningful sources. Due to possible contextual biases in the processes that generate data, there may exist an undesirable and unexpected variability among the probability distribution functions (PDFs) of the source subsamples, which, when uncontrolled, may lead to inaccurate or unreproducible research results. Classical statistical methods may have difficulty uncovering such variability when dealing with multi-modal, multi-type, multivariate data. This work proposes two metrics for the analysis of stability among multiple data sources, robust to the aforementioned conditions and defined in the context of data quality assessment: a global probabilistic deviation and a source probabilistic outlyingness. The first provides a bounded degree of the global multi-source variability, designed as an estimator equivalent to the notion of normalized standard deviation of PDFs. The second provides a bounded degree of the dissimilarity of each source to a latent central distribution. The metrics are based on the projection of a simplex geometrical structure constructed from the Jensen-Shannon distances among the sources' PDFs. The metrics were evaluated and demonstrated correct behaviour on a simulated benchmark and on real multi-source biomedical data from the UCI Heart Disease data set. Biomedical data quality assessment based on the proposed stability metrics may improve the efficiency and effectiveness of biomedical data exploitation and research.
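    A minimal sketch of the building blocks, assuming discretized PDFs: compute each source's Jensen-Shannon distance to a pooled central distribution as a crude outlyingness score. This illustrates the distance underlying the metrics, not the authors' exact simplex projection; the central distribution here is simple pooling, and the example PDFs are invented.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon  # square root of the JS divergence

def source_outlyingness(pdfs):
    """pdfs: (n_sources, n_bins) array, rows summing to 1.
    Returns each source's JS distance (base 2, bounded by 1) to the pooled PDF."""
    pdfs = np.asarray(pdfs, dtype=float)
    central = pdfs.mean(axis=0)          # stand-in for the latent central distribution
    return np.array([jensenshannon(p, central, base=2) for p in pdfs])

a = np.array([0.25, 0.25, 0.25, 0.25])
b = np.array([0.24, 0.26, 0.25, 0.25])
c = np.array([0.70, 0.10, 0.10, 0.10])   # a contextually biased source
scores = source_outlyingness([a, b, c])
```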

  20. SU-F-T-336: A Quick Auto-Planning (QAP) Method for Patient Intensity Modulated Radiotherapy (IMRT)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peng, J; Zhang, Z; Wang, J

    2016-06-15

    Purpose: The aim of this study is to develop a quick auto-planning system that permits fast patient IMRT planning with conformal dose to the target, without manual field alignment or time-consuming dose distribution optimization. Methods: The planning target volumes (PTVs) of the source and target patients were projected onto the iso-center plane in certain beam's-eye-view directions to derive 2D projected shapes. Assuming the target interior was isotropic, for each beam direction a boundary analysis in polar coordinates was performed to map the source shape boundary to the target shape boundary and derive the source-to-target shape mapping function. The derived shape mapping function was used to morph the source beam aperture into the target beam aperture over all segments in each beam direction. The target beam weights were re-calculated to deliver the same dose to the reference point (iso-center) as the source beam did in the source plan. The approach was tested on two rectum patients (one source patient and one target patient). Results: The IMRT planning time with QAP was 5 seconds on a laptop computer. The dose volume histograms and the dose distribution showed that the target patient had similar PTV dose coverage and OAR dose sparing to the source patient. Conclusion: The QAP system can instantly and automatically complete IMRT planning without dose optimization.
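    A toy version of the polar boundary mapping, assuming star-shaped projections about the isocenter: sample each shape's boundary radius as a function of polar angle and scale every aperture point by the target-to-source radius ratio at its angle. The shapes and the sample point below are hypothetical, not taken from the study.

```python
import numpy as np

def map_point(pt, source_r, target_r):
    """Map a 2D aperture point via r' = r * target_r(theta) / source_r(theta),
    where source_r/target_r return boundary radius as a function of polar angle."""
    x, y = pt
    r, theta = np.hypot(x, y), np.arctan2(y, x)
    scale = target_r(theta) / source_r(theta)
    return (r * scale * np.cos(theta), r * scale * np.sin(theta))

# Simplest possible boundaries: circles. Mapping a radius-1 shape onto a
# radius-2 shape doubles every radial coordinate.
src = lambda theta: 1.0
tgt = lambda theta: 2.0
mapped = map_point((0.5, 0.0), src, tgt)
```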

  1. Lifestyle-Adjusted Function: Variation beyond BADL and IADL Competencies

    ERIC Educational Resources Information Center

    Albert, Steven M.; Bear-Lehman, Jane; Burkhardt, Ann

    2009-01-01

    Purpose: Using the Activity Card Sort (ACS), we derived a measure of lifestyle-adjusted function and examined the distribution of this measure and its correlates in a community sample of older adults at risk for disability transitions. Design and Methods: Participants in the Sources of Independence in the Elderly project (n = 375) completed the…

  2. deFUME: Dynamic exploration of functional metagenomic sequencing data.

    PubMed

    van der Helm, Eric; Geertz-Hansen, Henrik Marcus; Genee, Hans Jasper; Malla, Sailesh; Sommer, Morten Otto Alexander

    2015-07-31

    Functional metagenomic selections represent a powerful technique that is widely applied for identification of novel genes from complex metagenomic sources. However, whereas hundreds to thousands of clones can be easily generated and sequenced over a few days of experiments, analyzing the data is time consuming and constitutes a major bottleneck for experimental researchers in the field. Here we present the deFUME web server, an easy-to-use web-based interface for processing, annotation and visualization of functional metagenomics sequencing data, tailored to meet the requirements of non-bioinformaticians. The web server integrates multiple analysis steps into one single workflow: read assembly, open reading frame prediction, and annotation with BLAST, InterPro and GO classifiers. Analysis results are visualized in an online dynamic web interface. The deFUME web server provides a fast track from raw sequence to a comprehensive visual data overview that facilitates effortless inspection of gene function, clustering and distribution. The web server is available at cbs.dtu.dk/services/deFUME/ and the source code is distributed at github.com/EvdH0/deFUME.

  3. Synthetic neutron camera and spectrometer in JET based on AFSI-ASCOT simulations

    NASA Astrophysics Data System (ADS)

    Sirén, P.; Varje, J.; Weisen, H.; Koskela, T.; JET contributors

    2017-09-01

    The ASCOT Fusion Source Integrator (AFSI) has been used to calculate neutron production rates and spectra corresponding to the JET 19-channel neutron camera (KN3) and the time-of-flight spectrometer (TOFOR) as ideal diagnostics, without detector-related effects. AFSI calculates fusion product distributions in 4D, based on Monte Carlo integration from arbitrary reactant distribution functions. The distribution functions were calculated by the ASCOT Monte Carlo particle orbit following code for thermal, NBI and ICRH particle reactions. Fusion cross-sections were defined based on the Bosch-Hale model and both DD and DT reactions have been included. Neutrons generated by AFSI-ASCOT simulations have already been applied as a neutron source of the Serpent neutron transport code in ITER studies. Additionally, AFSI has been selected to be a main tool as the fusion product generator in the complete analysis calculation chain: ASCOT - AFSI - SERPENT (neutron and gamma transport Monte Carlo code) - APROS (system and power plant modelling code), which encompasses the plasma as an energy source, heat deposition in plant structures as well as cooling and balance-of-plant in DEMO applications and other reactor relevant analyses. This conference paper presents the first results and validation of the AFSI DD fusion model for different auxiliary heating scenarios (NBI, ICRH) with very different fast particle distribution functions. Both calculated quantities (production rates and spectra) have been compared with experimental data from KN3 and synthetic spectrometer data from ControlRoom code. No unexplained differences have been observed. In future work, AFSI will be extended for synthetic gamma diagnostics and additionally, AFSI will be used as part of the neutron transport calculation chain to model real diagnostics instead of ideal synthetic diagnostics for quantitative benchmarking.
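    The core operation of such a fusion-source integrator, Monte Carlo integration of a rate coefficient over two reactant velocity distributions, can be sketched with a toy cross-section rather than the Bosch-Hale model: choosing σ ∝ 1/v_rel makes σ·v_rel a known constant, so the estimate can be checked exactly. The Maxwellian spread and the constant below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigma_v_avg(sample_v1, sample_v2, sigma, n=100_000):
    """Monte Carlo estimate of <sigma(v_rel) * v_rel> over two
    3D reactant velocity distributions."""
    v_rel = np.linalg.norm(sample_v1(n) - sample_v2(n), axis=1)
    return np.mean(sigma(v_rel) * v_rel)

maxwellian = lambda n: rng.normal(0.0, 1.0e6, size=(n, 3))  # thermal spread ~1e6 m/s
toy_sigma = lambda v: 1.0e-22 / v    # toy cross-section: sigma * v_rel = 1e-22 exactly

rate_coeff = sigma_v_avg(maxwellian, maxwellian, toy_sigma)
# For this toy cross-section the exact answer is 1e-22, whatever the distributions.
```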

  4. Invariant models in the inversion of gravity and magnetic fields and their derivatives

    NASA Astrophysics Data System (ADS)

    Ialongo, Simone; Fedi, Maurizio; Florio, Giovanni

    2014-11-01

    In potential field inversion problems we usually solve underdetermined systems, and realistic solutions may be obtained by introducing a depth-weighting function in the objective function. The choice of the exponent of such a power law is crucial. It has been suggested that it be determined from the field decay due to a single source block; alternatively, it has been defined as the structural index of the investigated source distribution. In both cases, when k-order derivatives of the potential field are considered, the depth-weighting exponent has to be increased by k with respect to that of the potential field itself, in order to obtain consistent source model distributions. We show instead that invariant and realistic source-distribution models are obtained using the same depth-weighting exponent for the magnetic field and for its k-order derivatives. A similar behavior also occurs in the gravity case. In practice, we found that the depth-weighting exponent is invariant for a given source model and equal to that of the corresponding magnetic field, in the magnetic case, and of the 1st derivative of the gravity field, in the gravity case. In the case of the regularized inverse problem, with depth weighting and general constraints, the mathematical demonstration of such invariance is difficult, because of its non-linearity and of its variable form due to the different constraints used. However, tests performed on a variety of synthetic cases seem to confirm the invariance of the depth-weighting exponent. A final consideration regards the role of the regularization parameter; we show that the regularization can severely affect the depth to the source, because the estimated depth tends to increase proportionally with the size of the regularization parameter. Hence, some care is needed in handling the combined effect of the regularization parameter and depth weighting.
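    The role of the depth-weighting exponent can be shown with a small sketch, under simplifying assumptions: a point-source kernel decays as a power of depth, and the common weighting form w(z) = (z + z0)^(-β/2), with β chosen so that w² mirrors the kernel decay, removes the systematic preference for shallow cells. The geometry below (vertical attraction of a point mass observed directly above it, decaying as 1/z²) is illustrative only.

```python
import numpy as np

def depth_weighting(z, beta, z0=0.0):
    """Depth-weighting function w(z) = (z + z0)^(-beta/2)."""
    return (np.asarray(z, dtype=float) + z0) ** (-beta / 2.0)

# The vertical attraction of a point mass at depth z, observed right above it,
# decays as 1/z^2. With beta = 2, w(z)^2 = z^-2 mirrors that decay exactly, so
# the weighted kernel is flat with depth and no longer favors shallow sources.
z = np.linspace(100.0, 1000.0, 10)
kernel = 1.0 / z**2
flattened = kernel / depth_weighting(z, beta=2.0) ** 2
```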

  5. Energy spectra unfolding of fast neutron sources using the group method of data handling and decision tree algorithms

    NASA Astrophysics Data System (ADS)

    Hosseini, Seyed Abolfazl; Afrakoti, Iman Esmaili Paeen

    2017-04-01

    Accurate unfolding of the energy spectrum of a neutron source gives important information about unknown neutron sources. The obtained information is useful in many areas like nuclear safeguards, nuclear nonproliferation, and homeland security. In the present study, the energy spectrum of a poly-energetic fast neutron source is reconstructed using the developed computational codes based on the Group Method of Data Handling (GMDH) and Decision Tree (DT) algorithms. The neutron pulse height distribution (neutron response function) in the considered NE-213 liquid organic scintillator has been simulated using the developed MCNPX-ESUT computational code (MCNPX-Energy engineering of Sharif University of Technology). The developed computational codes based on the GMDH and DT algorithms use some data for training, testing and validation steps. In order to prepare the required data, 4000 randomly generated energy spectra distributed over 52 bins are used. The randomly generated energy spectra and the simulated neutron pulse height distributions by MCNPX-ESUT for each energy spectrum are used as the output and input data. Since there is no need to solve the inverse problem with an ill-conditioned response matrix, the unfolded energy spectrum has the highest accuracy. The 241Am-9Be and 252Cf neutron sources are used in the validation step of the calculation. The unfolded energy spectra for the used fast neutron sources have an excellent agreement with the reference ones. Also, the accuracy of the unfolded energy spectra obtained using the GMDH is slightly better than those obtained from the DT. The results obtained in the present study have good accuracy in comparison with the previously published paper based on the logsig and tansig transfer functions.
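    A minimal sketch of the decision-tree branch of this workflow, with everything synthetic: generate random "true" spectra, fold them through a made-up detector response matrix to obtain pulse-height distributions, and train a multi-output regression tree to invert the mapping. scikit-learn stands in for the authors' in-house code, and the response matrix is invented, not an NE-213 response.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(42)
n_bins, n_train = 52, 4000

# Made-up lower-triangular response: each spectrum bin spills into lower channels.
response = np.tril(rng.uniform(0.5, 1.0, size=(n_bins, n_bins)))
spectra = rng.random((n_train, n_bins))        # random "true" energy spectra
pulse_heights = spectra @ response.T           # folded (measured) distributions

# Train the inverse mapping: pulse-height distribution -> energy spectrum.
tree = DecisionTreeRegressor().fit(pulse_heights, spectra)
unfolded = tree.predict(pulse_heights[:1])     # unfold one known spectrum
```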

  6. Design methodology for micro-discrete planar optics with minimum illumination loss for an extended source.

    PubMed

    Shim, Jongmyeong; Park, Changsu; Lee, Jinhyung; Kang, Shinill

    2016-08-08

    Recently, studies have examined techniques for modeling the light distribution of light-emitting diodes (LEDs) for various applications owing to their low power consumption, longevity, and light weight. The energy mapping technique, a design method that matches the energy distributions of an LED light source and a target area, has been the focus of active research because of its design efficiency and accuracy. However, these studies have not considered the effects of the emitting area of the LED source; design accuracy is therefore limited for small, high-power applications with a short distance between the light source and the optical system. A design method that compensates for the light distribution of an extended source after an initial point-source optics design was proposed to overcome these limits, but its time-consuming process and limited design accuracy under multiple iterations raised the need for a new design method that considers the extended source in the initial design stage. This study proposed a method for designing discrete planar optics that controls the light distribution and minimizes optical loss for an extended source, and verified the method experimentally. First, the extended source was modeled theoretically, and a design method for discrete planar optics with the optimum groove angle through energy mapping was proposed. To verify the design method, discrete planar optics were designed for LED flash illumination applications. In addition, discrete planar optics for LED illumination were designed and fabricated to create a uniform illuminance distribution. Optical characterization of these structures showed that the design was optimal; i.e., we plotted the optical losses as a function of the groove angle and found a clear minimum. Simulations and measurements showed that an efficient optical design was achieved for an extended source.

  7. FieldTrip: Open source software for advanced analysis of MEG, EEG, and invasive electrophysiological data.

    PubMed

    Oostenveld, Robert; Fries, Pascal; Maris, Eric; Schoffelen, Jan-Mathijs

    2011-01-01

    This paper describes FieldTrip, an open source software package that we developed for the analysis of MEG, EEG, and other electrophysiological data. The software is implemented as a MATLAB toolbox and includes a complete set of consistent and user-friendly high-level functions that allow experimental neuroscientists to analyze experimental data. It includes algorithms for simple and advanced analysis, such as time-frequency analysis using multitapers, source reconstruction using dipoles, distributed sources and beamformers, connectivity analysis, and nonparametric statistical permutation tests at the channel and source level. The implementation as toolbox allows the user to perform elaborate and structured analyses of large data sets using the MATLAB command line and batch scripting. Furthermore, users and developers can easily extend the functionality and implement new algorithms. The modular design facilitates the reuse in other software packages.

  8. Comparison of the bidirectional reflectance distribution function of various surfaces

    NASA Astrophysics Data System (ADS)

    Fernandez, Rene; Seasholtz, Richard G.; Oberle, Lawrence G.; Kadambi, Jaikrishnan R.

    1989-04-01

    This paper describes the development and use of a system to measure the bidirectional reflectance distribution function (BRDF) of various surfaces. The BRDF measurements are to be used in the analysis and design of optical measurement systems such as laser anemometers. An Ar-ion laser (514 nm) was the light source. Preliminary results are presented for eight samples: two glossy black paints, two flat black paints, black glass, sand-blasted Al, unworked Al, and a white paint. A BaSO4 white reflectance standard was used as the reference sample throughout the tests.
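    For reference, the BRDF is the ratio of reflected radiance to incident irradiance, f_r = dL_r/dE_i. A standard sanity check used with white standards such as BaSO4 is that an ideal Lambertian surface of albedo ρ has the constant BRDF ρ/π, so integrating f_r·cosθ over the hemisphere returns the albedo. A quick numerical check of that normalization (the albedo value is arbitrary):

```python
import numpy as np

def lambertian_brdf(albedo):
    """Constant BRDF of an ideal diffuse (Lambertian) surface: rho / pi."""
    return albedo / np.pi

def directional_hemispherical(brdf_const, n_theta=2000):
    """Integrate f_r * cos(theta) over the hemisphere; must return the albedo."""
    theta = np.linspace(0.0, np.pi / 2.0, n_theta)
    # d(omega) = sin(theta) dtheta dphi; the phi integral contributes 2*pi.
    integrand = brdf_const * np.cos(theta) * np.sin(theta)
    return 2.0 * np.pi * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(theta))

rho = directional_hemispherical(lambertian_brdf(0.8))  # should recover 0.8
```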

  9. Correlating Near-Source Rock Damage from Single-Hole Explosions to Seismic Waves (Postprint)

    DTIC Science & Technology

    2012-05-07

    Technical paper; approved for public release, distribution unlimited. Air Force Research Laboratory, Space Vehicles Directorate, 3550 Aberdeen Ave SE, Kirtland AFB, NM 87117-5776. Only a fragment of the abstract survives extraction; it lists near-source rock properties measured as a function of pressure, including fluid permeability, electrical resistivity, and rock strength.

  10. Seasonal Variability of Middle Latitude Ozone in the Lowermost Stratosphere Derived from Probability Distribution Functions

    NASA Technical Reports Server (NTRS)

    Cerniglia, M. C.; Douglass, A. R.; Rood, R. B.; Sparling, L. C.; Nielsen, J. E.

    1999-01-01

    We present a study of the distribution of ozone in the lowermost stratosphere with the goal of understanding the relative contribution to the observations of air of either distinctly tropospheric or stratospheric origin. The air in the lowermost stratosphere is divided into two population groups based on Ertel's potential vorticity at 300 hPa. High [low] potential vorticity at 300 hPa suggests that the tropopause is low [high], and the identification of the two groups helps to account for dynamic variability. Conditional probability distribution functions are used to define the statistics of the mix from both observations and model simulations. Two data sources are chosen. First, several years of ozonesonde observations are used to exploit the high vertical resolution. Second, observations made by the Halogen Occultation Experiment [HALOE] on the Upper Atmosphere Research Satellite [UARS] are used to understand the impact on the results of the spatial limitations of the ozonesonde network. The conditional probability distribution functions are calculated at a series of potential temperature surfaces spanning the domain from the midlatitude tropopause to surfaces higher than the mean tropical tropopause [about 380K]. Despite the differences in spatial and temporal sampling, the probability distribution functions are similar for the two data sources. Comparisons with the model demonstrate that the model maintains a mix of air in the lowermost stratosphere similar to the observations. The model also simulates a realistic annual cycle. By using the model, possible mechanisms for the maintenance of mix of air in the lowermost stratosphere are revealed. The relevance of the results to the assessment of the environmental impact of aircraft effluence is discussed.

  12. Leptospirosis risk around a potential source of infection

    NASA Astrophysics Data System (ADS)

    Loaiza-Echeverry, Erica; Hincapié-Palacio, Doracelly; Ochoa Acosta, Jesús; Ospina Giraldo, Juan

    2015-05-01

    Leptospirosis is a bacterial zoonosis with worldwide distribution and a multiform clinical spectrum in humans and animals. Its etiologic agents are the pathogenic species of Leptospira, which cause manifestations ranging from mild to serious, such as Weil's disease and the pulmonary hemorrhagic syndrome, with case-fatality proportions of 10%-50%. It is an emerging urban health problem due to the growth of marginal neighborhoods lacking basic sanitation and to increased rodent populations. The presence of rodents and the probability of contact with their urine determine the likelihood of human infection. In this paper, we simulate the spatial distribution of human leptospirosis infection risk as a function of proximity to rodent burrows, considered as potential point sources of infection. We used the Bessel function K0(r/α), where r is the distance from the potential point source and α is a scale parameter in meters. Simulation inputs were published leptospirosis incidence rates (range: 5 to 79 per 10,000) and distances of 100 to 5000 meters from the source of infection. We obtained an adequate fit between the function and the simulated data. The risk of infection increases with proximity to the potential source. This estimation can guide the design of effective control and prevention measures.
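    The distance-decay of risk described above can be sketched directly with SciPy's modified Bessel function of the second kind; the scale parameter below is illustrative, not the paper's fitted value.

```python
import numpy as np
from scipy.special import k0  # modified Bessel function of the second kind, order 0

def relative_risk(r, alpha=500.0):
    """Relative infection risk at distance r (m) from a rodent burrow,
    proportional to K0(r / alpha) with scale parameter alpha in meters
    (alpha = 500 m is an assumed value for illustration)."""
    return k0(np.asarray(r, dtype=float) / alpha)

distances = np.array([100.0, 500.0, 1000.0, 5000.0])
risk = relative_risk(distances)   # strictly decreasing with distance
```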

  13. Fast GPU-based Monte Carlo simulations for LDR prostate brachytherapy.

    PubMed

    Bonenfant, Éric; Magnoux, Vincent; Hissoiny, Sami; Ozell, Benoît; Beaulieu, Luc; Després, Philippe

    2015-07-07

    The aim of this study was to evaluate the potential of bGPUMCD, a Monte Carlo algorithm executed on Graphics Processing Units (GPUs), for fast dose calculations in permanent prostate implant dosimetry. It also aimed to validate a low dose rate brachytherapy source in terms of TG-43 metrics and to use this source to compute dose distributions for permanent prostate implants in very short times. The physics of bGPUMCD was reviewed and extended to include Rayleigh scattering and fluorescence from photoelectric interactions for all materials involved. The radial and anisotropy functions were obtained for the Nucletron SelectSeed in TG-43 conditions. These functions were compared to those found in the MD Anderson Imaging and Radiation Oncology Core brachytherapy source registry, which are considered the TG-43 reference values. After appropriate calibration of the source, permanent prostate implant dose distributions were calculated for four patients and compared to an already validated Geant4 algorithm. The radial function calculated from bGPUMCD showed excellent agreement (differences within 1.3%) with TG-43 accepted values. The anisotropy functions at r = 1 cm and r = 4 cm were within 2% of TG-43 values for angles over 17.5°. For permanent prostate implants, Monte Carlo-based dose distributions with a statistical uncertainty of 1% or less for the target volume were obtained in 30 s or less for 1 × 1 × 1 mm³ calculation grids. Dosimetric indices were very similar (within 2.7%) to those obtained with a validated, independent Monte Carlo code (Geant4) performing the calculations for the same cases in a much longer time (tens of minutes to more than an hour). bGPUMCD is a promising code that makes it possible to envision the use of Monte Carlo techniques in a clinical environment, with sub-minute execution times on a standard workstation. Future work will explore the use of this code with an inverse planning method to provide a complete Monte Carlo-based planning solution.
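    For context, the TG-43 quantities validated here (dose-rate constant Λ, radial dose function g_L(r), and 2D anisotropy function F(r,θ)) combine in the standard AAPM TG-43 dose-rate equation for a line source:

```latex
\dot{D}(r,\theta) \;=\; S_K \,\Lambda\,
  \frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\, g_L(r)\, F(r,\theta),
\qquad r_0 = 1~\text{cm},\quad \theta_0 = 90^\circ
```

where S_K is the air-kerma strength and G_L the line-source geometry function; the radial and anisotropy functions compared against the MD Anderson registry values are precisely the g_L(r) and F(r,θ) factors above.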

  15. Citation analytics: Data exploration and comparative analyses of CiteScores of Open Access and Subscription-Based publications indexed in Scopus (2014-2016).

    PubMed

    Atayero, Aderemi A; Popoola, Segun I; Egeonu, Jesse; Oludayo, Olumuyiwa

    2018-08-01

    Citation is one of the important metrics that are used in measuring the relevance and the impact of research publications. The potentials of citation analytics may be exploited to understand the gains of publishing scholarly peer-reviewed research outputs in either Open Access (OA) sources or Subscription-Based (SB) sources in the bid to increase citation impact. However, relevant data required for such comparative analysis must be freely accessible for evidence-based findings and conclusions. In this data article, citation scores ( CiteScores ) of 2542 OA sources and 15,040 SB sources indexed in Scopus from 2014 to 2016 were presented and analyzed based on a set of five inclusion criteria. A robust dataset, which contains the CiteScores of OA and SB publication sources included, is attached as supplementary material to this data article to facilitate further reuse. Descriptive statistics and frequency distributions of OA CiteScores and SB CiteScores are presented in tables. Boxplot representations and scatter plots are provided to show the statistical distributions of OA CiteScores and SB CiteScores across the three sub-categories (Book Series, Journal, and Trade Journal). Correlation coefficient and p-value matrices are made available within the data article. In addition, Probability Density Functions (PDFs) and Cumulative Distribution Functions (CDFs) of OA CiteScores and SB CiteScores are computed and the results are presented using tables and graphs. Furthermore, Analysis of Variance (ANOVA) and multiple comparison post-hoc tests are conducted to understand the statistical difference (and its significance, if any) in the citation impact of OA publication sources and SB publication source based on CiteScore . 
In the long run, the data provided in this article will help policy makers and researchers in Higher Education Institutions (HEIs) to identify the appropriate publication source type and category for dissemination of scholarly research findings with maximum citation impact.
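
The analysis pipeline described above (descriptive statistics, empirical CDFs, one-way ANOVA) can be sketched as follows. The lognormal draws are synthetic stand-ins for the article's supplementary CiteScore dataset; none of the numbers here come from the real data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
oa = rng.lognormal(mean=0.2, sigma=0.6, size=2542)    # stand-in OA CiteScores
sb = rng.lognormal(mean=0.4, sigma=0.6, size=15040)   # stand-in SB CiteScores

# Descriptive statistics analogous to the article's tables.
desc = {name: (x.mean(), np.median(x), x.std())
        for name, x in (("OA", oa), ("SB", sb))}

def ecdf(x):
    """Empirical CDF: the discrete analogue of the article's CDF plots."""
    xs = np.sort(x)
    return xs, np.arange(1, len(xs) + 1) / len(xs)

# One-way ANOVA testing for a difference in mean CiteScore between groups.
f_stat, p_value = stats.f_oneway(oa, sb)
print(f"OA mean {desc['OA'][0]:.2f}, SB mean {desc['SB'][0]:.2f}, "
      f"F = {f_stat:.1f}, p = {p_value:.2g}")
```

A post-hoc multiple-comparison step (e.g. over the three sub-categories) would follow the same pattern, one test per pair of groups.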

  16. Aerial Measuring System Sensor Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. S. Detwiler

    2002-04-01

    This project deals with modeling the Aerial Measuring System (AMS) fixed-wing and rotary-wing sensor systems, which are critical U.S. Department of Energy National Nuclear Security Administration (NNSA) Consequence Management assets. The fixed-wing system is critical in detecting lost or stolen radiography or medical sources, or mixed fission products as from a commercial power plant release, at high flying altitudes. The helicopter is typically used at lower altitudes to determine ground contamination, such as in measuring americium from a plutonium ground dispersal during a cleanup. Since the sensitivity of these instruments as a function of altitude is crucial in estimating detection limits of various ground contaminations and necessary count times, a characterization of their sensitivity as a function of altitude and energy is needed. Experimental data at altitude as well as laboratory benchmarks are important to ensure that the strong effects of air attenuation are modeled correctly. The modeling presented here is the first attempt at such a characterization of the equipment for flying altitudes. The sodium iodide (NaI) sensors utilized with these systems were characterized using the Monte Carlo N-Particle code (MCNP) developed at Los Alamos National Laboratory. For the fixed-wing system, calculations modeled the spectral response for the 3-element NaI detector pod and High-Purity Germanium (HPGe) detector, in the relevant energy range of 50 keV to 3 MeV. NaI detector responses were simulated for both point and distributed surface sources as a function of gamma energy and flying altitude. For point sources, photopeak efficiencies were calculated for a zero radial distance and an offset equal to the altitude. For distributed sources approximating an infinite plane, gross count efficiencies were calculated and normalized to a uniform surface deposition of 1 μCi/m².
The helicopter calculations modeled the transport of americium-241 (241Am), as this is the ''marker'' isotope utilized by the system for Pu detection. The helicopter sensor array consists of two six-element NaI detector pods, and the NaI pod detector response was simulated for a distributed surface source of 241Am as a function of altitude.
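
The altitude dependence being characterized can be roughed out analytically for a point source directly beneath the aircraft: an inverse-square solid-angle factor times exponential air attenuation. The μ/ρ value below is an approximate literature value for air near 662 keV; the detector area and intrinsic efficiency are invented for illustration and are not AMS parameters.

```python
import math

MU_RHO_AIR = 0.0077   # mass attenuation coeff. of air near 662 keV, m^2/kg (approx.)
RHO_AIR = 1.2         # sea-level air density, kg/m^3

def photopeak_rate(gammas_per_s, altitude_m, det_area_m2=0.1, intrinsic_eff=0.3):
    """Photopeak count rate from a point source at zero radial offset:
    emission rate x solid-angle fraction x air attenuation x efficiency."""
    solid_angle_frac = det_area_m2 / (4.0 * math.pi * altitude_m ** 2)
    attenuation = math.exp(-MU_RHO_AIR * RHO_AIR * altitude_m)
    return gammas_per_s * solid_angle_frac * attenuation * intrinsic_eff

r100 = photopeak_rate(3.7e10, 100.0)   # ~1 Ci-equivalent source from 100 m
r300 = photopeak_rate(3.7e10, 300.0)
print(f"rate at 100 m: {r100:.0f} c/s, at 300 m: {r300:.1f} c/s")
```

The rate falls faster than the bare inverse-square law, which is why attenuation must be benchmarked against data taken at altitude.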

  17. Column Store for GWAC: A High-cadence, High-density, Large-scale Astronomical Light Curve Pipeline and Distributed Shared-nothing Database

    NASA Astrophysics Data System (ADS)

    Wan, Meng; Wu, Chao; Wang, Jing; Qiu, Yulei; Xin, Liping; Mullender, Sjoerd; Mühleisen, Hannes; Scheers, Bart; Zhang, Ying; Nes, Niels; Kersten, Martin; Huang, Yongpan; Deng, Jinsong; Wei, Jianyan

    2016-11-01

    The ground-based wide-angle camera array (GWAC), a part of the SVOM space mission, will search for various types of optical transients by continuously imaging a field of view (FOV) of 5000 square degrees every 15 s. Each exposure consists of 36 × 4k × 4k pixels, typically resulting in 36 × ~175,600 extracted sources. For a modern time-domain astronomy project like GWAC, which produces massive amounts of data with a high cadence, it is challenging to search for short-timescale transients in both real-time and archived data, and to build long-term light curves for variable sources. Here, we develop a high-cadence, high-density light curve pipeline (HCHDLP) to process the GWAC data in real time, and design a distributed shared-nothing database to manage the massive amount of archived data, which will be used to generate a source catalog with more than 100 billion records during 10 years of operation. First, we develop HCHDLP based on the column-store DBMS MonetDB, taking advantage of MonetDB's high performance when applied to massive data processing. To realize the real-time functionality of HCHDLP, we optimize the pipeline in its source association function, including both time and space complexity, from outside the database (SQL semantics) and inside (RANGE-JOIN implementation), as well as in its strategy for building complex light curves. The optimized source association function is accelerated by three orders of magnitude. Second, we build a distributed database using a two-level time partitioning strategy via the MERGE TABLE and REMOTE TABLE technology of MonetDB. Intensive tests validate that our database architecture achieves both linear scalability in response time and concurrent access by multiple users. In summary, our studies provide guidance for a solution to GWAC for real-time data processing and management of massive data.
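
The source-association step that was optimized can be sketched as a range join in plain Python: each new detection is matched against catalogue sources whose coordinates fall inside a tolerance box, using a sorted index so only candidates in the RA range are examined. The function name, column layout, tolerance, and nearest-match tie-break are illustrative, not the pipeline's actual SQL.

```python
import numpy as np

def associate(new_ra, new_dec, cat_ra, cat_dec, tol=1e-3):
    """For each new source, return the index of the nearest catalogue
    source within a box of half-width `tol` degrees, or -1 if none."""
    order = np.argsort(cat_ra)
    ra_sorted = cat_ra[order]
    matches = np.full(len(new_ra), -1)
    for i, (ra, dec) in enumerate(zip(new_ra, new_dec)):
        lo = np.searchsorted(ra_sorted, ra - tol, side="left")
        hi = np.searchsorted(ra_sorted, ra + tol, side="right")
        cand = order[lo:hi]                                   # RA range pass
        cand = cand[np.abs(cat_dec[cand] - dec) <= tol]       # Dec filter
        if cand.size:
            d2 = (cat_ra[cand] - ra) ** 2 + (cat_dec[cand] - dec) ** 2
            matches[i] = cand[np.argmin(d2)]
    return matches

cat_ra = np.array([10.0, 10.5, 11.0])
cat_dec = np.array([20.0, 20.5, 21.0])
m = associate(np.array([10.5001, 50.0]), np.array([20.5, 0.0]), cat_ra, cat_dec)
print(m)   # first source matched, second unmatched
```

In the real system the same semantics are expressed as a RANGE-JOIN inside MonetDB so the association runs where the data live.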

  18. Potential Sources of Polarized Light from a Plant Canopy

    NASA Technical Reports Server (NTRS)

    Vanderbilt, Vern; Daughtry, Craig; Dahlgren, Robert

    2016-01-01

    Field measurements have demonstrated that sunlight polarized during a first surface reflection by shiny leaves dominates the optical polarization of the light reflected by shiny-leafed plant canopies having approximately spherical leaf angle probability density functions ("Leaf Angle Distributions" - LAD). Yet for other canopies - specifically those without shiny leaves and/or spherical LADs - potential sources of optically polarized light may not always be obvious. Here we identify possible sources of polarized light within those other canopies and speculate on the ecologically important information polarization measurements of those sources might contain.

  19. Monitoring and control requirement definition study for Dispersed Storage and Generation (DSG), volume 1

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Twenty-four functional requirements were prepared under six categories and serve to indicate how to integrate dispersed storage and generation (DSG) systems with the distribution and other portions of the electric utility system. Results indicate that there are no fundamental technical obstacles to prevent the connection of dispersed storage and generation to the distribution system. However, a communication system of some sophistication is required to integrate the distribution system and the dispersed generation sources for effective control. The large span of generator sizes, from 10 kW to 30 MW, means that a variety of remote monitoring and control approaches may be required. Increased effort is required to develop demonstration equipment to perform the DSG monitoring and control functions and to acquire experience with this equipment in the utility distribution environment.

  20. On the continuity of the stationary state distribution of DPCM

    NASA Astrophysics Data System (ADS)

    Naraghi-Pour, Morteza; Neuhoff, David L.

    1990-03-01

    Continuity and singularity properties of the stationary state distribution of differential pulse code modulation (DPCM) are explored. Two-level DPCM (i.e., delta modulation) operating on a first-order autoregressive source is considered, and it is shown that, when the magnitude of the DPCM prediction coefficient is between zero and one-half, the stationary state distribution is singular continuous; i.e., it is not discrete but concentrates on an uncountable set with a Lebesgue measure of zero. Consequently, it cannot be represented with a probability density function. For prediction coefficients with magnitude greater than or equal to one-half, the distribution is pure, i.e., either absolutely continuous and representable with a density function, or singular. This problem is compared to the well-known and still substantially unsolved problem of symmetric Bernoulli convolutions.
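
The recursion behind these results can be simulated directly. Idealizing the two-level quantizer output as a fair i.i.d. ±1 coin (in real DPCM the output depends on the state, so this is a simplification), the state obeys s ← a·s + e, whose stationary law is the symmetric Bernoulli convolution Σ aᵏeₖ. A sketch for a = 0.4, inside the singular regime:

```python
import numpy as np

def stationary_sample(a, n_steps=2000, seed=0):
    """Iterate s <- a*s + e with i.i.d. e = ±1; for large n_steps the
    value is numerically a draw from the stationary distribution,
    i.e. from the symmetric Bernoulli convolution sum(a**k * e_k)."""
    rng = np.random.default_rng(seed)
    s = 0.0
    for e in rng.choice([-1.0, 1.0], size=n_steps):
        s = a * s + e
    return s

a = 0.4   # |a| < 1/2: the singular (Cantor-like) case
samples = np.array([stationary_sample(a, seed=k) for k in range(200)])

# The support lies in [-1/(1-a), 1/(1-a)] = [-5/3, 5/3], and for a < 1/2
# the central interval (-(1-2a)/(1-a), (1-2a)/(1-a)) = (-1/3, 1/3) is a
# gap: the support is totally disconnected, consistent with singularity.
print("sample range:", samples.min(), samples.max())
```

For a ≥ 1/2 the gaps close and the images of the two affine maps overlap, which is where the hard Bernoulli-convolution questions begin.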

  1. Bioactive phytochemicals in wheat: Extraction, analysis, processing, and functional properties

    USDA-ARS?s Scientific Manuscript database

    Whole wheat provides a rich source of bioactive phytochemicals namely, phenolic acids, carotenoids, tocopherols, alkylresorcinols, arabinoxylans, benzoxazinoids, phytosterols, and lignans. This review provides information on the distribution, extractability, analysis, and nutraceutical properties of...

  2. Transfer function analysis of thermospheric perturbations

    NASA Technical Reports Server (NTRS)

    Mayr, H. G.; Harris, I.; Varosi, F.; Herrero, F. A.; Spencer, N. W.

    1986-01-01

    Applying perturbation theory, a spectral model in terms of vector spherical harmonics (Legendre polynomials) is used to describe the short-term thermospheric perturbations originating in the auroral regions. The source may be Joule heating, particle precipitation or ExB ion drift-momentum coupling. A multiconstituent atmosphere is considered, allowing for the collisional momentum exchange between species including Ar, O2, N2, O, He and H. The coupled equations of energy, mass and momentum conservation are solved simultaneously for the major species N2 and O. Applying homogeneous boundary conditions, the integration is carried out from the Earth's surface up to 700 km. In the analysis, the spherical harmonics are treated as eigenfunctions, assuming that the Earth's rotation (and prevailing circulation) do not significantly affect perturbations with periods which are typically much less than one day. Under these simplifying assumptions, and given a particular source distribution in the vertical, a two-dimensional transfer function is constructed to describe the three-dimensional response of the atmosphere. In order of increasing horizontal wave numbers (order of polynomials), this transfer function reveals five components. To compile the transfer function, the numerical computations are very time-consuming (about 100 hours on a VAX for one particular vertical source distribution). However, given the transfer function, the atmospheric response in space and time (using a Fourier integral representation) can be constructed in a few seconds of central-processing-unit time. This model is applied in a case study of wind and temperature measurements on Dynamics Explorer B, which show features characteristic of a ringlike excitation source in the auroral oval. The data can be interpreted as gravity waves which are focused (and amplified) in the polar region and then are reflected to propagate toward lower latitudes.

  3. Transfer function analysis of thermospheric perturbations

    NASA Astrophysics Data System (ADS)

    Mayr, H. G.; Harris, I.; Varosi, F.; Herrero, F. A.; Spencer, N. W.

    1986-06-01

    Applying perturbation theory, a spectral model in terms of vector spherical harmonics (Legendre polynomials) is used to describe the short-term thermospheric perturbations originating in the auroral regions. The source may be Joule heating, particle precipitation or ExB ion drift-momentum coupling. A multiconstituent atmosphere is considered, allowing for the collisional momentum exchange between species including Ar, O2, N2, O, He and H. The coupled equations of energy, mass and momentum conservation are solved simultaneously for the major species N2 and O. Applying homogeneous boundary conditions, the integration is carried out from the Earth's surface up to 700 km. In the analysis, the spherical harmonics are treated as eigenfunctions, assuming that the Earth's rotation (and prevailing circulation) do not significantly affect perturbations with periods which are typically much less than one day. Under these simplifying assumptions, and given a particular source distribution in the vertical, a two-dimensional transfer function is constructed to describe the three-dimensional response of the atmosphere. In order of increasing horizontal wave numbers (order of polynomials), this transfer function reveals five components. To compile the transfer function, the numerical computations are very time-consuming (about 100 hours on a VAX for one particular vertical source distribution). However, given the transfer function, the atmospheric response in space and time (using a Fourier integral representation) can be constructed in a few seconds of central-processing-unit time. This model is applied in a case study of wind and temperature measurements on Dynamics Explorer B, which show features characteristic of a ringlike excitation source in the auroral oval. The data can be interpreted as gravity waves which are focused (and amplified) in the polar region and then are reflected to propagate toward lower latitudes.

  4. Resolution analysis of finite fault source inversion using one- and three-dimensional Green's functions 2. Combining seismic and geodetic data

    USGS Publications Warehouse

    Wald, D.J.; Graves, R.W.

    2001-01-01

    Using numerical tests for a prescribed heterogeneous earthquake slip distribution, we examine the importance of accurate Green's functions (GF) for finite fault source inversions which rely on coseismic GPS displacements and leveling line uplift, alone and in combination with near-source strong ground motions. The static displacements, while sensitive to the three-dimensional (3-D) structure, are less so than seismic waveforms and thus are an important contribution, particularly when used in conjunction with waveform inversions. For numerical tests of an earthquake source and data distribution modeled after the 1994 Northridge earthquake, a joint geodetic and seismic inversion allows for reasonable recovery of the heterogeneous slip distribution on the fault. In contrast, inaccurate 3-D GFs or multiple 1-D GFs allow only partial recovery of the slip distribution given strong motion data alone. Likewise, using just the GPS and leveling line data requires significant smoothing for inversion stability, and hence only a blurred image of the prescribed slip is recovered. Although the half-space approximation for computing the surface static deformation field is no longer justifiable, given the high level of accuracy of current GPS data acquisition and the computed differences between 3-D and half-space surface displacements, a layered 1-D approximation to 3-D Earth structure provides an adequate representation of the surface displacement field. However, even with the half-space approximation, geodetic data can provide additional slip resolution in the joint seismic and geodetic inversion provided the a priori fault location and geometry are correct. Nevertheless, the sensitivity of the static displacements to the Earth structure warrants caution in the interpretation of surface displacements, particularly those recorded at monuments located in or near basin environments. Copyright 2001 by the American Geophysical Union.
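
The joint use of seismic and geodetic data with a smoothing constraint can be sketched as a stacked linear least-squares problem. Random matrices stand in for the real Green's functions, and the relative data weight and smoothing weight are illustrative, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(2)
n_patches = 20
slip_true = np.sin(np.linspace(0.0, np.pi, n_patches))   # prescribed slip

G_seis = rng.normal(size=(60, n_patches))   # stand-in seismic GF matrix
G_geod = rng.normal(size=(15, n_patches))   # stand-in geodetic GF matrix
d_seis = G_seis @ slip_true                 # synthetic waveform data
d_geod = G_geod @ slip_true                 # synthetic GPS/levelling data

# Second-difference (Laplacian) operator: the smoothing constraint.
L = np.diff(np.eye(n_patches), n=2, axis=0)

def joint_invert(w_geod=1.0, w_smooth=0.1):
    """Stack both data sets plus smoothing rows; solve by least squares."""
    A = np.vstack([G_seis, w_geod * G_geod, w_smooth * L])
    b = np.concatenate([d_seis, w_geod * d_geod, np.zeros(L.shape[0])])
    slip, *_ = np.linalg.lstsq(A, b, rcond=None)
    return slip

slip_est = joint_invert()
print("max slip recovery error:", float(np.abs(slip_est - slip_true).max()))
```

Raising `w_smooth` reproduces the blurring effect described above for geodetic-only inversions: stability improves, but the recovered slip loses its sharp features.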

  5. Retrieval of Garstang's emission function from all-sky camera images

    NASA Astrophysics Data System (ADS)

    Kocifaj, Miroslav; Solano Lamphar, Héctor Antonio; Kundracik, František

    2015-10-01

    The emission function of ground-based light sources predetermines the skyglow features to a large extent, while most mathematical models used to predict night sky brightness require information on this function. The radiant intensity distribution on a clear sky is experimentally determined as a function of zenith angle using the theoretical approach published only recently in MNRAS, 439, 3405-3413. We have made the experiments in two localities in Slovakia and Mexico by means of two digital single-lens reflex professional cameras operating with different lenses that limit the system's field-of-view to either 180° or 167°. The purpose of using two cameras was to identify variances between the two different apertures. Images are taken at different distances from an artificial light source (a city) with the intention of determining the ratio of zenith radiance to horizontal irradiance. Subsequently, the information on the fraction of the light radiated directly into the upward hemisphere (F) is extracted. The results show that inexpensive devices can properly identify the upward emissions with adequate reliability as long as the clear-sky radiance distribution is dominated by the largest ground-based light source. Highly unstable turbidity conditions can also make the parameter F difficult or even impossible to retrieve. Measurements at low elevation angles should be avoided due to the potentially parasitic effect of direct light emissions from luminaires surrounding the measuring site.

  6. A three-dimensional point process model for the spatial distribution of disease occurrence in relation to an exposure source.

    PubMed

    Grell, Kathrine; Diggle, Peter J; Frederiksen, Kirsten; Schüz, Joachim; Cardis, Elisabeth; Andersen, Per K

    2015-10-15

    We study methods for how to include the spatial distribution of tumours when investigating the relation between brain tumours and the exposure from radio frequency electromagnetic fields caused by mobile phone use. Our suggested point process model is adapted from studies investigating spatial aggregation of a disease around a source of potential hazard in environmental epidemiology, where now the source is the preferred ear of each phone user. In this context, the spatial distribution is a distribution over a sample of patients rather than over multiple disease cases within one geographical area. We show how the distance relation between tumour and phone can be modelled nonparametrically and, with various parametric functions, how covariates can be included in the model and how to test for the effect of distance. To illustrate the models, we apply them to a subset of the data from the Interphone Study, a large multinational case-control study on the association between brain tumours and mobile phone use. Copyright © 2015 John Wiley & Sons, Ltd.

  7. Advanced Unstructured Grid Generation for Complex Aerodynamic Applications

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2008-01-01

    A new approach for distribution of grid points on the surface and in the volume has been developed and implemented in the NASA unstructured grid generation code VGRID. In addition to the point and line sources of prior work, the new approach utilizes surface and volume sources for automatic curvature-based grid sizing and convenient point distribution in the volume. A new exponential growth function produces smoother and more efficient grids and provides superior control over the distribution of grid points in the field. All types of sources support anisotropic grid stretching, which not only improves the grid economy but also provides more accurate solutions for certain aerodynamic applications. The new approach does not require a three-dimensional background grid as in the previous methods. Instead, it makes use of an efficient bounding-box auxiliary medium for storing grid parameters defined by surface sources. The new approach is less memory-intensive and more efficient computationally. The grids generated with the new method either eliminate the need for adaptive grid refinement for a certain class of problems or provide high-quality initial grids that would enhance the performance of many adaptation methods.
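
The role of an exponential growth function in point distribution can be illustrated in one dimension: spacing equals the source strength at the source and grows exponentially with distance, so each step is slightly larger than the last. The functional form and parameter names below are a sketch, not VGRID's actual implementation.

```python
import math

def spacing(d, s0, growth_rate):
    """Grid spacing at distance d from a point source of strength s0."""
    return s0 * math.exp(growth_rate * d)

def distribute_points(length, s0, growth_rate):
    """Place points along a line, each step following the local spacing."""
    pts, x = [0.0], 0.0
    while x < length:
        x += spacing(x, s0, growth_rate)
        pts.append(min(x, length))
    return pts

pts = distribute_points(10.0, s0=0.1, growth_rate=0.3)
steps = [b - a for a, b in zip(pts, pts[1:])]
print(f"{len(pts)} points; first step {steps[0]:.3f}, last full step {steps[-2]:.3f}")
```

A smooth, monotone growth law like this is what yields the "smoother and more efficient grids" claimed above: no abrupt size jumps, and far fewer points than uniform spacing at the source resolution.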

  8. Advanced Unstructured Grid Generation for Complex Aerodynamic Applications

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar

    2010-01-01

    A new approach for distribution of grid points on the surface and in the volume has been developed. In addition to the point and line sources of prior work, the new approach utilizes surface and volume sources for automatic curvature-based grid sizing and convenient point distribution in the volume. A new exponential growth function produces smoother and more efficient grids and provides superior control over the distribution of grid points in the field. All types of sources support anisotropic grid stretching, which not only improves the grid economy but also provides more accurate solutions for certain aerodynamic applications. The new approach does not require a three-dimensional background grid as in the previous methods. Instead, it makes use of an efficient bounding-box auxiliary medium for storing grid parameters defined by surface sources. The new approach is less memory-intensive and more efficient computationally. The grids generated with the new method either eliminate the need for adaptive grid refinement for a certain class of problems or provide high-quality initial grids that would enhance the performance of many adaptation methods.

  9. On the Anisotropy of the He+, C+, O+, and Ne+ Pickup Ion Velocity Distribution Function: STEREO PLASTIC Observations

    NASA Astrophysics Data System (ADS)

    Taut, A.; Drews, C.; Berger, L.; Peleikis, T.; Wimmer-Schweingruber, R. F.

    2015-12-01

    PickUp Ions (PUIs) are typically characterized by (1) their almost exclusively single charge state, (2) a highly non-thermal and anisotropic Velocity Distribution Function (VDF) [Drews et al., 2015], and (3) an extended source population of neutral atoms somewhere between the observer and the Sun. The origin of pickup ions ranges from sources only several solar radii away from the Sun, the so-called inner source of pickup ions, up to a distance of several hundred astronomical units, the local interstellar medium. Their continuous production inside the heliosphere and complex interactions with the magnetized solar wind plasma lead to the development of non-thermal, anisotropic features in both the solar wind and pickup ion velocity distribution functions. In this study, we present observations of the VDF of He+, C+, N+, O+ and Ne+ pickup ions with PLASTIC on STEREO A. We have found a PUI flux increase during perpendicular configurations of the local magnetic field that is generally linked to the existence of a so-called torus distribution [Drews et al., 2015], which is attributed to the production of PUIs close to the observer. A comparison of the PUI VDF between radial and perpendicular configurations of the local magnetic field vector is used to quantify the anisotropy of the PUI VDF and thereby enables us to estimate the mean free path for pitch-angle scattering of He, C, N, O and Ne pickup ions without the necessity of an over-simplified heliospheric model to describe the PUI phase space transport. Our results show a clear C+ torus signature at 1 AU as well as significant differences between the anisotropies of the He+ and O+ VDF. We will discuss our results in the light of recent studies about the nature of the inner source of PUIs [Berger et al., 2015] and observations of the 2D VDF of He+ [Drews et al., 2015].
Figure Caption: Velocity space diagrams of a pickup ion torus distribution as a (vx-vy)-projection (top left panel) and in the vz = 0 km/s plane (top right) are shown for a magnetic configuration in which B is almost perpendicular. The bottom two panels show the torus distribution under the influence of pitch-angle scattering (right) and adiabatic cooling (left). To illustrate the torus character of the distribution, the (vx-vy)-plane is slightly tilted in this diagram.

  10. Monte Carlo Determination of Dosimetric Parameters of a New (125)I Brachytherapy Source According to AAPM TG-43 (U1) Protocol.

    PubMed

    Baghani, Hamid Reza; Lohrabian, Vahid; Aghamiri, Mahmoud Reza; Robatjazi, Mostafa

    2016-03-01

    (125)I is one of the important sources frequently used in brachytherapy. Up to now, several different commercial models of this source type have been introduced to clinical radiation oncology applications. Recently, a new source model, IrSeed-125, has been added to this list. The aim of the present study is to determine the dosimetric parameters of this new source model based on the recommendations of the TG-43 (U1) protocol using Monte Carlo simulation. The dosimetric characteristics of IrSeed-125, including dose rate constant, radial dose function, 2D anisotropy function and 1D anisotropy function, were determined inside liquid water using the MCNPX code and compared to those of other commercially available iodine sources. The dose rate constant of this new source was found to be 0.983±0.015 cGyh-1U-1, which was in good agreement with the TLD-measured data (0.965 cGyh-1U-1). The 1D anisotropy function values at 3, 5, and 7 cm radial distances were obtained as 0.954, 0.953 and 0.959, respectively. The results of this study showed that the dosimetric characteristics of this new brachytherapy source are comparable with those of other commercially available sources. Furthermore, the simulated parameters were in accordance with the previously measured ones. Therefore, the Monte Carlo calculated dosimetric parameters could be employed to obtain the dose distribution around this new brachytherapy source based on the TG-43 (U1) protocol.
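
The quantities reported above combine via the TG-43 (U1) one-dimensional dose-rate equation, D(r) = S_K · Λ · [G(r)/G(r₀)] · g(r) · φan(r). The sketch below uses the study's dose rate constant and its 1D anisotropy values at 3, 5 and 7 cm, but the g(r) table and the near-source anisotropy entries are illustrative placeholders, not IrSeed-125 data; a point-source geometry function G(r) = 1/r² is assumed.

```python
import numpy as np

LAMBDA = 0.983          # dose rate constant from the study, cGyh-1U-1
R0 = 1.0                # TG-43 reference distance, cm

r_tab   = np.array([0.5, 1.0, 2.0, 3.0, 5.0, 7.0])           # cm
g_tab   = np.array([1.05, 1.00, 0.84, 0.68, 0.45, 0.29])     # placeholder g(r)
phi_tab = np.array([0.95, 0.95, 0.95, 0.954, 0.953, 0.959])  # phi_an(r); last
                                                             # three from study

def dose_rate(r, sk):
    """Dose rate (cGy/h) on the transverse plane at r cm, for air-kerma
    strength sk (U), with the point-source geometry function (R0/r)^2."""
    g = np.interp(r, r_tab, g_tab)
    phi = np.interp(r, r_tab, phi_tab)
    return sk * LAMBDA * (R0 / r) ** 2 * g * phi

print(f"dose rate at 1 cm per unit S_K: {dose_rate(1.0, 1.0):.3f} cGy/h")
```

A treatment planning system evaluates the same product over tabulated g(r) and F(r,θ) grids; this is why the protocol's parameters, once validated by Monte Carlo, fully determine the single-source dose distribution.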

  11. Stable source reconstruction from a finite number of measurements in the multi-frequency inverse source problem

    NASA Astrophysics Data System (ADS)

    Karamehmedović, Mirza; Kirkeby, Adrian; Knudsen, Kim

    2018-06-01

    We consider the multi-frequency inverse source problem for the scalar Helmholtz equation in the plane. The goal is to reconstruct the source term in the equation from measurements of the solution on a surface outside the support of the source. We study the problem in a certain finite dimensional setting: from measurements made at a finite set of frequencies we uniquely determine and reconstruct sources in a subspace spanned by finitely many Fourier–Bessel functions. Further, we obtain a constructive criterion for identifying a minimal set of measurement frequencies sufficient for reconstruction, and under an additional, mild assumption, the reconstruction method is shown to be stable. Our analysis is based on a singular value decomposition of the source-to-measurement forward operators and the distribution of positive zeros of the Bessel functions of the first kind. The reconstruction method is implemented numerically and our theoretical findings are supported by numerical experiments.
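
The finite-dimensional subspace idea can be illustrated in a radial-only setting: expand a source in Fourier–Bessel functions J₀(j₀,ₖ r), which are orthogonal on the unit disc with weight r, and recover the coefficients by quadrature. This sketches only the basis and its orthogonality, not the paper's measurement operator or stability analysis; the mode count and test coefficients are arbitrary.

```python
import numpy as np
from scipy.special import j0, j1, jn_zeros

K = 8
zeros = jn_zeros(0, K)               # first K positive zeros j_{0,k} of J0
r = np.linspace(0.0, 1.0, 4001)

def integrate(y, x):
    """Composite trapezoid rule."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def fb_coeffs(f_vals):
    """Project f onto the first K radial Fourier-Bessel modes."""
    coeffs = []
    for jk in zeros:
        basis = j0(jk * r)
        norm = 0.5 * j1(jk) ** 2     # equals the integral of r*J0(jk r)^2 on [0,1]
        coeffs.append(integrate(r * f_vals * basis, r) / norm)
    return np.array(coeffs)

# A source with an exact K-term expansion is recovered (near-)exactly.
true_c = np.array([1.0, -0.5, 0.25, 0.0, 0.0, 0.0, 0.0, 0.0])
f = sum(c * j0(jk * r) for c, jk in zip(true_c, zeros))
est_c = fb_coeffs(f)
print("max coefficient error:", float(np.abs(est_c - true_c).max()))
```

In the inverse problem the coefficients are of course not read off from the source itself but from boundary measurements at several frequencies; the point here is only that finitely many Fourier–Bessel modes pin down the sources in the subspace.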

  12. From dust to light: a study of star formation in NGC2264

    NASA Astrophysics Data System (ADS)

    Teixeira, P. S.

    2008-10-01

    The goal of this dissertation is to characterize the star formation history of the young cluster NGC2264 using the unique observational capabilities of the Spitzer Space Telescope. The motivation to conduct this study stems from the fact that most stars are formed within clusters, so the formation and evolution of the latter will effect the stellar mass distribution in the field. Detailed observational studies of young stellar clusters are therefore crucial to provide necessary constraints for theoretical models of cloud and cluster formation and evolution. This study also addresses the evolution of circumstellar disks in NGC2264; empirical knowledge of protoplanetary disk evolution is required for the understanding of how planetary systems such as our own form. The first result obtained from this study was both completely new and unexpected. A dense region within NGC2264 was found to be teeming with bright 24 μm Class I protostars; these sources are embedded within dense submillimeter cores and are spatially distributed along dense filamentary fingers of gas and dust that radially converge on a B-type binary Class I source. This cluster of protostars was baptized the "Spokes cluster" and its analysis provided further insight into the role of thermal support during core formation, collapse and fragmentation. The nearest neighbor projected separation distribution of these Class I sources shows a characteristic spacing that is similar to the Jeans length for the region, indicating that the dusty filaments may have undergone thermal fragmentation. The submillimeter cores of the Spokes cluster were observed at 230GHz using the SubMillimeter Array (SMA) and the resulting high resolution (~1.3") continuum observations revealed a dense grouping of 7 Class 0 sources embedded within a particular core, D-MM1 (~20"x20"). The compact sources have masses ranging between 0.4M and 1.2M, and radii of ~600AU. 
The mean separation of the Class 0 sources within D-MM1 is considerably smaller than the characteristic spacing between the Class I sources in the larger Spokes cluster and is consistent with hierarchical thermal fragmentation of the dense molecular gas in this region. The results obtained by the study of the Spokes cluster show that the spatial substructuring of a cluster or subcluster is correlated with age, i.e., groupings of very young protostars have clearly more concentrated and substructured spatial distributions. The Spokes cluster could thus be one of several building blocks of NGC2264, and will likely expand and disperse its members through the surrounding region, adding to the rest of NGC2264's stellar population.To further explore this scenario, I identified Pre-Main Sequence (PMS) disk bearing sources in the whole region of NGC2264, as surveyed by InfraRed Array Camera (IRAC) analyzing both their spatial distributions and ages. Of the 1404 sources detected in all four IRAC bands, 116 sources were found to have anemic IRAC disks and 217 sources were found to have thick IRAC disks; the disk fraction was calculated to be 37.5%±6.3% and found to be a function of spectral type, increasing for later type sources. I identified 4 candidate sources with transition disks (disks with inner holes), as well as 6 sources with anemic inner disks and thick outer disks that could be the immediate precursors of transition disks. This is a relevant result for it suggests planet formation may be occurring in the inner disk at very early ages. I found that the spatial distribution of the disk-bearing sources was a function of both disk type and amount of reddening. This spatial analysis enabled the identification of three groups of sources, namely, (i) embedded (AV> 3 magnitudes) sources with thick disks, (ii) unembedded sources with thick disks, and (iii) sources with anemic disks. 
The first group was found to have a median age of 1 Myr and its spatial distribution is highly concentrated and substructured. The second group, (ii), has a median age of 2 Myr and its spatial distribution is less concentrated and substructured than group (i), but more than the group of sources with anemic disks - the spatial distribution of this third group (age ~ 2 Myr) is not substructured and is more distributed, showing no particular peak or concentration. The star formation history of NGC2264 appears to be as follows: the northern region appears to have undergone the first epoch or episode of star formation, while the second epoch is currently occurring in the center (Spokes cluster) and south (near Allen's source). Status: RO

  13. Magnetoacoustic Tomography with Magnetic Induction: Bioimpedance reconstruction through vector source imaging

    PubMed Central

    Mariappan, Leo; He, Bin

    2013-01-01

    Magnetoacoustic tomography with magnetic induction (MAT-MI) is a technique proposed to reconstruct the conductivity distribution in biological tissue at ultrasound imaging resolution. A magnetic pulse is used to generate eddy currents in the object, which, in the presence of a static magnetic field, induce Lorentz-force-based acoustic waves in the medium. These time-resolved acoustic waves are collected with ultrasound transducers and, in the present work, are used to reconstruct the current source which gives rise to the MAT-MI acoustic signal using vector imaging point spread functions. The reconstructed source is then used to estimate the conductivity distribution of the object. Computer simulations and phantom experiments are performed to demonstrate conductivity reconstruction through vector source imaging in a circular scanning geometry with a limited-bandwidth, finite-size piston transducer. The results demonstrate that the MAT-MI approach is capable of conductivity reconstruction in a physical setting. PMID:23322761

  14. Kinetic modeling of particle dynamics in H- negative ion sources (invited)

    NASA Astrophysics Data System (ADS)

    Hatayama, A.; Shibata, T.; Nishioka, S.; Ohta, M.; Yasumoto, M.; Nishida, K.; Yamamoto, T.; Miyamoto, K.; Fukano, A.; Mizuno, T.

    2014-02-01

    Progress in the kinetic modeling of particle dynamics in H- negative ion source plasmas and comparisons with experiments are reviewed and discussed, together with some new results. The main focus is placed on the following two topics, which are important for the research and development of large negative ion sources and high-power H- ion beams: (i) effects of non-equilibrium features of the EEDF (electron energy distribution function) on H- production, and (ii) the extraction physics of H- ions and beam optics.

  15. Solving the multi-frequency electromagnetic inverse source problem by the Fourier method

    NASA Astrophysics Data System (ADS)

    Wang, Guan; Ma, Fuming; Guo, Yukun; Li, Jingzhi

    2018-07-01

    This work is concerned with an inverse problem of identifying the current source distribution of the time-harmonic Maxwell's equations from multi-frequency measurements. Motivated by the Fourier method for the scalar Helmholtz equation and the polarization vector decomposition, we propose a novel method for determining the source function in the full vector Maxwell's system. Rigorous mathematical justifications of the method are given and numerical examples are provided to demonstrate the feasibility and effectiveness of the method.
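The Fourier method described above can be illustrated with a much-simplified 1D scalar analogy (not the full vector Maxwell system): each frequency "measurement" supplies one Fourier coefficient of the source, and summing the truncated series recovers it. The test source and all names below are illustrative.

```python
import numpy as np

# 1D scalar analogy of the multi-frequency Fourier method: one
# "measurement" per frequency k gives one Fourier coefficient of the
# source; Fourier synthesis then reconstructs the source function.
def fourier_coefficient(f_vals, x, k):
    # On a uniform periodic grid the mean is an exact quadrature for
    # c_k = (1/2pi) * integral f(x) e^{-ikx} dx (for band-limited f).
    return np.mean(f_vals * np.exp(-1j * k * x))

def reconstruct(coeffs, x):
    out = np.zeros_like(x, dtype=complex)
    for k, c in coeffs.items():
        out += c * np.exp(1j * k * x)
    return out.real

x = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
source = 1.0 + 0.5 * np.cos(3 * x) - 0.2 * np.sin(5 * x)  # "true" source

coeffs = {k: fourier_coefficient(source, x, k) for k in range(-8, 9)}
recon = reconstruct(coeffs, x)
print(np.max(np.abs(recon - source)))  # tiny truncation/quadrature error
```

The real inverse problem replaces the quadrature step by measured boundary data at each frequency, but the synthesis step has the same structure.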

  16. Self-constrained inversion of potential fields

    NASA Astrophysics Data System (ADS)

    Paoletti, V.; Ialongo, S.; Florio, G.; Fedi, M.; Cella, F.

    2013-11-01

    We present a potential-field-constrained inversion procedure based on a priori information derived exclusively from the analysis of the gravity and magnetic data (self-constrained inversion). The procedure is designed to be applied to underdetermined problems and involves scenarios where the source distribution can be assumed to be of simple character. To set up effective constraints, we first estimate through the analysis of the gravity or magnetic field some or all of the following source parameters: the source depth-to-the-top, the structural index, the horizontal position of the source body edges and their dip. The second step is incorporating the information related to these constraints in the objective function as depth and spatial weighting functions. We show, through 2-D and 3-D synthetic and real data examples, that potential field-based constraints, for example, structural index, source boundaries and others, are usually enough to obtain substantial improvement in the density and magnetization models.
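The depth-weighting constraint mentioned above is commonly implemented with a weight of the form w(z) = (z + z0)^(-beta/2), with beta near 2 for gravity data, chosen to counteract the natural decay of the potential-field kernel so the inversion does not pile all sources at the surface. A minimal sketch with illustrative values (not the paper's data):

```python
import numpy as np

# Depth weighting for potential-field inversion: w(z) = (z + z0)^(-beta/2).
# Dividing the sensitivity by w^2 flattens the ~1/(z + z0)^2 decay of a
# gravity point-mass kernel, so all depths are treated comparably.
# z0, beta and the depth grid are illustrative choices.
def depth_weight(z, z0=1.0, beta=2.0):
    return (z + z0) ** (-beta / 2.0)

z = np.linspace(0.0, 50.0, 6)        # cell depths (arbitrary units)
w = depth_weight(z)

# surface observation of a point mass at depth z: kernel ~ 1/(z + z0)^2
kernel = 1.0 / (z + 1.0) ** 2

print(kernel / w**2)                 # flat: the weighting exactly compensates
```

In a full inversion these weights enter the objective function as a diagonal model-weighting matrix, alongside the spatial weights built from the estimated source edges and dips.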

  17. FieldTrip: Open Source Software for Advanced Analysis of MEG, EEG, and Invasive Electrophysiological Data

    PubMed Central

    Oostenveld, Robert; Fries, Pascal; Maris, Eric; Schoffelen, Jan-Mathijs

    2011-01-01

    This paper describes FieldTrip, an open source software package that we developed for the analysis of MEG, EEG, and other electrophysiological data. The software is implemented as a MATLAB toolbox and includes a complete set of consistent and user-friendly high-level functions that allow experimental neuroscientists to analyze experimental data. It includes algorithms for simple and advanced analysis, such as time-frequency analysis using multitapers, source reconstruction using dipoles, distributed sources and beamformers, connectivity analysis, and nonparametric statistical permutation tests at the channel and source level. The implementation as a toolbox allows the user to perform elaborate and structured analyses of large data sets using the MATLAB command line and batch scripting. Furthermore, users and developers can easily extend the functionality and implement new algorithms. The modular design facilitates reuse in other software packages. PMID:21253357
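The multitaper time-frequency analysis mentioned above rests on averaging spectra of several orthogonally tapered copies of the signal. A minimal sketch using SciPy's Slepian tapers (this is not FieldTrip's MATLAB API; the signal and parameters are made up):

```python
import numpy as np
from scipy.signal.windows import dpss

# Minimal multitaper power-spectrum estimate: taper the signal with K
# orthogonal Slepian (DPSS) tapers and average the resulting eigenspectra.
fs = 1000.0                            # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 60 * t) + 0.5 * rng.standard_normal(t.size)

NW, K = 4, 7                           # time-bandwidth product, taper count
tapers = dpss(x.size, NW, K)           # shape (K, N)

spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
psd = spectra.mean(axis=0)             # averaging reduces estimator variance
freqs = np.fft.rfftfreq(x.size, 1 / fs)
print(freqs[np.argmax(psd)])           # peak near the 60 Hz component
```

The variance reduction from averaging K eigenspectra is what makes multitapers attractive for single-trial MEG/EEG power estimates.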

  18. Three-energy focusing Laue monochromator for the diamond light source x-ray pair distribution function beamline I15-1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sutter, John P., E-mail: john.sutter@diamond.ac.uk; Chater, Philip A.; Hillman, Michael R.

    2016-07-27

    The I15-1 beamline, the new side station to I15 at the Diamond Light Source, will be dedicated to the collection of atomic pair distribution function data. A Laue monochromator will be used consisting of three silicon crystals diffracting X-rays at a common Bragg angle of 2.83°. The crystals use the (1 1 1), (2 2 0), and (3 1 1) planes to select 40, 65, and 76 keV X-rays, respectively, and will be bent meridionally to horizontally focus the selected X-rays onto the sample. All crystals will be cut to the same optimized asymmetry angle in order to eliminate image broadening from the crystal thickness. Finite element calculations show that the thermal distortion of the crystals will affect the image size and bandpass.
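The quoted energies follow directly from Bragg's law, E = hc / (2 d sin θ), using the standard silicon lattice constant; a quick numerical check:

```python
import numpy as np

# Bragg's law check of the monochromator energies: at a common Bragg
# angle of 2.83 deg, Si(111), Si(220) and Si(311) select roughly 40, 65
# and 76 keV. Lattice constant and hc are standard reference values.
a = 5.4309            # Si lattice constant, Angstrom
hc = 12.3984          # keV * Angstrom
theta = np.deg2rad(2.83)

def bragg_energy(h, k, l):
    d = a / np.sqrt(h**2 + k**2 + l**2)    # cubic d-spacing
    return hc / (2 * d * np.sin(theta))    # E = hc / lambda

for hkl in [(1, 1, 1), (2, 2, 0), (3, 1, 1)]:
    print(hkl, round(bragg_energy(*hkl), 1), "keV")
```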

  19. Numerical evaluation of longitudinal motions of Wigley hulls advancing in waves by using Bessho form translating-pulsating source Green'S function

    NASA Astrophysics Data System (ADS)

    Xiao, Wenbin; Dong, Wencai

    2016-06-01

    In the framework of 3D potential flow theory, the Bessho form translating-pulsating source Green's function in the frequency domain is chosen as the integral kernel in this study, and a hybrid source-and-dipole distribution model of the boundary element method is applied to directly solve the velocity potential for a ship advancing in regular waves. Numerical characteristics of the Green's function show that the contribution of the local-flow components to the velocity potential is concentrated near the source point, while the wave component dominates the magnitude of the velocity potential in the far field. Two kinds of mathematical models, with or without the local-flow components taken into account, are adopted to numerically calculate the longitudinal motions of Wigley hulls, which demonstrates the applicability of the translating-pulsating source Green's function method for various ship forms. In addition, a mesh analysis of the discrete surface is carried out from the perspective of ship-form characteristics. The study shows that the longitudinal motion results from the simplified model are somewhat greater than the experimental data in the resonant zone, and the model can be used as an effective tool to predict ship seakeeping properties. However, the translating-pulsating source Green's function method is only appropriate for the qualitative analysis of motion response in waves if the ship's geometrical shape fails to satisfy the slender-body assumption.

  20. Imaging the complex geometry of a magma reservoir using FEM-based linear inverse modeling of InSAR data: application to Rabaul Caldera, Papua New Guinea

    NASA Astrophysics Data System (ADS)

    Ronchin, Erika; Masterlark, Timothy; Dawson, John; Saunders, Steve; Martì Molist, Joan

    2017-06-01

    We test an innovative inversion scheme using Green's functions from an array of pressure sources embedded in finite-element method (FEM) models to image, without assuming an a-priori geometry, the composite and complex shape of a volcano deformation source. We invert interferometric synthetic aperture radar (InSAR) data to estimate the pressurization and shape of the magma reservoir of Rabaul caldera, Papua New Guinea. The results image the extended shallow magmatic system responsible for a broad and long-term subsidence of the caldera between 2007 February and 2010 December. Elastic FEM solutions are integrated into the regularized linear inversion of InSAR data of volcano surface displacements in order to obtain a 3-D image of the source of deformation. The Green's function matrix is constructed from a library of forward line-of-sight displacement solutions for a grid of cubic elementary deformation sources. Each source is sequentially generated by removing the corresponding cubic elements from a common meshed domain and simulating the injection of a fluid mass flux into the cavity, which results in a pressurization and volumetric change of the fluid-filled cavity. The use of a single mesh for the generation of all FEM models avoids the computationally expensive process of non-linear inversion and remeshing a variable geometry domain. Without assuming an a-priori source geometry other than the configuration of the 3-D grid that generates the library of Green's functions, the geodetic data dictate the geometry of the magma reservoir as a 3-D distribution of pressure (or flux of magma) within the source array. The inversion of InSAR data of Rabaul caldera shows a distribution of interconnected sources forming an amorphous, shallow magmatic system elongated under two opposite sides of the caldera. 
The marginal areas at the sides of the imaged magmatic system are the possible feeding reservoirs of the ongoing Tavurvur volcano eruption of andesitic products on the east side and of the past Vulcan volcano eruptions of more evolved materials on the west side. The interconnection and spatial distributions of sources correspond to the petrography of the volcanic products described in the literature and to the dynamics of the single and twin eruptions that characterize the caldera. The ability to image the complex geometry of deformation sources in both space and time can improve our ability to monitor active volcanoes, widen our understanding of the dynamics of active volcanic systems and improve the predictions of eruptions.
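The regularized linear inversion step described above amounts to damped least squares on a Green's function matrix G that maps elementary source strengths to surface displacements. A minimal synthetic sketch (G, the true source, the noise level, and the damping weight are stand-ins, not the Rabaul data):

```python
import numpy as np

# Regularized (zeroth-order Tikhonov) linear inversion: given a library
# of Green's functions G and observed displacements d, recover the
# distribution of source strengths m by damped least squares.
rng = np.random.default_rng(1)
n_data, n_src = 60, 20
G = rng.standard_normal((n_data, n_src))   # synthetic Green's function matrix
m_true = np.zeros(n_src)
m_true[5:9] = 1.0                          # a compact pressurized region
d = G @ m_true + 0.01 * rng.standard_normal(n_data)

lam = 0.1                                  # regularization weight
m_est = np.linalg.solve(G.T @ G + lam * np.eye(n_src), G.T @ d)
print(np.abs(m_est - m_true).max())        # small recovery error
```

Because the geometry of the source grid is fixed, only this linear solve is repeated, which is what lets the scheme avoid nonlinear inversion and remeshing.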

  1. Simultaneous source and attenuation reconstruction in SPECT using ballistic and single scattering data

    NASA Astrophysics Data System (ADS)

    Courdurier, M.; Monard, F.; Osses, A.; Romero, F.

    2015-09-01

    In medical single-photon emission computed tomography (SPECT) imaging, we seek to simultaneously obtain the internal radioactive sources and the attenuation map using not only ballistic measurements but also first-order scattering measurements and assuming a very specific scattering regime. The problem is modeled using the radiative transfer equation by means of an explicit non-linear operator that gives the ballistic and scattering measurements as a function of the radioactive source and attenuation distributions. First, by differentiating this non-linear operator we obtain a linearized inverse problem. Then, under regularity hypothesis for the source distribution and attenuation map and considering small attenuations, we rigorously prove that the linear operator is invertible and we compute its inverse explicitly. This allows proof of local uniqueness for the non-linear inverse problem. Finally, using the previous inversion result for the linear operator, we propose a new type of iterative algorithm for simultaneous source and attenuation recovery for SPECT based on the Neumann series and a Newton-Raphson algorithm.
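The Neumann-series idea behind the proposed algorithm can be sketched generically: for a linear problem A x = b with ||I - A|| < 1, the series x = b + (I - A) b + (I - A)^2 b + ... converges to the solution. This is a generic sketch, not the paper's SPECT operator:

```python
import numpy as np

# Neumann-series solution of A x = b: sum the powers of M = I - A applied
# to b. Converges when the spectral radius of M is below one.
def neumann_solve(A, b, n_terms=60):
    M = np.eye(A.shape[0]) - A
    x, term = np.zeros_like(b), b.copy()
    for _ in range(n_terms):
        x += term            # x = b + M b + M^2 b + ...
        term = M @ term
    return x

A = np.array([[1.0, 0.2], [0.1, 0.9]])   # close to the identity
b = np.array([1.0, 2.0])
x = neumann_solve(A, b)
print(np.allclose(A @ x, b))
```

In the paper's setting the analogue of A is the linearized measurement operator, and the small-attenuation hypothesis plays the role of the ||I - A|| < 1 condition.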

  2. Skyshine study for next generation of fusion devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gohar, Y.; Yang, S.

    1987-02-01

    A shielding analysis for the next generation of fusion devices (ETR/INTOR) was performed to study the dose equivalent outside the reactor building during operation, including the contribution from neutrons and photons scattered back by collisions with air nuclei (skyshine component). Two different three-dimensional geometrical models for a tokamak fusion reactor based on INTOR design parameters were developed for this study. In the first geometrical model, the reactor geometry and the spatial distribution of the deuterium-tritium neutron source were simplified for a parametric survey. The second geometrical model employed an explicit representation of the toroidal geometry of the reactor chamber and the spatial distribution of the neutron source. The MCNP general Monte Carlo code for neutron and photon transport was used to perform all the calculations. The energy distribution of the neutron source was used explicitly in the calculations with ENDF/B-V data. The dose equivalent results were analyzed as a function of the concrete roof thickness of the reactor building and the location outside the reactor building.

  3. The innovative concept of three-dimensional hybrid receptor modeling

    NASA Astrophysics Data System (ADS)

    Stojić, A.; Stanišić Stojić, S.

    2017-09-01

    The aim of this study was to improve the current understanding of air pollution transport processes at the regional and long-range scales. For this purpose, three-dimensional (3D) potential source contribution function and concentration weighted trajectory models, as well as a new hybrid receptor model, the concentration weighted boundary layer (CWBL), which uses a two-dimensional grid and the planetary boundary layer height as a frame of reference, are presented. The refined approach to hybrid receptor modeling has two advantages. First, it considers whether each trajectory endpoint meets an inclusion criterion based on planetary boundary layer height, which is expected to provide a more realistic representation of the spatial distribution of emission sources and pollutant transport pathways. Second, it includes pollutant time series preprocessing to make hybrid receptor models more applicable to suburban and urban locations. The 3D hybrid receptor models presented herein are designed to identify the altitude distribution of potential sources, whereas the CWBL can be used for analyzing the vertical distribution of pollutant concentrations along the transport pathway.
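A 2D potential source contribution function of the kind extended to 3D above is a ratio of endpoint counts per grid cell: PSCF_ij = m_ij / n_ij, where n_ij counts all back-trajectory endpoints in cell (i, j) and m_ij counts endpoints of trajectories arriving during high-concentration episodes. A sketch with synthetic endpoints (grid and threshold are illustrative):

```python
import numpy as np

# PSCF on a lon/lat grid: cells visited mostly by "polluted" trajectories
# get values near 1, cells visited only by clean trajectories get 0.
rng = np.random.default_rng(2)
lon = rng.uniform(0, 10, 5000)          # trajectory endpoint longitudes
lat = rng.uniform(40, 50, 5000)         # trajectory endpoint latitudes
polluted = lon > 7                      # pretend high-concentration arrivals
                                        # all come from the eastern sector

bins = (np.linspace(0, 10, 11), np.linspace(40, 50, 11))
n_ij, _, _ = np.histogram2d(lon, lat, bins=bins)
m_ij, _, _ = np.histogram2d(lon[polluted], lat[polluted], bins=bins)
pscf = np.divide(m_ij, n_ij, out=np.zeros_like(m_ij), where=n_ij > 0)
print(pscf[8, :].mean(), pscf[2, :].mean())   # eastern cells high, western ~0
```

The 3D variants described above add an altitude axis to this grid, and the CWBL's refinement is to drop endpoints that fail the boundary-layer-height criterion before the counts are formed.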

  4. 3D ion velocity distribution function measurement in an electric thruster using laser induced fluorescence tomography

    NASA Astrophysics Data System (ADS)

    Elias, P. Q.; Jarrige, J.; Cucchetti, E.; Cannat, F.; Packan, D.

    2017-09-01

    Measuring the full ion velocity distribution function (IVDF) by non-intrusive techniques can improve our understanding of the ionization processes and beam dynamics at work in electric thrusters. In this paper, a Laser-Induced Fluorescence (LIF) tomographic reconstruction technique is applied to the measurement of the IVDF in the plume of a miniature Hall effect thruster. A setup is developed to move the laser axis along two rotation axes around the measurement volume. The fluorescence spectra taken from different viewing angles are combined using a tomographic reconstruction algorithm to build the complete 3D (in phase space) time-averaged distribution function. For the first time, this technique is used in the plume of a miniature Hall effect thruster to measure the full distribution function of the xenon ions. Two examples of reconstructions are provided, in front of the thruster nose-cone and in front of the anode channel. The reconstruction reveals the features of the ion beam, in particular on the thruster axis where a toroidal distribution function is observed. These findings are consistent with the thruster shape and operation. This technique, which can be used with other LIF schemes, could be helpful in revealing the details of the ion production regions and the beam dynamics. Using a more powerful laser source, the current implementation of the technique could be improved to reduce the measurement time and also to reconstruct the temporal evolution of the distribution function.

  5. Electromagnetic cyclotron-loss-cone instability associated with weakly relativistic electrons

    NASA Technical Reports Server (NTRS)

    Wong, H. K.; Wu, C. S.; Ke, F. J.; Schneider, R. S.; Ziebell, L. F.

    1982-01-01

    The amplification of fast extraordinary mode waves at frequencies very close to the electron cyclotron frequency, due to the presence of a population of energetic electrons with a loss-cone type distribution, is studied. Low-energy background electrons are included in the analysis. Two types of loss-cone distribution functions are considered, and it is found that the maximum growth rates for both distribution functions are of the same order of magnitude. When the thermal effects of the energetic electrons are included in the dispersion equation, the real frequencies of the waves are lower than those obtained using the cold plasma approximation. This effect tends to enhance the growth rate. Also considered is an idealized case with a parallel electric field that modifies the distribution function of the trapped energetic electrons. It is assumed that the parallel electric field can remove the low-energy background electrons from the source region of radiation. Both these effects increase the growth rate.

  6. Modeling Magnetotail Ion Distributions with Global Magnetohydrodynamic and Ion Trajectory Calculations

    NASA Technical Reports Server (NTRS)

    El-Alaoui, M.; Ashour-Abdalla, M.; Raeder, J.; Peroomian, V.; Frank, L. A.; Paterson, W. R.; Bosqued, J. M.

    1998-01-01

    On February 9, 1995, the Comprehensive Plasma Instrumentation (CPI) on the Geotail spacecraft observed a complex, structured ion distribution function near the magnetotail midplane at x ≈ -30 R_E. On this same day the Wind spacecraft observed a quiet solar wind and an interplanetary magnetic field (IMF) that was northward for more than five hours, and an IMF B_y component with a magnitude comparable to that of the IMF B_z component. In this study, we determined the sources of the ions in this distribution function by following approximately 90,000 ion trajectories backward in time, using the time-dependent electric and magnetic fields obtained from a global MHD simulation. The Wind observations were used as input for the MHD model. The ion distribution function observed by Geotail at 1347 UT was found to consist primarily of particles from the dawn side low latitude boundary layer (LLBL) and from the dusk side LLBL; fewer than 2% of the particles originated in the ionosphere.
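The backward-in-time trajectory tracing used here can be sketched for a single test particle in uniform fields (the fields, units, and integrator below are illustrative, not the study's time-dependent MHD fields):

```python
import numpy as np

# Backward-in-time test-particle tracing: integrate the Lorentz force
# dv/dt = (q/m)(E + v x B) with a negative time step, starting from the
# observation point, to find where the particle came from.
def trace_back(x, v, E, B, qm=1.0, dt=-1e-3, steps=4000):
    for _ in range(steps):                  # midpoint (RK2) integrator
        a1 = qm * (E + np.cross(v, B))
        vm = v + 0.5 * dt * a1
        am = qm * (E + np.cross(vm, B))
        x = x + dt * vm
        v = v + dt * am
    return x, v

B = np.array([0.0, 0.0, 1.0])               # uniform B along z
E = np.zeros(3)                             # no electric field
x0, v0 = np.zeros(3), np.array([1.0, 0.0, 0.0])
x, v = trace_back(x0, v0, E, B)
print(np.linalg.norm(v))                    # speed conserved: pure gyration
```

The study does this for ~90,000 particles in fields interpolated from the global MHD simulation, then labels each by the region (LLBL, ionosphere, ...) where its backward trajectory originated.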

  7. A revised dosimetric characterization of the model S700 electronic brachytherapy source containing an anode-centering plastic insert and other components not included in the 2006 model.

    PubMed

    Hiatt, Jessica R; Davis, Stephen D; Rivard, Mark J

    2015-06-01

    The model S700 Axxent electronic brachytherapy source by Xoft, Inc., was characterized by Rivard et al. in 2006. Since then, the source design was modified to include a new insert at the source tip. Current study objectives were to establish an accurate source model for simulation purposes, dosimetrically characterize the new source and obtain its TG-43 brachytherapy dosimetry parameters, and determine dose differences between the original simulation model and the current model S700 source design. Design information from measurements of dissected model S700 sources and from vendor-supplied CAD drawings was used to aid establishment of an updated Monte Carlo source model, which included the complex-shaped plastic source-centering insert intended to promote water flow for cooling the source anode. These data were used to create a model for subsequent radiation transport simulations in a water phantom. Compared to the 2006 simulation geometry, the influence of volume averaging close to the source was substantially reduced. A track-length estimator was used to evaluate collision kerma as a function of radial distance and polar angle for determination of TG-43 dosimetry parameters. Results for the 50 kV source were determined every 0.1 cm from 0.3 to 15 cm and every 1° from 0° to 180°. Photon spectra in water with 0.1 keV resolution were also obtained from 0.5 to 15 cm and polar angles from 0° to 165°. Simulations were run for 10^10 histories, resulting in statistical uncertainties on the transverse plane of 0.04% at r = 1 cm and 0.06% at r = 5 cm. The dose-rate distribution ratio for the model S700 source as compared to the 2006 model exceeded unity by more than 5% for roughly one quarter of the solid angle surrounding the source, i.e., θ ≥ 120°. The radial dose function diminished in a similar manner as for an ^125I seed, with values of 1.434, 0.636, 0.283, and 0.0975 at 0.5, 2, 5, and 10 cm, respectively.
    The radial dose function ratio between the current and the 2006 model had a minimum of 0.980 at 0.4 cm, close to the source sheath, and approached 1.014 at large distances. 2D anisotropy function ratios were close to unity for 50° ≤ θ ≤ 110°, but deviated by more than 5% for θ < 40° at close distances to the sheath and by more than 15% for θ > 140°, even at large distances. The photon energy fluence of the updated model as compared to the 2006 model showed a decrease in output with increasing distance; this effect was pronounced at the lowest energies. A decrease in photon fluence with increasing polar angle was also observed and was attributed to the silver epoxy component. Changes in source design influenced the overall dose rate and distribution by more than 2% in several regions. This discrepancy is greater than the dose calculation acceptance criteria recommended in the AAPM TG-56 report. The effect of the design change on the TG-43 parameters would likely not result in dose differences outside of patient applicators. Adoption of this new dataset is suggested for accurate depiction of model S700 source dose distributions.
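The quoted radial dose function values plug into the TG-43 dose-rate formalism; a sketch in the point-source approximation, where the air-kerma strength, dose-rate constant, and anisotropy factor are placeholder numbers, not vendor or paper data:

```python
import numpy as np

# TG-43 dose rate in the point-source approximation:
#   Ddot(r) = S_K * Lambda * (r0 / r)^2 * g(r) * phi_an
# g(r) is tabulated from the values quoted in the abstract; S_K, Lambda
# and phi_an below are placeholders for illustration only.
r_tab = np.array([0.5, 2.0, 5.0, 10.0])       # cm
g_tab = np.array([1.434, 0.636, 0.283, 0.0975])

def dose_rate(r, S_K=1000.0, Lam=1.0, r0=1.0, phi_an=1.0):
    g = np.interp(r, r_tab, g_tab)            # linear interpolation for brevity
    return S_K * Lam * (r0 / r) ** 2 * g * phi_an

print(dose_rate(2.0))   # 1000 * (1/2)^2 * 0.636 = 159.0
```

A full implementation would use the line-source geometry function G(r, θ) and the 2D anisotropy function F(r, θ) rather than a scalar anisotropy factor.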

  8. A revised dosimetric characterization of the model S700 electronic brachytherapy source containing an anode-centering plastic insert and other components not included in the 2006 model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hiatt, Jessica R.; Davis, Stephen D.; Rivard, Mark J., E-mail: mark.j.rivard@gmail.com

    2015-06-15

    Purpose: The model S700 Axxent electronic brachytherapy source by Xoft, Inc., was characterized by Rivard et al. in 2006. Since then, the source design was modified to include a new insert at the source tip. Current study objectives were to establish an accurate source model for simulation purposes, dosimetrically characterize the new source and obtain its TG-43 brachytherapy dosimetry parameters, and determine dose differences between the original simulation model and the current model S700 source design. Methods: Design information from measurements of dissected model S700 sources and from vendor-supplied CAD drawings was used to aid establishment of an updated Monte Carlo source model, which included the complex-shaped plastic source-centering insert intended to promote water flow for cooling the source anode. These data were used to create a model for subsequent radiation transport simulations in a water phantom. Compared to the 2006 simulation geometry, the influence of volume averaging close to the source was substantially reduced. A track-length estimator was used to evaluate collision kerma as a function of radial distance and polar angle for determination of TG-43 dosimetry parameters. Results for the 50 kV source were determined every 0.1 cm from 0.3 to 15 cm and every 1° from 0° to 180°. Photon spectra in water with 0.1 keV resolution were also obtained from 0.5 to 15 cm and polar angles from 0° to 165°. Simulations were run for 10^10 histories, resulting in statistical uncertainties on the transverse plane of 0.04% at r = 1 cm and 0.06% at r = 5 cm. Results: The dose-rate distribution ratio for the model S700 source as compared to the 2006 model exceeded unity by more than 5% for roughly one quarter of the solid angle surrounding the source, i.e., θ ≥ 120°. The radial dose function diminished in a similar manner as for an ^125I seed, with values of 1.434, 0.636, 0.283, and 0.0975 at 0.5, 2, 5, and 10 cm, respectively.
    The radial dose function ratio between the current and the 2006 model had a minimum of 0.980 at 0.4 cm, close to the source sheath, and approached 1.014 at large distances. 2D anisotropy function ratios were close to unity for 50° ≤ θ ≤ 110°, but deviated by more than 5% for θ < 40° at close distances to the sheath and by more than 15% for θ > 140°, even at large distances. The photon energy fluence of the updated model as compared to the 2006 model showed a decrease in output with increasing distance; this effect was pronounced at the lowest energies. A decrease in photon fluence with increasing polar angle was also observed and was attributed to the silver epoxy component. Conclusions: Changes in source design influenced the overall dose rate and distribution by more than 2% in several regions. This discrepancy is greater than the dose calculation acceptance criteria recommended in the AAPM TG-56 report. The effect of the design change on the TG-43 parameters would likely not result in dose differences outside of patient applicators. Adoption of this new dataset is suggested for accurate depiction of model S700 source dose distributions.

  9. Unveiling the Gamma-Ray Source Count Distribution Below the Fermi Detection Limit with Photon Statistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zechlin, Hannes-S.; Cuoco, Alessandro; Donato, Fiorenza

    The source-count distribution of gamma-ray sources as a function of flux, dN/dS, is one of the main quantities characterizing gamma-ray source populations. In this paper, we employ statistical properties of the Fermi Large Area Telescope (LAT) photon counts map to measure the composition of the extragalactic gamma-ray sky at high latitudes (|b| ≥ 30°) between 1 and 10 GeV. We present a new method, generalizing the use of standard pixel-count statistics, to decompose the total observed gamma-ray emission into (a) point-source contributions, (b) the Galactic foreground contribution, and (c) a truly diffuse isotropic background contribution. Using the 6 yr Fermi-LAT data set (P7REP), we show that the dN/dS distribution in the regime of so far undetected point sources can be consistently described with a power law with an index between 1.9 and 2.0. We measure dN/dS down to an integral flux of ~2 × 10^-11 cm^-2 s^-1, improving beyond the 3FGL catalog detection limit by about one order of magnitude. The overall dN/dS distribution is consistent with a broken power law, with a break at 2.1 (+1.0, -1.3) × 10^-8 cm^-2 s^-1. The power-law index n_1 = 3.1 (+0.7, -0.5) for bright sources above the break hardens to n_2 = 1.97 ± 0.03 for fainter sources below the break. A possible second break of the dN/dS distribution is constrained to lie at fluxes below 6.4 × 10^-11 cm^-2 s^-1 at the 95% confidence level. Finally, the high-latitude gamma-ray sky between 1 and 10 GeV is shown to be composed of ~25% point sources, ~69.3% diffuse Galactic foreground emission, and ~6% isotropic diffuse background.
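The fitted broken power law can be written down directly from the quoted parameters (break flux 2.1e-8 cm^-2 s^-1, slope 3.1 above and 1.97 below the break); the normalization below is arbitrary:

```python
import numpy as np

# Broken power-law source-count distribution dN/dS: slope n1 above the
# break flux S_b, slope n2 below it, continuous at the break.
S_b, n1, n2, A = 2.1e-8, 3.1, 1.97, 1.0

def dnds(S):
    S = np.asarray(S, dtype=float)
    return A * np.where(S >= S_b, (S / S_b) ** (-n1), (S / S_b) ** (-n2))

# The two branches join continuously at the break, and the faint-end
# slope n2 < 2 means the total flux from faint sources stays finite.
print(float(dnds(S_b)))                    # = A at the break
print(float(dnds(2 * S_b) / dnds(S_b)))    # = 2^-3.1, the bright-end falloff
```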

  10. Unveiling the Gamma-Ray Source Count Distribution Below the Fermi Detection Limit with Photon Statistics

    DOE PAGES

    Zechlin, Hannes-S.; Cuoco, Alessandro; Donato, Fiorenza; ...

    2016-07-26

    The source-count distribution of gamma-ray sources as a function of flux, dN/dS, is one of the main quantities characterizing gamma-ray source populations. In this paper, we employ statistical properties of the Fermi Large Area Telescope (LAT) photon counts map to measure the composition of the extragalactic gamma-ray sky at high latitudes (|b| ≥ 30°) between 1 and 10 GeV. We present a new method, generalizing the use of standard pixel-count statistics, to decompose the total observed gamma-ray emission into (a) point-source contributions, (b) the Galactic foreground contribution, and (c) a truly diffuse isotropic background contribution. Using the 6 yr Fermi-LAT data set (P7REP), we show that the dN/dS distribution in the regime of so far undetected point sources can be consistently described with a power law with an index between 1.9 and 2.0. We measure dN/dS down to an integral flux of ~2 × 10^-11 cm^-2 s^-1, improving beyond the 3FGL catalog detection limit by about one order of magnitude. The overall dN/dS distribution is consistent with a broken power law, with a break at 2.1 (+1.0, -1.3) × 10^-8 cm^-2 s^-1. The power-law index n_1 = 3.1 (+0.7, -0.5) for bright sources above the break hardens to n_2 = 1.97 ± 0.03 for fainter sources below the break. A possible second break of the dN/dS distribution is constrained to lie at fluxes below 6.4 × 10^-11 cm^-2 s^-1 at the 95% confidence level. Finally, the high-latitude gamma-ray sky between 1 and 10 GeV is shown to be composed of ~25% point sources, ~69.3% diffuse Galactic foreground emission, and ~6% isotropic diffuse background.

  11. Adiabatic description of long range frequency sweeping

    NASA Astrophysics Data System (ADS)

    Breizman, Boris; Nyqvist, Robert; Lilley, Matthew

    2012-10-01

    A theoretical framework is developed to describe long range frequency sweeping events in the 1D electrostatic bump-on-tail model with fast particle sources and collisions. The model includes three collision operators (Krook, drag (dynamical friction) and velocity space diffusion), and allows for a general shape of the fast particle distribution function. The behavior of phase space holes and clumps is analyzed, and the effect of particle trapping due to separatrix expansion is discussed. With a fast particle distribution function whose slope decays above the resonant phase velocity, hooked frequency sweeping is found for holes in the presence of drag collisions alone.

  12. Comparison of the bidirectional reflectance distribution function of various surfaces

    NASA Technical Reports Server (NTRS)

    Fernandez, Rene; Seasholtz, Richard G.; Oberle, Lawrence G.; Kadambi, Jaikrishnan R.

    1988-01-01

    Described is the development and use of a system to measure the Bidirectional Reflectance Distribution Function (BRDF) of various surfaces. The BRDF measurements are used in the analysis and design of optical measurement systems, such as laser anemometers. An argon ion laser (514 nm) is the light source. Preliminary results are presented for eight samples: two glossy black paints, two flat black paints, black glass, sand blasted aluminum, unworked aluminum, and a white paint. A BaSO4 white reflectance standard was used as the reference sample throughout the tests. The reflectance characteristics of these surfaces are compared.
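A relative BRDF measurement against a diffuse white standard, as in the setup described, reduces to a ratio of detector signals scaled by the standard's known BRDF (ρ/π for a near-Lambertian BaSO4 plaque); the signal values and plaque reflectance below are made up for illustration:

```python
import numpy as np

# Relative BRDF measurement: the sample BRDF equals the ratio of the
# detector signal from the sample to that from the reference standard,
# times the standard's BRDF, rho_ref / pi for a Lambertian reflector.
def brdf_relative(V_sample, V_ref, rho_ref=0.98):
    return (V_sample / V_ref) * (rho_ref / np.pi)

V_ref = 4.20        # detector signal from the BaSO4 standard (made up)
V_sample = 0.21     # detector signal from a flat black paint (made up)
print(brdf_relative(V_sample, V_ref))   # ~0.0156 sr^-1
```

Referencing to a standard in this way cancels the detector responsivity and the incident irradiance, which is why a BaSO4 plaque was used throughout the tests.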

  13. Nonequilibrium approach regarding metals from a linearised kappa distribution

    NASA Astrophysics Data System (ADS)

    Domenech-Garret, J. L.

    2017-10-01

    The widely used kappa distribution functions develop high-energy tails through an adjustable kappa parameter. The aim of this work is to show that such a parameter can itself be regarded as a function, which entangles information about the sources of disequilibrium. We first derive and analyse an expanded Fermi-Dirac kappa distribution. Later, we use this expanded form to obtain an explicit analytical expression for the kappa parameter of a heated metal to which an external electric field is applied. We show that such a kappa index causes departures from equilibrium that depend on the relevant physical magnitudes. Finally, we study the role of temperature and electric field on this parameter, which characterises the electron population of a metal out of equilibrium.
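The standard (non-Fermi-Dirac) kappa distribution that this work generalizes can be sketched to show its two limits: it tends to a Maxwellian as kappa grows large and develops a fat power-law tail at small kappa. The form and symbols below are the standard ones, not the paper's expansion:

```python
import numpy as np

# 1D kappa distribution (unnormalized):
#   f(v) ~ (1 + v^2 / (kappa * theta^2))^(-(kappa + 1))
# For kappa -> infinity this tends to the Maxwellian exp(-v^2 / theta^2);
# for small kappa the high-velocity tail falls off only as a power law.
def kappa_dist(v, theta=1.0, kappa=3.0):
    return (1.0 + v**2 / (kappa * theta**2)) ** (-(kappa + 1.0))

v = np.linspace(0, 5, 501)
maxwell = np.exp(-v**2)

print(np.max(np.abs(kappa_dist(v, kappa=1e6) - maxwell)))      # ~ Maxwellian
print(kappa_dist(np.array([4.0]), kappa=2.0)[0] / np.exp(-16.0))  # fat tail
```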

  14. Barium-Dispenser Thermionic Cathode

    NASA Technical Reports Server (NTRS)

    Wintucky, Edwin G.; Green, M.; Feinleib, M.

    1989-01-01

    Improved reservoir cathode serves as intense source of electrons required for high-frequency and often high-output-power, linear-beam tubes, for which long operating lifetime important consideration. High emission-current densities obtained through use of emitting surface of relatively-low effective work function and narrow work-function distribution, consisting of coat of W/Os deposited by sputtering. Lower operating temperatures and enhanced electron emission consequently possible.

  15. Algorithms and physical parameters involved in the calculation of model stellar atmospheres

    NASA Astrophysics Data System (ADS)

    Merlo, D. C.

    This contribution summarizes the Doctoral Thesis presented at the Facultad de Matemática, Astronomía y Física, Universidad Nacional de Córdoba, for the degree of PhD in Astronomy. We analyze some algorithms and physical parameters involved in the calculation of model stellar atmospheres, such as atomic partition functions, functional relations connecting gaseous and electronic pressure, molecular formation, temperature distribution, chemical compositions, Gaunt factors, atomic cross-sections and scattering sources, as well as computational codes for calculating models. Special attention is paid to the integration of the hydrostatic equation. We compare our results with those obtained by other authors, finding reasonable agreement. We implement methods that modify the originally adopted temperature distribution in the atmosphere, in order to obtain a constant energy flux throughout. We find limitations and correct numerical instabilities. We integrate the transfer equation by directly solving the integral equation involving the source function. As a by-product, we calculate updated atomic partition functions of the light elements. We also discuss and enumerate carefully selected formulae for the monochromatic absorption and dispersion of some atomic and molecular species. Finally, we obtain a flexible code to calculate model stellar atmospheres.

  16. Impact of the vaginal applicator and dummy pellets on the dosimetry parameters of Cs-137 brachytherapy source.

    PubMed

    Sina, Sedigheh; Faghihi, Reza; Meigooni, Ali S; Mehdizadeh, Simin; Mosleh Shirazi, M Amin; Zehtabian, Mehdi

    2011-05-19

    In this study, dose rate distribution around a spherical 137Cs pellet source, from a low-dose-rate (LDR) Selectron remote afterloading system used in gynecological brachytherapy, has been determined using experimental and Monte Carlo simulation techniques. Monte Carlo simulations were performed using MCNP4C code, for a single pellet source in water medium and Plexiglas, and measurements were performed in Plexiglas phantom material using LiF TLD chips. Absolute dose rate distribution and the dosimetric parameters, such as dose rate constant, radial dose functions, and anisotropy functions, were obtained for a single pellet source. In order to investigate the effect of the applicator and surrounding pellets on dosimetric parameters of the source, the simulations were repeated for six different arrangements with a single active source and five non-active pellets inside central metallic tubing of a vaginal cylindrical applicator. In commercial treatment planning systems (TPS), the attenuation effects of the applicator and inactive spacers on total dose are neglected. The results indicate that this effect could lead to overestimation of the calculated F(r,θ), by up to 7% along the longitudinal axis of the applicator, especially beyond the applicator tip. According to the results obtained in this study, in a real situation in treatment of patients using cylindrical vaginal applicator and using several active pellets, there will be a large discrepancy between the result of superposition and Monte Carlo simulations.

  17. Thermal and Nonthermal Electron-ion Bremsstrahlung Spectrum from High-Temperature Plasmas

    NASA Technical Reports Server (NTRS)

    Jung, Young-Dae

    1994-01-01

    Electron-ion bremsstrahlung radiation from high-temperature plasmas is investigated. The first- and second-order Coulomb corrections to the nonrelativistic bremsstrahlung radiation power are obtained through the Elwert-Sommerfeld factor. Two cases of electron distributions are considered: thermal and nonthermal power-law distributions. The inclusion of Coulomb corrections is necessary for correctly deducing the electron distribution function from radiation data. These results provide correct information about electron distributions in high-temperature plasmas, such as inertial confinement fusion plasmas and astrophysical hot thermal and nonthermal X-ray sources.

  18. Simulation of angular and energy distributions of the PTB beta secondary standard.

    PubMed

    Faw, R E; Simons, G G; Gianakon, T A; Bayouth, J E

    1990-09-01

    Calculations and measurements have been performed to assess radiation doses delivered by the PTB Secondary Standard that employs 147Pm, 204Tl, and 90Sr:90Y sources in prescribed geometries, and features "beam-flattening" filters to assure uniformity of delivered doses within a 5-cm radius of the axis from source to detector plane. Three-dimensional, coupled, electron-photon Monte Carlo calculations, accounting for transmission through the source encapsulation and backscattering from the source mounting, led to energy spectra and angular distributions of electrons penetrating the source encapsulation that were used in the representation of pseudo sources of electrons for subsequent transport through the atmosphere, filters, and detectors. Calculations were supplemented by measurements made using bare LiF TLD chips on a thick polymethyl methacrylate phantom. Measurements using the 204Tl and 90Sr:90Y sources revealed that, even in the absence of the beam-flattening filters, delivered dose rates were very uniform radially. Dosimeter response functions (TLD:skin dose ratios) were calculated and confirmed experimentally for all three beta-particle sources and for bare LiF TLDs ranging in mass thickness from 10 to 235 mg cm-2.

  19. Cyber-Physical Trade-Offs in Distributed Detection Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S; Yao, David K. Y.; Chin, J. C.

    2010-01-01

    We consider a network of sensors that measure the scalar intensity due to the background, or a source combined with background, inside a two-dimensional monitoring area. The sensor measurements may be random due to the underlying nature of the source and background, due to sensor errors, or both. The detection problem is to infer the presence of a source of unknown intensity and location based on sensor measurements. In the conventional approach, detection decisions are made at the individual sensors and then combined at the fusion center, for example using the majority rule. We show that, at increased communication and computation cost, a more complex fusion algorithm based on the raw measurements achieves better detection performance under smooth and non-smooth source intensity functions, Lipschitz conditions on probability ratios, and a minimum packing number for the state space. We show that these conditions for trade-offs between the cyber costs and physical detection performance are applicable to two detection problems: (i) point radiation sources amidst background radiation, and (ii) sources and background with Gaussian distributions.
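    The conventional decision-fusion baseline described above (threshold at each sensor, majority vote at the fusion center) can be sketched in a few lines; the sensor count, intensities, and threshold below are hypothetical illustrations, not values from the paper:

```python
import numpy as np

def local_decisions(measurements, threshold):
    # Each sensor thresholds its own scalar intensity measurement
    # (the conventional, low-communication-cost scheme).
    return measurements > threshold

def majority_fusion(decisions):
    # Fusion center declares a source present if a majority of sensors do.
    return int(np.sum(decisions)) > len(decisions) // 2

# Hypothetical intensities at 9 sensors (arbitrary units)
background = np.array([1.1, 0.9, 1.0, 1.2, 0.8, 1.0, 1.1, 0.9, 1.0])
with_source = background + 0.5   # a source raises the intensity everywhere

quiet = majority_fusion(local_decisions(background, threshold=1.3))
alarm = majority_fusion(local_decisions(with_source, threshold=1.3))
```

    The measurement-based fusion the paper advocates would instead ship the raw intensities to the fusion center, at higher communication cost but with better detection performance.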

  20. Spectral Modeling of the EGRET 3EG Gamma Ray Sources Near the Galactic Plane

    NASA Technical Reports Server (NTRS)

    Bertsch, D. L.; Hartman, R. C.; Hunter, S. D.; Thompson, D. J.; Lin, Y. C.; Kniffen, D. A.; Kanbach, G.; Mayer-Hasselwander, H. A.; Reimer, O.; Sreekumar, P.

    1999-01-01

    The third EGRET catalog lists 84 sources within 10 deg of the Galactic Plane. Five of these are well-known spin-powered pulsars, 2 and possibly 3 others are blazars, and the remaining 74 are classified as unidentified, although 6 of these are likely to be artifacts of nearby strong sources. Several of the remaining 68 unidentified sources have been noted as having positional agreement with supernova remnants and OB associations. Others may be radio-quiet pulsars like Geminga, and still others may belong to a totally new class of sources. The energy spectral distributions of these sources are an important clue to their identification. In this paper, the spectra of the sources within 10 deg of the Galactic Plane are fit with three different functional forms: a single power law, two power laws, and a power law with an exponential cutoff. Where possible, the best fit is selected with statistical tests. Twelve sources, and possibly an additional 5, are found to have spectra that are fit by a breaking power law or by a power law with an exponential cutoff.

  1. Tracing Large-Scale Structure with Radio Sources

    NASA Astrophysics Data System (ADS)

    Lindsay, S. N.

    2015-02-01

    In this thesis, I investigate the spatial distribution of radio sources, and quantify their clustering strength over a range of redshifts, up to z ≈ 2.2, using various forms of the correlation function measured with data from several multi-wavelength surveys. I present the optical spectra of 30 radio AGN (S_1.4 > 100 mJy) in the GAMA/H-ATLAS fields, for which emission-line redshifts could be deduced, from observations of 79 target sources with the EFOSC2 spectrograph on the NTT. The mean redshift of these sources is z = 1.2; 12 were identified as quasars (40 per cent), and 6 redshifts (out of 24 targets) were found for AGN hosts of multiple radio components. While obtaining spectra for hosts of these multi-component sources is possible, their lower success rate highlights the difficulty in achieving a redshift-complete radio sample. Taking an existing spectroscopic redshift survey (GAMA) and radio sources from the FIRST survey (S_1.4 > 1 mJy), I then present a cross-matched radio sample with 1,635 spectroscopic redshifts with a median value of z = 0.34. The spatial correlation function of this sample is used to find the redshift-space (s_0) and real-space correlation lengths (r_0 ≈ 8.2 h^-1 Mpc), and a mass bias of ≈1.9. Insight into the redshift dependence of these quantities is gained by using the angular correlation function and Limber inversion to measure the same spatial clustering parameters. Photometric redshifts from SDSS/UKIDSS are incorporated to produce a larger matched radio sample at z ≈ 0.48 (and low- and high-redshift subsamples at z ≈ 0.30 and z ≈ 0.65), while their redshift distribution is subtracted from that taken from the SKADS radio simulations to estimate the redshift distribution of the remaining unmatched sources (z ≈ 1.55). The observed bias evolution over this redshift range is compared with model predictions based on the SKADS simulations, with good agreement at low redshift. The bias found at high redshift significantly exceeds these predictions, however, suggesting a more massive population of galaxies than expected, either due to the relative proportions of different radio sources, or a greater typical halo mass for the high-redshift sources. Finally, the reliance on a model redshift distribution to reach to higher redshifts is removed, as the angular cross-correlation function is used with deep VLA data (S_1.4 > 90 μJy) and optical/IR data from VIDEO/CFHTLS (K_s < 23.5) over 1 square degree. With high-quality photometric redshifts up to z ≈ 4, and a high signal-to-noise clustering measurement (due to the ~100,000 K_s-selected galaxies), I am able to find the bias of a matched sample of only 766 radio sources (as well as of the VIDEO sources), divided into 4 redshift bins reaching a median bias at z ≈ 2.15. Again, at high redshift, the measured bias appears to exceed the prediction made from the SKADS simulations. Applying luminosity cuts to the radio sample at L > 10^23 W Hz^-1 and higher (removing any non-AGN sources), I find a bias of 8-10 at z ≈ 1.5, considerably higher than for the full sample, and consistent with the more numerous FRI AGN having similar mass to the FRIIs (M ≈ 10^14 M_⊙), contrary to the assumptions made in the SKADS simulations. Applying this adjustment to the model bias produces a better fit to the observations for the FIRST radio sources cross-matched with GAMA/SDSS/UKIDSS, as well as for the high-redshift radio sources in VIDEO. Therefore, I have shown that we require a more robust model of the evolution of AGN, and their relation to the underlying dark matter distribution. In particular, understanding these quantities for the abundant FRI population is crucial if we are to use such sources to probe the cosmological model, as has been suggested by a number of authors (e.g. Raccanelli et al., 2012; Camera et al., 2012; Ferramacho et al., 2014).

  2. Microlensing of an extended source by a power-law mass distribution

    NASA Astrophysics Data System (ADS)

    Congdon, Arthur B.; Keeton, Charles R.; Osmer, S. J.

    2007-03-01

    Microlensing promises to be a powerful tool for studying distant galaxies and quasars. As the data and models improve, there are systematic effects that need to be explored. Quasar continuum and broad-line regions may respond differently to microlensing due to their different sizes; to understand this effect, we study microlensing of finite sources by a mass function of stars. We find that microlensing is insensitive to the slope of the mass function but does depend on the mass range. For negative-parity images, diluting the stellar population with dark matter increases the magnification dispersion for small sources and decreases it for large sources. This implies that the quasar continuum and broad-line regions may experience very different microlensing in negative-parity lensed images. We confirm earlier conclusions that the surface brightness profile and geometry of the source have little effect on microlensing. Finally, we consider non-circular sources. We show that elliptical sources that are aligned with the direction of shear have larger magnification dispersions than sources with perpendicular alignment, an effect that becomes more prominent as the ellipticity increases. Elongated sources can lead to more rapid variability than circular sources, which raises the prospect of using microlensing to probe source shape.

  3. Physical transport properties of marine microplastic pollution

    NASA Astrophysics Data System (ADS)

    Ballent, A.; Purser, A.; Mendes, P. de Jesus; Pando, S.; Thomsen, L.

    2012-12-01

    Given the complexity of quantitative collection, knowledge of the distribution of microplastic pollution in many regions of the world ocean is patchy, both spatially and temporally, especially for the subsurface environment. However, with knowledge of the typical hydrodynamic behavior of waste plastic material, models predicting the dispersal of pelagic and benthic plastics from land sources into the ocean are possible. Here we investigate three aspects of plastic distribution and transport in European waters. First, we assess patterns in the distribution of plastics found in fluvial strandlines of the North Sea and how distribution may be related to flow velocities and distance from source. Second, we model transport of non-buoyant preproduction pellets in the Nazaré Canyon of Portugal using the MOHID system after assessing the density, settling velocity, and critical and depositional shear stress characteristics of such waste plastics. Third, we investigate the effect of surface turbulence and high pressures on a range of marine plastic debris categories (various densities, degradation states and shapes tested) in an experimental water column simulator tank and pressure laboratory. Plastics deposited on North Sea strandlines varied greatly spatially, as a function of material composition and distance from source. Model outputs indicated that such dense preproduction pellets are likely transported up and down canyon as a function of tidal forces, with only very minor net down-canyon movement. Behaviour of plastic fragments under turbulence varied greatly, with the dimensions of the material, as well as density, playing major determining roles. Pressure was shown to affect the hydrodynamic behaviour of only low-density foam plastics at pressures ≥ 60 bar.

  4. Microwave Assisted Helicon Plasmas

    NASA Astrophysics Data System (ADS)

    McKee, John; Caron, David; Jemiolo, Andrew; Scime, Earl

    2017-10-01

    The use of two (or more) rf sources at different frequencies is a common technique in the plasma processing industry to control ion energy characteristics separately from plasma generation. A similar approach is presented here with the focus on modifying the electron population in argon and helium plasmas. The plasma is generated by a helicon source at a frequency f0 = 13.56 MHz. Microwaves of frequency f1 = 2.45 GHz are then injected into the helicon source chamber perpendicular to the background magnetic field. The microwaves damp on the electrons via X-mode Electron Cyclotron Heating (ECH) at the upper hybrid resonance, providing additional energy input into the electrons. The effects of this secondary-source heating on electron density, temperature, and energy distribution function are examined and compared to helicon-only single source plasmas as well as numeric models suggesting that the heating is not evenly distributed. Optical Emission Spectroscopy (OES) is used to examine the impact of the energetic tail of the electron distribution on ion and neutral species via collisional excitation. Large enhancements of neutral spectral lines are observed in both Ar and He. While small enhancement of ion lines is seen in Ar, ion lines not normally present in He are observed during microwave injection. U.S. National Science Foundation Grant No. PHY-1360278.

  5. Method and system for producing sputtered thin films with sub-angstrom thickness uniformity or custom thickness gradients

    DOEpatents

    Folta, James A.; Montcalm, Claude; Walton, Christopher

    2003-01-01

    A method and system for producing a thin film with highly uniform (or highly accurate custom graded) thickness on a flat or graded substrate (such as concave or convex optics), by sweeping the substrate across a vapor deposition source with controlled (and generally, time-varying) velocity. In preferred embodiments, the method includes the steps of measuring the source flux distribution (using a test piece that is held stationary while exposed to the source), calculating a set of predicted film thickness profiles, each film thickness profile assuming the measured flux distribution and a different one of a set of sweep velocity modulation recipes, and determining from the predicted film thickness profiles a sweep velocity modulation recipe which is adequate to achieve a predetermined thickness profile. Aspects of the invention include a practical method of accurately measuring source flux distribution, and a computer-implemented method employing a graphical user interface to facilitate convenient selection of an optimal or nearly optimal sweep velocity modulation recipe to achieve a desired thickness profile on a substrate. Preferably, the computer implements an algorithm in which many sweep velocity function parameters (for example, the speed at which each substrate spins about its center as it sweeps across the source) can be varied or set to zero.
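    The core relation behind the sweep-velocity recipes, that the thickness a substrate point accumulates is the source flux weighted by dwell time (so slower sweeps deposit more), can be sketched as follows; the Gaussian flux profile and velocity values are illustrative assumptions, not measured data from the patent:

```python
import numpy as np

def deposited_thickness(flux, velocity, dx):
    # A substrate point sweeping across a 1-D flux profile dwells dx / v(x)
    # in each slice, so accumulated thickness = sum of flux * dwell time.
    return np.sum(flux * dx / velocity)

# Hypothetical Gaussian source-flux profile along a 10 cm sweep path
x = np.linspace(-5.0, 5.0, 501)              # position (cm)
dx = x[1] - x[0]
flux = np.exp(-x**2 / 2.0)                   # deposition rate (arbitrary units)

fast = deposited_thickness(flux, np.full_like(x, 2.0), dx)   # 2 cm/s sweep
slow = deposited_thickness(flux, np.full_like(x, 1.0), dx)   # 1 cm/s sweep
# Halving the sweep velocity doubles the deposited thickness
```

    A velocity modulation recipe in the patent's sense would make `velocity` a function of position, tuned so that the predicted thickness profile matches the desired uniform or graded target.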

  6. Small-scale spatial variability of soil microbial community composition and functional diversity in a mixed forest

    NASA Astrophysics Data System (ADS)

    Wang, Qiufeng; Tian, Jing; Yu, Guirui

    2014-05-01

    Patterns in the spatial distribution of organisms provide important information about mechanisms that regulate the diversity and complexity of soil ecosystems. Information on the spatial distribution of microbial community composition and functional diversity is therefore urgently needed. The spatial variability on a 26×36 m plot and the vertical distribution (0-10 cm and 10-20 cm) of soil microbial community composition and functional diversity were studied in a natural broad-leaved Korean pine (Pinus koraiensis) mixed forest soil in Changbai Mountain. The phospholipid fatty acid (PLFA) pattern was used to characterize the soil microbial community composition and was compared with the community substrate utilization pattern obtained using Biolog. Bacterial biomass dominated and showed higher variability than fungal biomass at all scales examined. Microbial biomass decreased as soil depth increased and showed less variability in the lower 10-20 cm soil layer. The Shannon-Weaver index for microbial functional diversity showed higher variability in the upper 0-10 cm than in the lower 10-20 cm soil layer. Carbohydrates, carboxylic acids, polymers and amino acids were the main carbon sources, possessing higher utilization efficiency or intensity, and these four carbon source types contributed to the differentiation of soil microbial communities. This study suggests high diversity and complexity in this mixed forest ecosystem. Determining the driving factors behind this spatial variability of microorganisms is the next step of our study.

  7. A mesostate-space model for EEG and MEG.

    PubMed

    Daunizeau, Jean; Friston, Karl J

    2007-10-15

    We present a multi-scale generative model for EEG that entails a minimum number of assumptions about evoked brain responses, namely: (1) bioelectric activity is generated by a set of distributed sources, (2) the dynamics of these sources can be modelled as random fluctuations about a small number of mesostates, (3) mesostates evolve in a temporally structured way and are functionally connected (i.e. influence each other), and (4) the number of mesostates engaged by a cognitive task is small (e.g. between one and a few). A Variational Bayesian learning scheme is described that furnishes the posterior density on the model's parameters and its evidence. Since the number of meso-sources specifies the model, the model evidence can be used to compare models and find the optimum number of meso-sources. In addition to estimating the dynamics at each cortical dipole, the mesostate-space model and its inversion provide a description of brain activity at the level of the mesostates (i.e. in terms of the dynamics of meso-sources that are distributed over dipoles). The inclusion of a mesostate level allows one to compute posterior probability maps of each dipole being active (i.e. belonging to an active mesostate). Critically, this model accommodates constraints on the number of meso-sources, while retaining the flexibility of distributed source models in explaining data. In short, it bridges the gap between standard distributed and equivalent current dipole models. Furthermore, because it is explicitly spatiotemporal, the model can embed any stochastic dynamical causal model (e.g. a neural mass model) as a Markov process prior on the mesostate dynamics. The approach is evaluated and compared to standard inverse EEG techniques, using synthetic data and real data. The results demonstrate the added value of the mesostate-space model and its variational inversion.

  8. Analytical dose evaluation of neutron and secondary gamma-ray skyshine from nuclear facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hayashi, K.; Nakamura, T.

    1985-11-01

    The skyshine dose distributions of neutron and secondary gamma rays were calculated systematically using the Monte Carlo method for distances up to 2 km from the source. The energy of source neutrons ranged from thermal to 400 MeV; their emission angle, from 0 to 90 deg from the vertical, was treated with a distribution of the direction cosine containing five equal intervals. Calculated dose distributions D(r) were fitted to the formula D(r) = Q exp(-r/λ)/r. The values of Q and λ are slowly varying functions of energy. This formula was applied to the benchmark problems of neutron skyshine from fission, fusion, and accelerator facilities, and good agreement was achieved. This formula will be quite useful for shielding designs of various nuclear facilities.
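    A minimal sketch of evaluating the fitted formula D(r) = Q exp(-r/λ)/r; the Q and λ values below are placeholders, since the paper gives them as energy-dependent tabulations rather than single numbers:

```python
import numpy as np

def skyshine_dose(r, Q, lam):
    # Fitted skyshine formula from the abstract: D(r) = Q * exp(-r / lambda) / r,
    # where Q and lambda are slowly varying functions of source-neutron energy.
    return Q * np.exp(-r / lam) / r

# Distances up to 2 km, as in the study; Q and lambda here are hypothetical.
r = np.array([100.0, 500.0, 1000.0, 2000.0])   # metres
dose = skyshine_dose(r, Q=1.0e6, lam=600.0)
```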

  9. Virtual Solar Observatory Distributed Query Construction

    NASA Technical Reports Server (NTRS)

    Gurman, J. B.; Dimitoglou, G.; Bogart, R.; Davey, A.; Hill, F.; Martens, P.

    2003-01-01

    Through a prototype implementation (Tian et al., this meeting) the VSO has already demonstrated the capability of unifying geographically distributed data sources following the Web Services paradigm and utilizing mechanisms such as the Simple Object Access Protocol (SOAP). So far, four participating sites (Stanford, Montana State University, National Solar Observatory and the Solar Data Analysis Center) permit Web-accessible, time-based searches that allow browse access to a number of diverse data sets. Our latest work includes the extension of the simple, time-based queries to include numerous other searchable observation parameters. For VSO users, this extended functionality enables more refined searches. For the VSO, it is a proof of concept that more complex, distributed queries can be effectively constructed and that results from heterogeneous, remote sources can be synthesized and presented to users as a single, virtual data product.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, J.A.; Brasseur, G.P.; Zimmerman, P.R.

    Using the hydroxyl radical field calibrated to the methyl chloroform observations, the globally averaged release of methane and its spatial and temporal distribution were investigated. Two source-function models of the spatial and temporal distribution of the flux of methane to the atmosphere were developed. The first model was based on the assumption that methane is emitted as a proportion of net primary productivity (NPP). With the average hydroxyl radical concentration fixed, the methane source term was computed as ~623 Tg CH4, giving an atmospheric lifetime for methane of ~8.3 years. The second model identified source regions for methane from rice paddies, wetlands, enteric fermentation, termites, and biomass burning based on high-resolution land-use data. This methane source distribution resulted in an estimate of the global total methane source of ~611 Tg CH4, giving an atmospheric lifetime for methane of ~8.5 years. The most significant difference between the two models was in the predicted methane fluxes over China and South East Asia, the location of most of the world's rice paddies. Using a recent measurement of the reaction rate of hydroxyl radical and methane leads to estimates of the global total methane source for SF1 of ~524 Tg CH4, giving an atmospheric lifetime of ~10.0 years, and for SF2 of ~514 Tg CH4, yielding a lifetime of ~10.2 years.

  11. An analytical method based on multipole moment expansion to calculate the flux distribution in Gammacell-220

    NASA Astrophysics Data System (ADS)

    Rezaeian, P.; Ataenia, V.; Shafiei, S.

    2017-12-01

    In this paper, the flux of photons inside the irradiation cell of the Gammacell-220 is calculated using an analytical method based on multipole moment expansion. The flux of the photons inside the irradiation cell is expressed as a function of monopole, dipole and quadrupole moments in the Cartesian coordinate system. For the source distribution of the Gammacell-220, the values of the multipole moments are obtained by direct integration. To validate the presented method, the flux distribution inside the irradiation cell was also determined using MCNP simulations as well as experimental measurements. To measure the flux inside the irradiation cell, Amber dosimeters were employed. The calculated values of the flux were in agreement with the values obtained by simulations and measurements, especially in the central zones of the irradiation cell. To show that the present method is a good approximation for determining the flux in the irradiation cell, the values of the multipole moments were also obtained by fitting the simulation and experimental data using the Levenberg-Marquardt algorithm. The present method yields reasonable results for any source distribution, even one without any symmetry, which makes it a powerful tool for source load planning.
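    The kind of expansion described, moments computed by direct integration over the source distribution and the field then evaluated from monopole, dipole and quadrupole terms, can be illustrated for a scalar 1/|r − r′| kernel. This is a schematic stand-in for the paper's flux kernel, and the two-source geometry is hypothetical:

```python
import numpy as np

def multipole_moments(positions, strengths):
    # Monopole, dipole and (traceless) quadrupole moments of a discrete
    # source distribution, by direct summation over the sources.
    q = strengths.sum()
    p = (strengths[:, None] * positions).sum(axis=0)
    r2 = (positions ** 2).sum(axis=1)
    Q = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            Q[i, j] = (strengths * (3.0 * positions[:, i] * positions[:, j]
                                    - (i == j) * r2)).sum()
    return q, p, Q

def expansion(r, q, p, Q):
    # Scalar 1/|r - r'| field truncated at the quadrupole term.
    rn = np.linalg.norm(r)
    rhat = r / rn
    return q / rn + p @ rhat / rn**2 + 0.5 * (rhat @ Q @ rhat) / rn**3

# Two equal sources symmetric about the origin: the dipole moment vanishes,
# so the far field is monopole plus quadrupole only.
pos = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, -1.0]])
s = np.array([1.0, 1.0])
q, p, Q = multipole_moments(pos, s)

field_point = np.array([0.0, 0.0, 10.0])
approx = expansion(field_point, q, p, Q)
exact = sum(si / np.linalg.norm(field_point - ri) for si, ri in zip(s, pos))
```

    The truncated expansion agrees with the exact two-source sum to better than one part in a thousand at this field point, which is the sense in which a few moments can stand in for the full source distribution.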

  12. Wall-loss distribution of charge breeding ions in an electron cyclotron resonance ion source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeong, S. C.; Oyaizu, M.; Imai, N.

    2011-03-15

    The ion loss distribution in an electron cyclotron resonance ion source (ECRIS) was investigated to understand the element dependence of the charge breeding efficiency in an electron cyclotron resonance (ECR) charge breeder. Radioactive 111In(1+) and 140Xe(1+) ions (typical nonvolatile and volatile elements, respectively) were injected into the ECR charge breeder at the Tokai Radioactive Ion Accelerator Complex to breed their charge states. Their respective residual activities on the sidewall of the cylindrical plasma chamber of the source were measured after charge breeding as functions of the azimuthal angle and longitudinal position, and two-dimensional distributions of ions lost during charge breeding in the ECRIS were obtained. These distributions had different azimuthal symmetries. The origins of these different azimuthal symmetries are qualitatively discussed by analyzing the differences and similarities in the observed wall-loss patterns. The implications for improving the charge breeding efficiencies of nonvolatile elements in ECR charge breeders are described. The similarities represent universal ion loss characteristics in an ECR charge breeder, which are different from the loss patterns of electrons on the ECRIS wall.

  13. Acoustic Source Localization via Time Difference of Arrival Estimation for Distributed Sensor Networks Using Tera-Scale Optical Core Devices

    DOE PAGES

    Imam, Neena; Barhen, Jacob

    2009-01-01

    For real-time acoustic source localization applications, one of the primary challenges is the considerable growth in computational complexity associated with the emergence of ever larger, active or passive, distributed sensor networks. These sensors rely heavily on battery-operated system components to achieve highly functional automation in signal and information processing. In order to keep communication requirements minimal, it is desirable to perform as much processing on the receiver platforms as possible. However, the complexity of the calculations needed to achieve accurate source localization increases dramatically with the size of sensor arrays, resulting in substantial growth of computational requirements that cannot be readily met with standard hardware. One option to meet this challenge builds upon the emergence of digital optical-core devices. The objective of this work was to explore the implementation of key building-block algorithms used in underwater source localization on the optical-core digital processing platform recently introduced by Lenslet Inc. This demonstration of considerably faster signal processing capability should be of substantial significance to the design and innovation of future generations of distributed sensor networks.
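    One of the key building-block algorithms referred to, time-difference-of-arrival estimation, reduces to locating the peak of a cross-correlation between two sensors' signals. This plain NumPy sketch with a synthetic pulse (not real sensor data, and not the paper's optical-core implementation) shows the core computation:

```python
import numpy as np

def tdoa(sig_a, sig_b, fs):
    # Time difference of arrival: the lag (in seconds) at which the full
    # cross-correlation of the two sensor signals peaks.
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = int(np.argmax(corr)) - (len(sig_b) - 1)
    return lag / fs

fs = 1000.0                                           # sample rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs)
pulse = np.exp(-((t - 0.3) ** 2) / (2 * 0.01 ** 2))   # arrival at sensor A
delayed = np.roll(pulse, 50)                          # sensor B hears it 50 ms later
delta_t = tdoa(delayed, pulse, fs)                    # recovers the 0.05 s delay
```

    With TDOAs from several sensor pairs and known sensor positions, the source location follows from intersecting the corresponding hyperbolae; it is that larger solve whose cost grows rapidly with array size.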

  14. An Ultradeep Chandra Catalog of X-Ray Point Sources in the Galactic Center Star Cluster

    NASA Astrophysics Data System (ADS)

    Zhu, Zhenlin; Li, Zhiyuan; Morris, Mark R.

    2018-04-01

    We present an updated catalog of X-ray point sources in the inner 500″ (∼20 pc) of the Galactic center (GC), where the nuclear star cluster (NSC) stands, based on a total of ∼4.5 Ms of Chandra observations taken from 1999 September to 2013 April. This ultradeep data set offers unprecedented sensitivity for detecting X-ray sources in the GC, down to an intrinsic 2–10 keV luminosity of 1.0 × 10^31 erg s^-1. A total of 3619 sources are detected in the 2–8 keV band, among which ∼3500 are probable GC sources and ∼1300 are new identifications. The GC sources collectively account for ∼20% of the total 2–8 keV flux from the inner 250″ region where detection sensitivity is the greatest. Taking advantage of this unprecedented sample of faint X-ray sources that primarily traces the old stellar populations in the NSC, we revisit global source properties, including long-term variability, cumulative spectra, luminosity function, and spatial distribution. Based on the equivalent width and relative strength of the iron lines, we suggest that in addition to the arguably predominant population of magnetic cataclysmic variables (CVs), nonmagnetic CVs contribute substantially to the detected sources, especially in the lower-luminosity group. On the other hand, the X-ray sources have a radial distribution closely following the stellar mass distribution in the NSC, but much flatter than that of the known X-ray transients, which are presumably low-mass X-ray binaries (LMXBs) caught in outburst. This, together with the very modest long-term variability of the detected sources, strongly suggests that quiescent LMXBs are a minor (less than a few percent) population.

  15. Reconstruction of far-field tsunami amplitude distributions from earthquake sources

    USGS Publications Warehouse

    Geist, Eric L.; Parsons, Thomas E.

    2016-01-01

    The probability distribution of far-field tsunami amplitudes is explained in relation to the distribution of seismic moment at subduction zones. Tsunami amplitude distributions at tide gauge stations follow a similar functional form, well described by a tapered Pareto distribution that is parameterized by a power-law exponent and a corner amplitude. Distribution parameters are first established for eight tide gauge stations in the Pacific, using maximum likelihood estimation. A procedure is then developed to reconstruct the tsunami amplitude distribution that consists of four steps: (1) define the distribution of seismic moment at subduction zones; (2) establish a source-station scaling relation from regression analysis; (3) transform the seismic moment distribution to a tsunami amplitude distribution for each subduction zone; and (4) mix the transformed distribution for all subduction zones to an aggregate tsunami amplitude distribution specific to the tide gauge station. The tsunami amplitude distribution is adequately reconstructed for four tide gauge stations using globally constant seismic moment distribution parameters established in previous studies. In comparisons to empirical tsunami amplitude distributions from maximum likelihood estimation, the reconstructed distributions consistently exhibit higher corner amplitude values, implying that in most cases, the empirical catalogs are too short to include the largest amplitudes. Because the reconstructed distribution is based on a catalog of earthquakes that is much larger than the tsunami catalog, it is less susceptible to the effects of record-breaking events and more indicative of the actual distribution of tsunami amplitudes.
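    A hedged sketch of the tapered Pareto exceedance probability described above, using the Kagan-style functional form (power law with exponential taper); the parameter values are illustrative, not the stations' fitted values:

```python
import numpy as np

def tapered_pareto_sf(x, beta, x_t, x_c):
    # Survival (exceedance) function of a tapered Pareto distribution:
    # power-law decay with exponent beta above the threshold amplitude x_t,
    # tapered exponentially beyond the corner amplitude x_c.
    return (x_t / x) ** beta * np.exp((x_t - x) / x_c)

# Hypothetical tsunami amplitudes (m) and parameters
x = np.array([0.1, 0.5, 1.0, 5.0])
exceed = tapered_pareto_sf(x, beta=0.7, x_t=0.1, x_c=1.0)
```

    Maximum likelihood estimation at a tide gauge station then amounts to choosing beta and x_c that maximize the likelihood implied by this distribution over the station's amplitude catalog; the corner amplitude controls how quickly the largest amplitudes are suppressed.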

  16. Temporal evolution of the Green's function reconstruction in the seismic coda

    NASA Astrophysics Data System (ADS)

    Clerc, V.; Roux, P.; Campillo, M.

    2013-12-01

    In the presence of multiple scattering, the wavefield evolves towards an equipartitioned state, equivalent to ambient noise. Campillo and Paul (2003) reconstructed the surface-wave part of the Green's function between three pairs of stations in Mexico. Their data indicate that the time asymmetry between the causal and acausal parts of the Green's function is less pronounced when the correlation is performed in later windows of the coda. These results on the correlation of diffuse waves provide another perspective on Green's function reconstruction, one that is independent of the source distribution and suggests that, if the observation time is long enough, a single source could be sufficient. Roux et al. (2005) provide a theoretical frame for the reconstruction of the Green's function in a homogeneous medium. In a multiple-scattering medium with a single source, scatterers behave as secondary sources according to the Huygens principle. Coda waves arise from multiple scattering, a regime that can be approximated by diffusion at long lapse times. We express the temporal evolution of the correlation function between two receivers as a function of the secondary sources. We are able to predict the effect of the persistence of the net energy flux observed by Campillo and Paul (2003) in numerical simulations. The method is also effective for retrieving the scattering mean free path. We perform a partial reconstruction of the Green's function in a strongly scattering medium in numerical simulations. Predicting the flux asymmetry allows us to identify the parts of the coda that provide the same information as ambient-noise cross correlation.

  17. Potency backprojection

    NASA Astrophysics Data System (ADS)

    Okuwaki, R.; Kasahara, A.; Yagi, Y.

    2017-12-01

    The backprojection (BP) method has been one of the most powerful tools for tracking the seismic-wave sources of large and mega earthquakes. The BP method projects waveforms onto a candidate source point by stacking them with the theoretical travel-time shifts between the source point and the stations. Building on the BP method, the hybrid backprojection (HBP) method was developed to enhance the depth resolution of the projected images and to mitigate artifact imaging of the depth phases, both shortcomings of the BP method, by stacking cross-correlation functions of the observed waveforms with theoretically calculated Green's functions (GFs). The signal intensity of the BP/HBP image at a source point reflects how much of the observed waveform was radiated from that point. Since the amplitude of the GF associated with the slip rate grows with depth as the rigidity increases, the intensity of the BP/HBP image is inherently depth dependent. To compare the BP/HBP image directly with the corresponding slip distribution inferred from a waveform inversion, and to discuss rupture properties along the fault drawn from the high- and low-frequency waveforms with the BP/HBP methods and the waveform inversion, respectively, it is desirable to have variants of the BP/HBP methods that directly image the potency-rate-density distribution. Here we propose new formulations of the BP/HBP methods that image the potency-rate-density distribution by introducing alternative normalizing factors into the conventional formulations. For the BP method, the observed waveform is normalized by the maximum amplitude of the P phase of the corresponding GF. For the HBP method, we normalize the cross-correlation function by the squared sum of the GF. The normalized waveforms or cross-correlation functions are then stacked over all stations to enhance the signal-to-noise ratio.
    We will present performance tests of the new formulations using synthetic waveforms and real data from the Mw 8.3 2015 Illapel, Chile, earthquake, and further discuss the limitations of the new BP/HBP methods proposed in this study when they are used to explore the rupture properties of earthquakes.
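    The two normalizations described in the abstract can be sketched in a few lines. The toy Green's functions, waveforms, and P-phase window below are illustrative stand-ins, not the authors' data or exact formulation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sta, n_t = 8, 200

# Hypothetical Green's functions and observed waveforms per station; in
# practice the GF comes from an Earth model and the waveform from data.
greens = rng.normal(size=(n_sta, n_t))
waveforms = 3.0 * greens + 0.1 * rng.normal(size=(n_sta, n_t))

# BP-style normalization: divide each trace by the peak |P phase| of its GF,
# so the stacked image scales with potency rate rather than raw amplitude.
p_window = slice(0, 50)  # assumed P-phase window, illustrative
norm_bp = waveforms / np.abs(greens[:, p_window]).max(axis=1, keepdims=True)
bp_image = norm_bp.sum(axis=0)

# HBP-style normalization: cross-correlation divided by the squared sum of
# the GF, then stacked over stations.
cc = np.array([np.correlate(w, g, mode="same") for w, g in zip(waveforms, greens)])
norm_hbp = cc / (greens ** 2).sum(axis=1, keepdims=True)
hbp_image = norm_hbp.sum(axis=0)
print(bp_image.shape, hbp_image.shape)
```

    In this toy setup the HBP stack peaks at zero lag, since each waveform is a scaled copy of its GF; the normalization makes the peak height track the scaling factor (the potency-rate analog) rather than the GF amplitude.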

  18. Self-consistent current sheet structures in the quiet-time magnetotail

    NASA Technical Reports Server (NTRS)

    Holland, Daniel L.; Chen, James

    1993-01-01

    The structure of the quiet-time magnetotail is studied using a test-particle simulation. Vlasov equilibria are obtained in the regime where v(D) = E(y) c/B(z) is much less than the ion thermal velocity and are self-consistent in that the current and magnetic field satisfy Ampere's law. Force balance between the plasma and the magnetic field is satisfied everywhere. The global structure of the current sheet is found to depend critically on the source distribution function. The pressure tensor is nondiagonal in the current sheet, with anisotropic temperature. A kinetic mechanism is proposed whereby changes in the source distribution result in a thinning of the current sheet.

  19. Importance of vesicle release stochasticity in neuro-spike communication.

    PubMed

    Ramezani, Hamideh; Akan, Ozgur B

    2017-07-01

    The aim of this paper is to propose a stochastic model for the vesicle release process, a part of neuro-spike communication. We study the biological events occurring in this process and use microphysiological simulations to observe how these events function. Since the most important source of variability in vesicle release probability is the opening of voltage-dependent calcium channels (VDCCs), followed by the influx of calcium ions through these channels, we propose a stochastic model for this event while using a deterministic model for the other sources of variability. To capture the stochasticity of the calcium influx into the pre-synaptic neuron in our model, we study its statistics and find that it can be modeled by a distribution defined in terms of the Normal and Logistic distributions.

  20. A New Approach in Generating Meteorological Forecasts for Ensemble Streamflow Forecasting using Multivariate Functions

    NASA Astrophysics Data System (ADS)

    Khajehei, S.; Madadgar, S.; Moradkhani, H.

    2014-12-01

    The reliability and accuracy of hydrological predictions are subject to various sources of uncertainty, including meteorological forcing, initial conditions, model parameters and model structure. One approach to reducing the total uncertainty in hydrological applications is to reduce the uncertainty in the meteorological forcing using statistical methods based on conditional probability density functions (pdfs). However, current methods require assuming a Gaussian marginal distribution for the observed and modeled meteorology. Here we propose a Bayesian approach based on copula functions to develop the conditional distribution of the precipitation forecast needed to drive a hydrologic model for a sub-basin in the Columbia River Basin. Copula functions are introduced as an alternative approach for capturing the uncertainties related to meteorological forcing. Copulas are multivariate joint distributions built from univariate marginal distributions, capable of modeling the joint behavior of variables with any level of correlation and dependency. The method is applied to the monthly CPC forecast at 0.25 x 0.25 degree resolution to reproduce the PRISM dataset over 1970-2000. Results are compared with the Ensemble Pre-Processor approach, a common procedure used by National Weather Service River Forecast Centers, in reproducing the observed climatology during a ten-year verification period (2000-2010).
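    As a concrete illustration of the copula idea, here is a minimal sketch using a Gaussian copula with empirical margins; the abstract does not specify the copula family, so this choice, and all the synthetic "observed"/"forecast" data, are assumptions for demonstration only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Synthetic "observed" and "forecast" monthly precipitation with skewed
# (gamma) marginals and genuine dependence -- stand-ins for PRISM and CPC.
z = rng.normal(size=2000)
obs = stats.gamma(2.0).ppf(stats.norm.cdf(0.8 * z + 0.6 * rng.normal(size=2000)))
fcst = stats.gamma(2.0).ppf(stats.norm.cdf(z))

# Map each margin to standard-normal scores via its empirical CDF (the
# copula step): dependence is modeled free of the marginal shapes.
def to_normal_scores(x):
    ranks = stats.rankdata(x) / (len(x) + 1.0)
    return stats.norm.ppf(ranks)

u_obs, u_fcst = to_normal_scores(obs), to_normal_scores(fcst)
rho = np.corrcoef(u_obs, u_fcst)[0, 1]

# Conditional distribution of obs given a new forecast, in normal scores:
# N(rho * z_f, 1 - rho**2), then mapped back through the observed quantiles.
z_f = to_normal_scores(fcst)[np.argmin(np.abs(fcst - 2.0))]  # score of fcst ~ 2.0
cond_draws = stats.norm.rvs(rho * z_f, np.sqrt(1.0 - rho ** 2), size=500,
                            random_state=rng)
cond_precip = np.quantile(obs, stats.norm.cdf(cond_draws))
print(rho, cond_precip.mean())
```

    Notice that no Gaussian assumption is made about the precipitation margins themselves, which is the advantage over the conditional-pdf methods the abstract critiques.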

  1. Measuring Spatial Variability of Vapor Flux to Characterize Vadose-zone VOC Sources: Flow-cell Experiments

    DOE PAGES

    Mainhagu, Jon; Morrison, C.; Truex, Michael J.; ...

    2014-08-05

    A method termed vapor-phase tomography has recently been proposed to characterize the distribution of volatile organic contaminant mass in vadose-zone source areas and to measure the associated three-dimensional distributions of local contaminant mass discharge. The method is based on measuring the spatial variability of vapor flux; inherent to its effectiveness is the premise that the magnitudes and temporal variability of vapor concentrations measured at different monitoring points within the interrogated area will be a function of the geospatial positions of those points relative to the source location. A series of flow-cell experiments was conducted to evaluate this premise. A well-defined source zone was created by injection and extraction of a non-reactive gas (SF6). Spatial and temporal concentration distributions obtained from the tests were compared to simulations produced with a mathematical model describing advective and diffusive transport. Tests were conducted to characterize both the areal and the vertical components of the application. Decreases in concentration over time were observed for monitoring points located on the opposite side of the source zone from the local extraction point, whereas increases were observed for monitoring points located between the local extraction point and the source zone. The results illustrate that comparing temporal concentration profiles obtained at various monitoring points gives a general indication of the source location with respect to the extraction and monitoring points.

  2. What are the Shapes of Response Time Distributions in Visual Search?

    PubMed Central

    Palmer, Evan M.; Horowitz, Todd S.; Torralba, Antonio; Wolfe, Jeremy M.

    2011-01-01

    Many visual search experiments measure reaction time (RT) as their primary dependent variable. Analyses typically focus on mean (or median) RT. However, given enough data, the RT distribution can be a rich source of information. For this paper, we collected about 500 trials per cell per observer for both target-present and target-absent displays in each of three classic search tasks: feature search, with the target defined by color; conjunction search, with the target defined by both color and orientation; and spatial configuration search for a 2 among distractor 5s. This large data set allows us to characterize the RT distributions in detail. We present the raw RT distributions and fit several psychologically motivated functions (ex-Gaussian, ex-Wald, Gamma, and Weibull) to the data. We analyze and interpret parameter trends from these four functions within the context of theories of visual search. PMID:21090905
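    Fitting one of the named forms can be sketched with SciPy, whose `exponnorm` distribution is the ex-Gaussian with shape K = τ/σ. The RT parameter values below are illustrative, not the study's.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Synthetic reaction times (ms): a Gaussian stage plus an exponential tail,
# the classic ex-Gaussian decomposition (mu, sigma, tau are illustrative).
mu, sigma, tau = 400.0, 50.0, 150.0
rts = rng.normal(mu, sigma, size=5000) + rng.exponential(tau, size=5000)

# scipy's exponnorm is the ex-Gaussian parameterized by K = tau / sigma;
# fit() returns (K, loc, scale) = (tau/sigma, mu, sigma) by MLE.
K, loc, scale = stats.exponnorm.fit(rts)
tau_hat = K * scale
print(loc, scale, tau_hat)
```

    The same pattern (generate or load RTs, call `dist.fit`, compare log-likelihoods) extends to the Gamma and Weibull fits via `stats.gamma` and `stats.weibull_min`.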

  3. Aureole radiance field about a source in a scattering-absorbing medium.

    PubMed

    Zachor, A S

    1978-06-15

    A technique is described for computing the aureole radiance field about a point source in a medium that absorbs and scatters according to an arbitrary phase function. When applied to an isotropic source in a homogeneous medium, the method uses a double-integral transform which is evaluated recursively to obtain the aureole radiances contributed by successive scattering orders, as in the Neumann solution of the radiative transfer equation. The normalized total radiance field distribution and the variation of flux with field of view and range are given for three wavelengths in the uv and one in the visible, for a sea-level model atmosphere assumed to scatter according to a composite of the Rayleigh and modified Henyey-Greenstein phase functions. These results have application to the detection and measurement of uncollimated uv and visible sources at short ranges in the lower atmosphere.

  4. A GIS-based time-dependent seismic source modeling of Northern Iran

    NASA Astrophysics Data System (ADS)

    Hashemi, Mahdi; Alesheikh, Ali Asghar; Zolfaghari, Mohammad Reza

    2017-01-01

    The first step in any seismic hazard study is the definition of seismogenic sources and the estimation of magnitude-frequency relationships for each source. There is as yet no standard methodology for source modeling, and many researchers have worked on this topic. This study is an effort to define linear and area seismic sources for Northern Iran. The linear, or fault, sources are developed based on tectonic features and characteristic earthquakes, while the area sources are developed based on the spatial distribution of small to moderate earthquakes. Time-dependent recurrence relationships are developed for fault sources using a renewal approach, while time-independent frequency-magnitude relationships are proposed for area sources based on a Poisson process. GIS functionalities are used in this study to introduce and incorporate spatial-temporal and geostatistical indices in delineating area seismic sources. The proposed methodology is used to model seismic sources for an area of about 500 by 400 km around Tehran. Previous research and reports were studied to compile an earthquake/fault catalog that is as complete as possible. All events are transformed to a uniform magnitude scale; duplicate events and dependent shocks are removed. The completeness and time distribution of the compiled catalog are taken into account. The proposed area and linear seismic sources, in conjunction with the defined recurrence relationships, can be used for time-dependent probabilistic seismic hazard analysis of Northern Iran.

  5. SU-E-T-284: Revisiting Reference Dosimetry for the Model S700 Axxent 50 kVp Electronic Brachytherapy Source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hiatt, JR; Rivard, MJ

    2014-06-01

    Purpose: The model S700 Axxent electronic brachytherapy source by Xoft was characterized in 2006 by Rivard et al. The source design was modified in 2006 to include a plastic centering insert at the source tip to position the anode more accurately. The objectives of the current study were to establish an accurate Monte Carlo source model for simulation purposes, to characterize the new source dosimetrically and obtain its TG-43 brachytherapy dosimetry parameters, and to determine dose differences between the source with and without the centering insert. Methods: Design information from dissected sources and vendor-supplied CAD drawings was used to devise the source model for radiation transport simulations of dose distributions in a water phantom. Collision kerma was estimated as a function of radial distance, r, and polar angle, θ, for determination of reference TG-43 dosimetry parameters. Simulations were run for 10^10 histories, resulting in statistical uncertainties on the transverse plane of 0.03% at r=1 cm and 0.08% at r=10 cm. Results: The dose rate distribution in the transverse plane did not change by more than 2% between the 2006 model and the current study. While differences exceeding 15% were observed near the source distal tip, these diminished to within 2% for r>1.5 cm. Differences exceeding a factor of two were observed near θ=150° and in contact with the source, but diminished to within 20% at r=10 cm. Conclusions: Changes in source design influenced the overall dose rate and distribution by more than 2% over a third of the available solid angle external to the source. For clinical applications using balloons or applicators with tissue located within 5 cm of the source, dose differences exceeding 2% were observed only for θ>110°. This study carefully examined the current source geometry and presents a modern reference TG-43 dosimetry dataset for the model S700 source.

  6. Adiabatic description of long range frequency sweeping

    NASA Astrophysics Data System (ADS)

    Nyqvist, R. M.; Lilley, M. K.; Breizman, B. N.

    2012-09-01

    A theoretical framework is developed to describe long range frequency sweeping events in the 1D electrostatic bump-on-tail model with fast particle sources and collisions. The model includes three collision operators (Krook, drag (dynamical friction) and velocity space diffusion), and allows for a general shape of the fast particle distribution function. The behaviour of phase space holes and clumps is analysed in the absence of diffusion, and the effect of particle trapping due to separatrix expansion is discussed. With a fast particle distribution function whose slope decays above the resonant phase velocity, hooked frequency sweeping is found for holes in the presence of drag collisions alone.

  7. The social welfare function and individual responsibility: some theoretical issues and empirical evidence.

    PubMed

    Dolan, Paul; Tsuchiya, Aki

    2009-01-01

    The literature on income distribution has attempted to evaluate different degrees of inequality using a social welfare function (SWF) approach. However, it has largely ignored the source of such inequalities, and has thus failed to consider different degrees of inequity. The literature on egalitarianism has addressed issues of equity, largely in relation to individual responsibility. This paper builds upon these two literatures, and introduces individual responsibility into the SWF. Results from a small-scale study of people's preferences in relation to the distribution of health benefits are presented to illustrate how the parameter values of a SWF might be determined.

  8. Small-scale seismic inversion using surface waves extracted from noise cross correlation.

    PubMed

    Gouédard, Pierre; Roux, Philippe; Campillo, Michel

    2008-03-01

    Green's functions can be retrieved between receivers from the correlation of ambient seismic noise or with an appropriate set of randomly distributed sources. This principle is demonstrated in small-scale geophysics using noise sources generated by human steps during a 10-min walk in the alignment of a 14-m-long accelerometer line array. The time-domain correlation of the records yields two surface wave modes extracted from the Green's function between each pair of accelerometers. A frequency-wave-number Fourier analysis yields each mode contribution and their dispersion curve. These dispersion curves are then inverted to provide the one-dimensional shear velocity of the near surface.
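    The core of the method, retrieving the inter-receiver travel time by correlating records of ambient or human-generated noise, can be sketched as follows. The sampling rate, delay, and noise levels are illustrative assumptions, not the experiment's values.

```python
import numpy as np

rng = np.random.default_rng(4)
fs = 100.0   # sampling rate (Hz), illustrative
n = 20000
delay = 25   # A-to-B propagation delay, in samples

# A random "footstep" noise source recorded at two receivers: receiver B
# sees the same wavefield as A, shifted by the travel time, plus local noise.
source = rng.normal(size=n)
rec_a = source + 0.2 * rng.normal(size=n)
rec_b = np.roll(source, delay) + 0.2 * rng.normal(size=n)

# Cross-correlating the two noise records peaks at the inter-receiver
# travel time: the essence of Green's-function retrieval from noise.
cc = np.correlate(rec_b, rec_a, mode="full")
lag = np.argmax(cc) - (n - 1)
print(lag / fs)  # travel time in seconds
```

    In the field experiment, repeating this for every accelerometer pair and picking the surface-wave arrivals in a frequency-wavenumber analysis yields the dispersion curves that are inverted for shear velocity.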

  9. Design methodology and results evaluation of a heating functionality in modular lab-on-chip systems

    NASA Astrophysics Data System (ADS)

    Streit, Petra; Nestler, Joerg; Shaporin, Alexey; Graunitz, Jenny; Otto, Thomas

    2018-06-01

    Lab-on-a-chip (LoC) systems offer the opportunity of fast and customized biological analyses executed at the ‘point-of-need’ without expensive lab equipment. Some biological processes need temperature treatment, so it is important to ensure a defined and stable temperature distribution in the biosensor area. An integrated heating functionality is realized with discrete resistive heating elements that also provide temperature measurement. The focus of this contribution is a design methodology and evaluation technique for the temperature distribution in the biosensor area with regard to the thermal-electrical behaviour of the heat sources. Furthermore, a sophisticated control of the biosensor temperature is proposed. A finite element (FE) model with one or more integrated heat sources in a polymer-based LoC system is used to investigate the impact of the number and arrangement of heating elements on the temperature distribution around the heating elements and in the biosensor area. Based on this model, various LoC systems are designed and fabricated. Electrical characterization of the heat sources and independent temperature measurements with an infrared technique are performed to verify the model parameters and prove the simulation approach. The FE model and the proposed methodology are the foundation for optimization and evaluation of new designs with regard to the temperature requirements of the biosensor. Furthermore, a linear dependency of the heater temperature on the electric current is demonstrated in the targeted temperature range of 20 °C to 70 °C, enabling the use of the heating functionality for biological reactions requiring a steady-state temperature up to 70 °C. The correlation between the heater and biosensor area temperatures is derived for direct control through the heating current.

  10. The RMS survey: galactic distribution of massive star formation

    NASA Astrophysics Data System (ADS)

    Urquhart, J. S.; Figura, C. C.; Moore, T. J. T.; Hoare, M. G.; Lumsden, S. L.; Mottram, J. C.; Thompson, M. A.; Oudmaijer, R. D.

    2014-01-01

    We have used the well-selected sample of ˜1750 embedded, young, massive stars identified by the Red MSX Source (RMS) survey to investigate the Galactic distribution of recent massive star formation. We present molecular line observations for ˜800 sources without existing radial velocities. We describe the various methods used to assign distances extracted from the literature and solve the distance ambiguities towards approximately 200 sources located within the solar circle using archival H I data. These distances are used to calculate bolometric luminosities and estimate the survey completeness (˜2 × 10^4 L⊙). In total, we calculate the distance and luminosity of ˜1650 sources, one third of which are above the survey's completeness threshold. Examination of the sample's longitude, latitude, radial velocities and mid-infrared images has identified ˜120 small groups of sources, many of which are associated with well-known star formation complexes, such as G305, G333, W31, W43, W49 and W51. We compare the positional distribution of the sample with the expected locations of the spiral arms, assuming a model of the Galaxy consisting of four gaseous arms. The distribution of young massive stars in the Milky Way is spatially correlated with the spiral arms, with strong peaks in the source position and luminosity distributions at the arms' Galactocentric radii. The overall source and luminosity surface densities are both well correlated with the surface density of the molecular gas, which suggests that the massive star formation rate per unit molecular mass is approximately constant across the Galaxy. A comparison of the distribution of molecular gas and the young massive stars to that in other nearby spiral galaxies shows similar radial dependences. We estimate the total luminosity of the embedded massive star population to be ˜0.76 × 10^8 L⊙, 30 per cent of which is associated with the 10 most active star-forming complexes.
We measure the scaleheight as a function of the Galactocentric distance and find that it increases only modestly from ˜20-30 pc between 4 and 8 kpc, but much more rapidly at larger distances.

  11. Objectively combining AR5 instrumental period and paleoclimate climate sensitivity evidence

    NASA Astrophysics Data System (ADS)

    Lewis, Nicholas; Grünwald, Peter

    2018-03-01

    Combining instrumental period evidence regarding equilibrium climate sensitivity with largely independent paleoclimate proxy evidence should enable a more constrained sensitivity estimate to be obtained. Previous, subjective Bayesian approaches involved selection of a prior probability distribution reflecting the investigators' beliefs about climate sensitivity. Here a recently developed approach employing two different statistical methods (objective Bayesian and frequentist likelihood ratio) is used to combine instrumental period and paleoclimate evidence based on data presented and assessments made in the IPCC Fifth Assessment Report. Probabilistic estimates from each source of evidence are represented by posterior probability density functions (PDFs) of physically appropriate form that can be uniquely factored into a likelihood function and a noninformative prior distribution. The three-parameter form is shown to fit a wide range of estimated climate sensitivity PDFs accurately. The likelihood functions relating to the probabilistic estimates from the two sources are multiplicatively combined, and a prior is derived that is noninformative for inference from the combined evidence. A posterior PDF that incorporates the evidence from both sources is produced using a single-step approach, which avoids the order-dependency that would arise if Bayesian updating were used. Results are compared with an alternative approach using the frequentist signed root likelihood ratio method. Results from these two methods are effectively identical, and provide a 5-95% range for climate sensitivity of 1.1-4.05 K (median 1.87 K).
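    The multiplicative combination on a parameter grid can be sketched as follows. The two likelihood shapes and the Jeffreys-type placeholder prior are illustrative assumptions, not the paper's fitted three-parameter forms or its derived noninformative prior.

```python
import numpy as np

# Grid over climate sensitivity S (K); the two likelihoods below are
# illustrative stand-ins for the instrumental and paleoclimate evidence.
s = np.linspace(0.1, 10.0, 2000)
like_instr = np.exp(-0.5 * ((1.0 / s - 1.0 / 2.0) / 0.15) ** 2)  # skewed in S
like_paleo = np.exp(-0.5 * ((s - 2.5) / 1.0) ** 2)

# Multiply the likelihoods (independent evidence), apply a prior, normalize.
# A 1/S prior is used purely as a placeholder for the noninformative prior
# the paper derives for the combined inference.
prior = 1.0 / s
ds = s[1] - s[0]
post = like_instr * like_paleo * prior
post /= post.sum() * ds

# 5%, 50% and 95% quantiles from the posterior CDF.
cdf = np.cumsum(post) * ds
lo, med, hi = np.interp([0.05, 0.5, 0.95], cdf, s)
print(lo, med, hi)
```

    The single-step structure is visible here: both likelihoods enter one normalization, so the result cannot depend on the order in which the evidence is "updated".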

  12. Locating Microseism Sources Using Spurious Arrivals in Intercontinental Noise Correlations

    NASA Astrophysics Data System (ADS)

    Retailleau, Lise; Boué, Pierre; Stehly, Laurent; Campillo, Michel

    2017-10-01

    The accuracy of Green's functions retrieved from seismic noise correlations in the microseism frequency band is limited by the uneven distribution of microseism sources at the surface of the Earth. As a result, correlation functions are often biased as compared to the expected Green's functions, and they can include spurious arrivals. These spurious arrivals are seismic arrivals that are visible on the correlation and do not belong to the theoretical impulse response. In this article, we propose to use Rayleigh wave spurious arrivals detected on correlation functions computed between European and United States seismic stations to locate microseism sources in the Atlantic Ocean. We perform a slant stack on a time distance gather of correlations obtained from an array of stations that comprises a regional deployment and a distant station. The arrival times and the apparent slowness of the spurious arrivals lead to the location of their source, which is obtained through a grid search procedure. We discuss improvements in the location through this methodology as compared to classical back projection of microseism energy. This method is interesting because it only requires an array and a distant station on each side of an ocean, conditions that can be met relatively easily.
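    The slant stack at the heart of the location step can be sketched on a synthetic time-distance gather; the array geometry, slowness, and pulse shape below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n_sta, n_t, dt = 16, 500, 0.5    # stations, samples, sampling interval (s)
dists = np.arange(n_sta) * 10.0  # station offsets (km), illustrative
true_slow = 0.3                  # apparent slowness (s/km)

# Build a time-distance gather: a surface-wave-like pulse moving out at
# constant slowness, plus noise (a stand-in for the correlation functions).
t = np.arange(n_t) * dt
gather = np.array([np.exp(-0.5 * ((t - 50.0 - p * true_slow) / 2.0) ** 2)
                   for p in dists]) + 0.05 * rng.normal(size=(n_sta, n_t))

# Slant stack: for each trial slowness, shift traces by slowness * offset
# and stack; the stack energy peaks at the true apparent slowness.
slows = np.linspace(0.0, 0.6, 61)
energy = np.empty_like(slows)
for i, sl in enumerate(slows):
    shifted = [np.interp(t, t - sl * p, tr) for p, tr in zip(dists, gather)]
    energy[i] = (np.sum(shifted, axis=0) ** 2).max()

best = slows[np.argmax(energy)]
print(best)
```

    In the study, the arrival time and apparent slowness recovered this way for a spurious arrival feed the grid search that locates its microseism source.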

  13. GLACiAR, an Open-Source Python Tool for Simulations of Source Recovery and Completeness in Galaxy Surveys

    NASA Astrophysics Data System (ADS)

    Carrasco, D.; Trenti, M.; Mutch, S.; Oesch, P. A.

    2018-06-01

    The luminosity function is a fundamental observable for characterising how galaxies form and evolve throughout the cosmic history. One key ingredient in deriving this measurement from the number counts in a survey is the characterisation of the completeness and redshift selection functions for the observations. In this paper, we present GLACiAR, an open-source Python tool, available on GitHub, to estimate the completeness and selection functions in galaxy surveys. The code is tailored for multiband imaging surveys aimed at searching for high-redshift galaxies through the Lyman-break technique, but it can be applied broadly. The code generates artificial galaxies that follow Sérsic profiles with different indices and with customisable size, redshift, and spectral energy distribution properties, adds them to input images, and measures the recovery rate. To illustrate this new software tool, we apply it to quantify the completeness and redshift selection functions for J-dropout sources (redshift z ∼ 10 galaxies) in the Hubble Space Telescope Brightest of Reionizing Galaxies Survey. Our comparison with a previous completeness analysis on the same dataset shows overall agreement, but also highlights how different modelling assumptions for the artificial sources can impact completeness estimates.

  14. Tropical Gravity Wave Momentum Fluxes and Latent Heating Distributions

    NASA Technical Reports Server (NTRS)

    Geller, Marvin A.; Zhou, Tiehan; Love, Peter T.

    2015-01-01

    Recent satellite determinations of global distributions of absolute gravity wave (GW) momentum fluxes in the lower stratosphere show maxima over the summer subtropical continents and little evidence of GW momentum fluxes associated with the intertropical convergence zone (ITCZ). This seems to be at odds with parameterizations for GW momentum fluxes, where the source is a function of latent heating rates, which are largest in the region of the ITCZ in terms of monthly averages. The authors have examined global distributions of atmospheric latent heating, cloud-top-pressure altitudes, and lower-stratosphere absolute GW momentum fluxes and have found that monthly averages of the lower-stratosphere GW momentum fluxes more closely resemble the monthly mean cloud-top altitudes rather than the monthly mean rates of latent heating. These regions of highest cloud-top altitudes occur when rates of latent heating are largest on the time scale of cloud growth. This, plus previously published studies, suggests that convective sources for stratospheric GW momentum fluxes, being a function of the rate of latent heating, will require either a climate model to correctly model this rate of latent heating or some ad hoc adjustments to account for shortcomings in a climate model's land-sea differences in convective latent heating.

  15. Spatiotemporal Dependency of Age-Related Changes in Brain Signal Variability

    PubMed Central

    McIntosh, A. R.; Vakorin, V.; Kovacevic, N.; Wang, H.; Diaconescu, A.; Protzner, A. B.

    2014-01-01

    Recent theoretical and empirical work has focused on the variability of network dynamics in maturation. Such variability seems to reflect the spontaneous formation and dissolution of different functional networks. We sought to extend these observations into healthy aging. Two different data sets, one EEG (total n = 48, ages 18–72) and one magnetoencephalography (n = 31, ages 20–75), were analyzed for such spatiotemporal dependency using multiscale entropy (MSE) from regional brain sources. In both data sets, the changes in MSE were timescale dependent, with higher entropy at fine scales and lower entropy at more coarse scales with greater age. The signals were parsed further into local entropy, related to information processed within a regional source, and distributed entropy (information shared between two sources, i.e., functional connectivity). Local entropy increased for most regions, whereas the dominant change in distributed entropy was age-related reductions across hemispheres. These data further the understanding of changes in brain signal variability across the lifespan, suggesting an inverted U-shaped curve, but with an important qualifier. Unlike earlier in maturation, where the changes are more widespread, changes in adulthood show strong spatiotemporal dependence. PMID:23395850
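    Multiscale entropy pairs coarse-graining with sample entropy. A compact sketch follows; the parameter choices (m, tolerance fraction) and the white-noise test signal are illustrative, not the study's EEG/MEG pipeline.

```python
import numpy as np

rng = np.random.default_rng(6)

def sample_entropy(x, m=2, r_frac=0.2):
    """SampEn: -log of the conditional probability that sequences matching
    for m points also match for m+1 points (tolerance r = r_frac * std)."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()
    def pair_count(mm):
        emb = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.abs(emb[:, None, :] - emb[None, :, :]).max(axis=2)
        return ((d <= r).sum() - len(emb)) / 2  # exclude self-matches
    b, a = pair_count(m), pair_count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def coarse_grain(x, scale):
    """Average non-overlapping windows of length `scale`."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

# MSE curve for a white-noise stand-in for a regional source signal;
# for white noise the entropy is expected to fall as the scale coarsens.
signal = rng.normal(size=1200)
mse = [sample_entropy(coarse_grain(signal, s)) for s in (1, 2, 4, 8)]
print(mse)
```

    Plotting such curves per source, and per subject age, is the kind of timescale-resolved comparison the abstract describes.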

  16. Stochastic description of geometric phase for polarized waves in random media

    NASA Astrophysics Data System (ADS)

    Boulanger, Jérémie; Le Bihan, Nicolas; Rossetto, Vincent

    2013-01-01

    We present a stochastic description of multiple scattering of polarized waves in the regime of forward scattering. In this regime, if the source is polarized, polarization survives along a few transport mean free paths, making it possible to measure an outgoing polarization distribution. We consider thin scattering media illuminated by a polarized source and compute the probability distribution function of the polarization on the exit surface. We solve the direct problem using compound Poisson processes on the rotation group SO(3) and non-commutative harmonic analysis. We obtain an exact expression for the polarization distribution which generalizes previous works and design an algorithm solving the inverse problem of estimating the scattering properties of the medium from the measured polarization distribution. This technique applies to thin disordered layers, spatially fluctuating media and multiple scattering systems and is based on the polarization but not on the signal amplitude. We suggest that it can be used as a non-invasive testing method.

  17. The energetic ion signature of an O-type neutral line in the geomagnetic tail

    NASA Technical Reports Server (NTRS)

    Martin, R. F., Jr.; Johnson, D. F.; Speiser, T. W.

    1991-01-01

    An energetic ion signature is presented which has the potential for remote sensing of an O-type neutral line embedded in a current sheet. A source plasma with a tailward-flowing Kappa distribution yields a strongly non-Kappa distribution after interacting with the neutral line: sharp jumps, or ridges, occur in the velocity space distribution function f(v⊥, v∥), associated with both increases and decreases in f. The jumps occur when orbits are reversed in the x-direction: a reversal causing initially earthward particles (low probability in the source distribution) to be observed results in a decrease in f, while a reversal causing initially tailward particles to be observed produces an increase in f. The reversals, and hence the jumps, occur at approximately constant values of the perpendicular velocity in both the positive-v∥ and negative-v∥ half planes. The results were obtained using single-particle simulations in a fixed magnetic field model.

  18. Energy and mass-charge distribution peculiarities of ions emitted from a Penning source

    NASA Astrophysics Data System (ADS)

    Mamedov, N. V.; Kolodko, D. V.; Sorokin, I. A.; Kanshin, I. A.; Sinelnikov, D. N.

    2017-05-01

    The optimization of hydrogen Penning sources, used in particular for plasma chemical processing of materials and DLC deposition, remains an important task. Investigations of the mass-charge composition of the beams emitted by these ion sources are particularly relevant for miniature linear accelerators (neutron generators). The energy and mass-charge ion distributions of a Penning ion source are presented. A relation is shown between abrupt jumps of the discharge current, an increase of the plasma density in the discharge center, and an increase of the potential whipping (up to 50% of the anode voltage). Energy spectra in different discharge modes are also presented as functions of pressure and anode potential. It is found that the atomic hydrogen ion fraction is about 5-10%; it depends only weakly on the pressure and the discharge current (over the investigated ranges of 1 to 10 mTorr and 50 to 1000 μA) and increases with the anode voltage (from 1 to 3.5 kV).

  19. A Composite Source Model With Fractal Subevent Size Distribution

    NASA Astrophysics Data System (ADS)

    Burjanek, J.; Zahradnik, J.

    A composite source model, incorporating subevents of different sizes, provides a possible description of complex rupture processes during earthquakes. The number of subevents with characteristic dimension greater than R is proportional to R^-2. The subevents do not overlap with each other, and the sum of their areas equals the area of the target event (e.g., the mainshock). The subevents are distributed randomly over the fault. Each subevent is modeled as a finite source, using a kinematic approach (radial rupture propagation, constant rupture velocity, boxcar slip-velocity function with constant rise time on the subevent). The final slip of each subevent is related to its characteristic dimension using constant stress-drop scaling. The variation of rise time with subevent size is a free parameter of the modeling. The nucleation point of each subevent is taken as the point closest to the mainshock hypocentre. The synthetic Green's functions are calculated by the discrete-wavenumber method in a 1D horizontally layered crustal model on a relatively coarse grid of points covering the fault plane. The Green's functions needed for the kinematic model on a fine grid are obtained by cubic spline interpolation. As different frequencies may be efficiently calculated with different sampling, the interpolation simplifies and speeds up the procedure significantly. The composite source model described above allows interpretation in terms of a kinematic model with non-uniform spatial distributions of final slip and rupture velocity. The 1994 Northridge earthquake (Mw = 6.7) is used as a validation event. Strong-ground-motion modeling of the 1999 Athens earthquake (Mw = 5.9) is also performed.
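    The subevent population described above (N(>R) ∝ R^-2, with areas summing to the target rupture area) can be generated by inverse-CDF sampling of the implied radius density p(R) ∝ R^-3. This is a sketch under the stated scaling only; the radius bounds and target area are free parameters, and the spatial placement/non-overlap step is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

def draw_subevent_radii(target_area, r_min, r_max):
    """Draw subevent radii with N(>R) proportional to R^-2, i.e. a density
    p(R) proportional to R^-3 on [r_min, r_max], adding subevents until
    their summed circular areas fill the target rupture area."""
    radii = []
    area = 0.0
    while area < target_area:
        u = rng.random()
        # inverse CDF of p(R) ~ R^-3:  R = (r_min^-2 + u*(r_max^-2 - r_min^-2))^-1/2
        r = (r_min**-2 + u * (r_max**-2 - r_min**-2)) ** -0.5
        radii.append(r)
        area += np.pi * r**2
    return np.array(radii)
```

Most draws are small subevents, while the few large ones carry most of the area, as the fractal scaling requires.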

  20. Evaluation of a Proposed Biodegradable 188Re Source for Brachytherapy Application

    PubMed Central

    Khorshidi, Abdollah; Ahmadinejad, Marjan; Hamed Hosseini, S.

    2015-01-01

    Abstract This study aimed to evaluate, using Monte Carlo (MC) simulations, the dosimetric characteristics of a proposed beta-emitting bioglass 188Re seed for internal radiotherapy applications. The bioactive glass seed was developed using the sol-gel technique. Simulations were performed with an MC radiation transport code to investigate the dosimetric factors recommended by AAPM Task Group 60 (TG-60). Dose distributions due to the beta and photon radiation were predicted at different radial distances surrounding the source. The dose rate in water at the reference point was calculated to be 7.43 ± 0.5 cGy/h/μCi. The dosimetric factors, consisting of the reference point dose rate D(r0,θ0), the radial dose function g(r), the 2-dimensional anisotropy function F(r,θ), the 1-dimensional anisotropy function φan(r), and the R90 quantity, were estimated and compared with several available beta-emitting sources. The element 188Re incorporated in bioactive glasses produced by the sol-gel technique offers a suitable route to new seed-implant materials for brachytherapy of prostate and liver cancers. The dose distribution of the 188Re seed was more isotropic than those of other commercially available encapsulated seeds, since it has no end weld to attenuate radiation. The beta-emitting 188Re source delivers a high local dose to the tumor tissue, while the short range of the beta particles limits damage to the adjacent normal tissue. PMID:26181543

  1. Do gamma-ray burst sources repeat?

    NASA Technical Reports Server (NTRS)

    Meegan, C. A.; Hartmann, D. H.; Brainerd, J. J.; Briggs, M.; Paciesas, W. S.; Pendleton, G.; Kouveliotou, C.; Fishman, G.; Blumenthal, G.; Brock, M.

    1994-01-01

    The demonstration of repeated gamma-ray bursts from an individual source would severely constrain burst source models. Recent reports of evidence for repetition in the first BATSE burst catalog have generated renewed interest in this issue. Here, we analyze the angular distribution of 585 bursts of the second BATSE catalog (Meegan et al. 1994). We search for evidence of burst recurrence using the nearest and farthest neighbor statistics and the two-point angular correlation function. We find the data to be consistent with the hypothesis that burst sources do not repeat; however, a repeater fraction of up to about 20% of the bursts cannot be excluded.

  2. Collective odor source estimation and search in time-variant airflow environments using mobile robots.

    PubMed

    Meng, Qing-Hao; Yang, Wei-Xing; Wang, Yang; Zeng, Ming

    2011-01-01

    This paper addresses the collective odor source localization (OSL) problem in a time-varying airflow environment using mobile robots. A novel OSL methodology which combines odor-source probability estimation and multiple robots' search is proposed. The estimation phase consists of two steps: firstly, the separate probability-distribution map of odor source is estimated via Bayesian rules and fuzzy inference based on a single robot's detection events; secondly, the separate maps estimated by different robots at different times are fused into a combined map by way of distance based superposition. The multi-robot search behaviors are coordinated via a particle swarm optimization algorithm, where the estimated odor-source probability distribution is used to express the fitness functions. In the process of OSL, the estimation phase provides the prior knowledge for the searching while the searching verifies the estimation results, and both phases are implemented iteratively. The results of simulations for large-scale advection-diffusion plume environments and experiments using real robots in an indoor airflow environment validate the feasibility and robustness of the proposed OSL method.

  3. Collective Odor Source Estimation and Search in Time-Variant Airflow Environments Using Mobile Robots

    PubMed Central

    Meng, Qing-Hao; Yang, Wei-Xing; Wang, Yang; Zeng, Ming

    2011-01-01

    This paper addresses the collective odor source localization (OSL) problem in a time-varying airflow environment using mobile robots. A novel OSL methodology which combines odor-source probability estimation and multiple robots’ search is proposed. The estimation phase consists of two steps: firstly, the separate probability-distribution map of odor source is estimated via Bayesian rules and fuzzy inference based on a single robot’s detection events; secondly, the separate maps estimated by different robots at different times are fused into a combined map by way of distance based superposition. The multi-robot search behaviors are coordinated via a particle swarm optimization algorithm, where the estimated odor-source probability distribution is used to express the fitness functions. In the process of OSL, the estimation phase provides the prior knowledge for the searching while the searching verifies the estimation results, and both phases are implemented iteratively. The results of simulations for large-scale advection–diffusion plume environments and experiments using real robots in an indoor airflow environment validate the feasibility and robustness of the proposed OSL method. PMID:22346650
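    The search phase of the method above can be sketched as a plain particle swarm optimization in which the estimated source-probability map supplies the fitness. The Gaussian `source_probability` map, the swarm parameters, and the source position (7, 3) are all stand-in assumptions for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def source_probability(p):
    """Hypothetical odor-source probability map (stand-in for the fused
    Bayesian/fuzzy estimate): a Gaussian bump at an assumed source (7, 3)."""
    return np.exp(-((p[..., 0] - 7.0)**2 + (p[..., 1] - 3.0)**2) / 4.0)

def pso_search(n_robots=5, n_iter=60, w=0.7, c1=1.5, c2=1.5):
    """Particle swarm optimization: robot positions are driven toward the
    maximum of the probability map via personal-best and global-best terms."""
    pos = rng.uniform(0.0, 10.0, size=(n_robots, 2))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = source_probability(pbest)
    gbest = pbest[pbest_val.argmax()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_robots, 1))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        val = source_probability(pos)
        improved = val > pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = val[improved]
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest
```

In the paper's iterative scheme the map itself is re-estimated between search steps; here it is frozen so the PSO mechanics stand alone.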

  4. Seismic interferometry by multidimensional deconvolution as a means to compensate for anisotropic illumination

    NASA Astrophysics Data System (ADS)

    Wapenaar, K.; van der Neut, J.; Ruigrok, E.; Draganov, D.; Hunziker, J.; Slob, E.; Thorbecke, J.; Snieder, R.

    2008-12-01

    It is well-known that under specific conditions the crosscorrelation of wavefields observed at two receivers yields the impulse response between these receivers. This principle is known as 'Green's function retrieval' or 'seismic interferometry'. Recently it has been recognized that in many situations it can be advantageous to replace the correlation process by deconvolution. One of the advantages is that deconvolution compensates for the waveform emitted by the source; another advantage is that it is not necessary to assume that the medium is lossless. The approaches that have been developed to date employ a 1D deconvolution process. We propose a method for seismic interferometry by multidimensional deconvolution and show that under specific circumstances the method compensates for irregularities in the source distribution. This is an important difference with crosscorrelation methods, which rely on the condition that waves are equipartitioned. This condition is for example fulfilled when the sources are regularly distributed along a closed surface and the power spectra of the sources are identical. The proposed multidimensional deconvolution method compensates for anisotropic illumination, without requiring knowledge about the positions and the spectra of the sources.
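    A 1D frequency-domain toy example (not the multidimensional scheme itself) of why deconvolution compensates for the source waveform while correlation retains it: both operations are applied to two synthetic recordings that differ by a pure propagation delay. The wavelet, the 30-sample delay, and the regularization constant are arbitrary choices.

```python
import numpy as np

def interferometry_1d(u_a, u_b, eps=1e-3):
    """Compare crosscorrelation and water-level-regularized deconvolution
    of two recordings in the frequency domain."""
    Ua, Ub = np.fft.rfft(u_a), np.fft.rfft(u_b)
    corr = np.fft.irfft(Ub * np.conj(Ua), n=len(u_a))
    stab = eps * np.max(np.abs(Ua))**2          # water-level stabilization
    deco = np.fft.irfft(Ub * np.conj(Ua) / (np.abs(Ua)**2 + stab), n=len(u_a))
    return corr, deco

# Synthetic test: a band-limited wavelet recorded at A, and at B after a
# 30-sample propagation delay.
n = 512
t = np.arange(n)
wavelet = np.exp(-0.5 * ((t - 40) / 5.0)**2) * np.cos(0.6 * (t - 40))
u_a = wavelet
u_b = np.roll(wavelet, 30)
corr, deco = interferometry_1d(u_a, u_b)
```

Both outputs peak at the 30-sample lag, but the deconvolved trace approaches a band-limited spike (the wavelet's spectrum cancels), whereas the correlation carries the wavelet's autocorrelation, i.e., it still depends on the source spectrum.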

  5. Towards a global-scale ambient noise cross-correlation data base

    NASA Astrophysics Data System (ADS)

    Ermert, Laura; Fichtner, Andreas; Sleeman, Reinoud

    2014-05-01

    We aim to obtain a global-scale data base of ambient seismic noise correlations. This database - to be made publicly available at ORFEUS - will enable us to study the distribution of microseismic and hum sources, and to perform multi-scale full waveform inversion for crustal and mantle structure. Ambient noise tomography has developed into a standard technique. According to theory, cross-correlations equal inter-station Green's functions only if the wave field is equipartitioned or the sources are isotropically distributed. In an attempt to circumvent these assumptions, we aim to investigate possibilities to directly model noise cross-correlations and invert for their sources using adjoint techniques. A data base containing correlations of 'gently' preprocessed noise, excluding preprocessing steps which are explicitly taken to reduce the influence of a non-isotropic source distribution like spectral whitening, is a key ingredient in this undertaking. Raw data are acquired from IRIS/FDSN and ORFEUS. We preprocess and correlate the time series using a tool based on the Python package Obspy which is run in parallel on a cluster of the Swiss National Supercomputing Centre. Correlation is done in two ways: Besides the classical cross-correlation function, the phase cross-correlation is calculated, which is an amplitude-independent measure of waveform similarity and therefore insensitive to high-energy events. Besides linear stacks of these correlations, instantaneous phase stacks are calculated which can be applied as optional weight, enhancing coherent portions of the traces and facilitating the emergence of a meaningful signal. The _STS1 virtual network by IRIS contains about 250 globally distributed stations, several of which have been operating for more than 20 years. 
It is the first data collection we will use for correlations in the hum frequency range, as the STS-1 instrument response is flat in the largest part of the period range where hum is observed, up to a period of about 300 seconds. Thus they provide us with the best-suited measurements for hum.

  6. Systematic Variability of the He+ Pickup Ion Velocity Distribution Function Observed with SOHO/CELIAS/CTOF

    NASA Astrophysics Data System (ADS)

    Taut, A.; Drews, C.; Berger, L.; Wimmer-Schweingruber, R. F.

    2015-12-01

    The 1D Velocity Distribution Function (VDF) of He+ pickup ions shows two distinct populations that reflect the sources of these ions. The highly suprathermal population is the result of the ionization and pickup of almost resting interstellar neutrals that are injected into the solar wind as a highly anisotropic torus distribution. The nearly thermalized population is centered around the solar wind bulk speed and is mainly attributed to inner-source pickup ions that originate in the inner heliosphere. It is generally believed that the initial torus distribution of interstellar pickup ions is rapidly isotropized by resonant wave-particle interactions, but recent observations by Drews et al. (2015) of a torus-like VDF strongly limit this isotropization. This in turn means that more observational data is needed to further characterize the kinetic behavior of pickup ions. In this study we use data from the Charge-Time-Of-Flight sensor on-board SOHO. As this sensor offers unrivaled counting statistics for He+ together with a sufficient mass-per-charge resolution it is well-suited for investigating the He+ VDF on comparatively short timescales. We combine this data with the high resolution magnetic field data from WIND via an extrapolation to the location of SOHO. With this combination of instruments we investigate the He+ VDF for time periods of different solar wind speeds, magnetic field directions, and wave power. We find a systematic trend of the short-term He+ VDF with these parameters. Especially by varying the considered magnetic field directions we observe a 1D projection of the anisotropic torus-like VDF. In addition, we investigate stream interaction regions and coronal mass ejections. In the latter we observe an excess of inner-source He+ that is accompanied by a significant increase of heavy pickup ion count rates. This may be linked to the as yet ill understood production mechanism of inner-source pickup ions.

  7. The Green's function in a channel with a sound-absorbing cover in the case of a uniform flow

    NASA Astrophysics Data System (ADS)

    Sobolev, A. F.

    2012-07-01

    We study the modal structure of the acoustic field of a point source as a function of the channel wall admittance in the case of a two-dimensional channel. The characteristic equation determining the eigenvalues of the boundary problem is studied as a function of the admittance, which varies over the entire complex plane. All modes, without exception, existing in the channel and forming the source field are classified based on the obtained topography of the characteristic equation. Expressions describing the amplitudes and spatial distribution of the hydrodynamic modes and their attenuation rate (for stable modes) or growth increment (for unstable modes) are obtained as functions of the wall admittance and flow velocity. It is shown that, in addition to the hydrodynamically unstable modes existing downstream from the source, hydrodynamically unstable modes can exist upstream from the source; they appear only when the admittance has an elastic character. It is shown that hydrodynamic modes are excited only when the source is located close to the wall or on the wall. The amplitude of these modes decreases exponentially with distance from the wall.

  8. Estimated Accuracy of Three Common Trajectory Statistical Methods

    NASA Technical Reports Server (NTRS)

    Kabashnikov, Vitaliy P.; Chaikovsky, Anatoli P.; Kucsera, Tom L.; Metelskaya, Natalia S.

    2011-01-01

    Three well-known trajectory statistical methods (TSMs), namely the concentration field (CF), concentration weighted trajectory (CWT), and potential source contribution function (PSCF) methods, were tested using known sources and artificially generated data sets to determine the ability of TSMs to reproduce the spatial distribution of the sources. In work by other authors, the accuracy of trajectory statistical methods was estimated for particular species and at specified receptor locations. We have obtained a more general statistical estimate of the accuracy of source reconstruction and have found optimum conditions for reconstructing source distributions of atmospheric trace substances. Only virtual pollutants of the primary type were considered. In real-world experiments, TSMs are intended for application to a priori unknown sources. Therefore, the accuracy of TSMs has to be tested with all possible spatial distributions of sources. An ensemble of geographical distributions of virtual sources was generated. Spearman's rank-order correlation coefficient between the spatial distributions of the known virtual sources and the reconstructed sources was taken as a quantitative measure of accuracy. Statistical estimates of the mean correlation coefficient and of the range of the most probable correlation values were obtained. All the TSMs considered here showed similar, close results. The maximum of the ratio of the mean correlation to the width of the correlation interval containing the most probable correlation values determines the optimum conditions for reconstruction. An optimal geographical domain roughly coincides with the area supplying most of the substance to the receptor. The optimal domain's size depends on the substance decay time. Under optimum reconstruction conditions, the mean correlation coefficients can reach 0.70–0.75. The boundaries of the interval with the most probable correlation values are 0.6–0.9 for a decay time of 240 h and 0.5–0.95 for a decay time of 12 h. The best results of source reconstruction can be expected for trace substances with a decay time on the order of several days. Although the methods considered in this paper do not guarantee high accuracy, they are computationally simple and fast. Using the TSMs in optimum conditions and taking the range of uncertainties into account, one can obtain a first hint of potential source areas.
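    The accuracy measure used above is straightforward to compute. A minimal tie-free implementation of Spearman's rank-order correlation (the Pearson correlation of the ranks), applicable to flattened source maps:

```python
import numpy as np

def spearman(a, b):
    """Spearman rank-order correlation for arrays with distinct values
    (no tie handling): Pearson correlation of the ranks."""
    ra = np.argsort(np.argsort(np.asarray(a, dtype=float)))
    rb = np.argsort(np.argsort(np.asarray(b, dtype=float)))
    ra = ra - ra.mean()
    rb = rb - rb.mean()
    return float((ra * rb).sum() / np.sqrt((ra**2).sum() * (rb**2).sum()))
```

Because it depends only on ranks, the score is insensitive to the monotone amplitude distortions that TSMs typically introduce, which is presumably why it suits comparing known and reconstructed source fields.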

  9. InterProScan 5: genome-scale protein function classification

    PubMed Central

    Jones, Philip; Binns, David; Chang, Hsin-Yu; Fraser, Matthew; Li, Weizhong; McAnulla, Craig; McWilliam, Hamish; Maslen, John; Mitchell, Alex; Nuka, Gift; Pesseat, Sebastien; Quinn, Antony F.; Sangrador-Vegas, Amaia; Scheremetjew, Maxim; Yong, Siew-Yit; Lopez, Rodrigo; Hunter, Sarah

    2014-01-01

    Motivation: Robust large-scale sequence analysis is a major challenge in modern genomic science, where biologists frequently need to characterize many millions of sequences. Here, we describe a new Java-based architecture for the widely used protein function prediction software package InterProScan. Developments include improvements and additions to the outputs of the software and a complete reimplementation of the software framework, resulting in a flexible and stable system that is able to use both multiprocessor machines and conventional clusters to achieve scalable distributed data analysis. InterProScan is freely available for download from the EMBL-EBI FTP site and the open source code is hosted at Google Code. Availability and implementation: InterProScan is distributed via FTP at ftp://ftp.ebi.ac.uk/pub/software/unix/iprscan/5/ and the source code is available from http://code.google.com/p/interproscan/. Contact: http://www.ebi.ac.uk/support or interhelp@ebi.ac.uk or mitchell@ebi.ac.uk PMID:24451626

  10. Solar Wind Halo Formation by the Scattering of the Strahl via Direct Cluster/PEACE Observations of the 3D Velocity Distribution Function

    NASA Technical Reports Server (NTRS)

    Figueroa-Vinas, Adolfo; Gurgiolo, Chris A.; Nieves-Chinchilla, Teresa; Goldstein, Melvyn L.

    2010-01-01

    It has been suggested by a number of authors that the solar wind electron halo can be formed by the scattering of the strahl. On frequent occasions we have observed, in electron angular skymaps (Phi/Theta plots) of the electron 3D velocity distribution functions, a bursty filament of particles connecting the strahl to the solar wind core-halo. These are seen over a very limited energy range. When the magnetic field is well off the nominal solar wind flow direction, such filaments are inconsistent with any local forces and are probably the result of strong scattering. Furthermore, the observations indicate that the strahl component is frequently and significantly anisotropic (T⊥/T∥ ≈ 2). This provides a free energy source for the excitation of whistler waves, a possible scattering mechanism. The empirical observational evidence connecting the halo and the strahl suggests that the strahl population may be, at least in part, the source of the halo component.

  11. Open-source Framework for Storing and Manipulation of Plasma Chemical Reaction Data

    NASA Astrophysics Data System (ADS)

    Jenkins, T. G.; Averkin, S. N.; Cary, J. R.; Kruger, S. E.

    2017-10-01

    We present a new open-source framework for the storage and manipulation of plasma chemical reaction data that has emerged from our in-house project MUNCHKIN. The framework consists of Python scripts and C++ programs. It stores data in an SQL database for fast retrieval and manipulation. For example, it is possible to fit cross-section data to the most widely used analytical expressions, calculate reaction rates for Maxwellian distribution functions of the colliding particles, and fit them to different analytical expressions. Another important feature of this framework is the ability to calculate transport properties based on the cross-section data and supplied distribution functions. In addition, the framework allows the export of chemical reaction descriptions in LaTeX format for ease of inclusion in scientific papers. With the help of this framework it is possible to generate corresponding VSim (Particle-In-Cell simulation code) and USim (unstructured multi-fluid code) input blocks with the appropriate cross sections.
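    The "reaction rates for Maxwellian distribution functions" step can be sketched as a quadrature over tabulated cross sections, k(T) = √(8/πm) (kT)^(-3/2) ∫ σ(E) E e^(-E/kT) dE. This is a generic textbook formula, not MUNCHKIN's implementation; the constant cross section, temperature, and electron mass in the check below are illustrative.

```python
import numpy as np

def rate_coefficient(energy_eV, sigma_m2, T_eV, mass_kg):
    """Maxwellian rate coefficient <sigma*v> (m^3/s) from tabulated cross
    sections, via the trapezoidal rule on the supplied energy grid:
    k(T) = sqrt(8/(pi*m)) * (kT)^(-3/2) * Int sigma(E) * E * exp(-E/kT) dE."""
    e = 1.602176634e-19                              # J per eV
    E = np.asarray(energy_eV, dtype=float) * e
    sig = np.asarray(sigma_m2, dtype=float)
    kT = T_eV * e
    f = sig * E * np.exp(-E / kT)
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(E))
    return np.sqrt(8.0 / (np.pi * mass_kg)) * kT**-1.5 * integral
```

For a constant cross section the result must reduce to σ times the Maxwellian mean speed √(8kT/πm), which provides a convenient self-check.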

  12. A solution of the monoenergetic neutral particle transport equation for adjacent half-spaces with anisotropic scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ganapol, B.D., E-mail: ganapol@cowboy.ame.arizona.edu; Mostacci, D.; Previti, A.

    2016-07-01

    We present highly accurate solutions to the neutral particle transport equation in a half-space. While our initial motivation was a response to a recently published solution based on Chandrasekhar's H-function, the presentation to follow has taken on a more comprehensive tone. The solution by H-functions certainly achieved high accuracy but was limited to isotropic scattering and emission from spatially uniform and linear sources. Moreover, the overly complicated nature of the H-function approach strongly suggests that its extension to anisotropic scattering and general sources is not at all practical. For this reason, an all-encompassing theory for the determination of highly precise benchmarks, including anisotropic scattering for a variety of spatial source distributions, is presented for particle transport in a half-space. We illustrate the approach via a collection of cases, including tables of 7-place flux benchmarks to guide transport methods developers. The solution presented can be applied to a considerable number of one and two half-space transport problems with variable sources and represents a state-of-the-art benchmark solution.

  13. Activation Time of Cardiac Tissue In Response to a Linear Array of Spatial Alternating Bipolar Electrodes

    NASA Astrophysics Data System (ADS)

    Mashburn, David; Wikswo, John

    2007-11-01

    Prevailing theories about the response of the heart to high-field shocks predict that local regions of high resistivity distributed throughout the heart create multiple small virtual electrodes that hyperpolarize or depolarize tissue and lead to widespread activation. This resetting of bulk tissue is responsible for the successful functioning of cardiac defibrillators. By activating cardiac tissue with regular linear arrays of spatially alternating bipolar currents, we can simulate these potentials locally. We have studied the activation time due to distributed currents both in a 1D Beeler-Reuter model and on the surface of the whole heart, varying the strength of each source and the separation between them. By comparison with activation-time data from actual field shocks of a whole heart in a bath, we hope to better understand these transient virtual electrodes. Our work was done on rabbit RV using fluorescent optical imaging and our Phased Array Stimulator to drive the 16 current sources. Our model shows that, for a given total current delivered to a region of tissue, the entire region activates faster if above-threshold sources are more distributed.

  14. Excitation of high-frequency electromagnetic waves by energetic electrons with a loss cone distribution in a field-aligned potential drop

    NASA Technical Reports Server (NTRS)

    Fung, Shing F.; Vinas, Adolfo F.

    1994-01-01

    The electron cyclotron maser instability (CMI) driven by momentum space anisotropy (∂f/∂p⊥ > 0) has been invoked to explain many aspects, such as the modes of propagation, harmonic emissions, and the source characteristics of the auroral kilometric radiation (AKR). Recent satellite observations of AKR sources indicate that the source regions are often embedded within the auroral acceleration region characterized by the presence of a field-aligned potential drop. In this paper we investigate the excitation of the fundamental extraordinary mode radiation due to the accelerated electrons. The momentum space distribution of these energetic electrons is modeled by a realistic upward loss cone as modified by the presence of a parallel potential drop below the observation point. On the basis of linear growth rate calculations we present the emission characteristics, such as the frequency spectrum and the emission angular distribution, as functions of the plasma parameters. We discuss the implications of our results for the generation of the AKR from the edges of the auroral density cavities.

  15. Thermal Conductivity of Single-Walled Carbon Nanotube with Internal Heat Source Studied by Molecular Dynamics Simulation

    NASA Astrophysics Data System (ADS)

    Li, Yuan-Wei; Cao, Bing-Yang

    2013-12-01

    The thermal conductivity of (5, 5) single-walled carbon nanotubes (SWNTs) with an internal heat source is investigated by using nonequilibrium molecular dynamics (NEMD) simulation incorporating uniform heat source and heat source-and-sink schemes. Compared with SWNTs without an internal heat source, i.e., by a fixed-temperature difference scheme, the thermal conductivity of SWNTs with an internal heat source is much lower, by as much as half in some cases, though it still increases with an increase of the tube length. Based on the theory of phonon dynamics, a function called the phonon free path distribution is defined to develop a simple one-dimensional heat conduction model considering an internal heat source, which can explain diffusive-ballistic heat transport in carbon nanotubes well.
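    One simple consequence of the uniform-heat-source scheme mentioned above: in the purely diffusive (Fourier) limit, a rod heated uniformly with both ends held cold develops a parabolic temperature profile, T(x) = T0 + q·x·(L−x)/(2k), so an effective conductivity can be read off a quadratic fit. This is a generic continuum sketch, not the paper's ballistic-corrected phonon model.

```python
import numpy as np

def fit_conductivity(x, T, q_vol):
    """Estimate thermal conductivity from a steady temperature profile in a
    uniformly heated rod (volumetric heating rate q_vol, ends held cold).
    Fourier's law gives T(x) = T0 + q_vol*x*(L-x)/(2k), so the quadratic
    coefficient a of a parabolic fit yields k = -q_vol/(2a)."""
    a = np.polyfit(np.asarray(x, float), np.asarray(T, float), 2)[0]
    return -q_vol / (2.0 * a)
```

In NEMD data, deviations from this parabola near the ends and a length-dependent fitted k are exactly the diffusive-ballistic signatures the phonon-free-path model is meant to capture.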

  16. Determining X-ray source intensity and confidence bounds in crowded fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Primini, F. A.; Kashyap, V. L., E-mail: fap@head.cfa.harvard.edu

    We present a rigorous description of the general problem of aperture photometry in high-energy astrophysics photon-count images, in which the statistical noise model is Poisson, not Gaussian. We compute the full posterior probability density function for the expected source intensity for various cases of interest, including the important cases in which both source and background apertures contain contributions from the source, and in which multiple source apertures partially overlap. A Bayesian approach offers the advantages of allowing one to (1) include explicit prior information on source intensities, (2) propagate posterior distributions as priors for future observations, and (3) use Poisson likelihoods, making the treatment valid in the low-counts regime. Elements of this approach have been implemented in the Chandra Source Catalog.
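    The core computation can be sketched as a brute-force grid marginalization; this is a deliberately simplified version (flat priors, no source contribution to the background aperture, uniform grids) of the kind of Poisson posterior the abstract describes, not the catalog's implementation.

```python
import numpy as np
from scipy.special import gammaln

def log_poisson(k, mu):
    """Log Poisson pmf for observed counts k and (array-valued) mean mu."""
    return k * np.log(mu) - mu - gammaln(k + 1)

def source_posterior(n_src, n_bkg, r, s_grid, b_grid):
    """Posterior density of the expected source counts s in the source
    aperture, with flat priors on s and on the per-source-aperture
    background rate b, which is marginalized numerically; r is the ratio
    of background-aperture area to source-aperture area."""
    S, B = np.meshgrid(s_grid, b_grid, indexing="ij")
    loglike = log_poisson(n_src, S + B) + log_poisson(n_bkg, r * B)
    like = np.exp(loglike - loglike.max())
    post = like.sum(axis=1)                  # marginalize over b
    ds = s_grid[1] - s_grid[0]
    return post / (post.sum() * ds)          # normalize as a density in s
```

Because the full density is retained, credible intervals remain meaningful even when the posterior is asymmetric or piles up near s = 0, which is where Gaussian aperture photometry breaks down.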

  17. Towards an accurate real-time locator of infrasonic sources

    NASA Astrophysics Data System (ADS)

    Pinsky, V.; Blom, P.; Polozov, A.; Marcillo, O.; Arrowsmith, S.; Hofstetter, A.

    2017-11-01

    Infrasonic signals propagate from an atmospheric source through media with stochastic, rapidly space-varying conditions. Hence, their travel time, the amplitude at sensor recordings, and even their manifestation in the so-called "shadow zones" are random. Therefore, the traditional least-squares technique for locating infrasonic sources is often not effective, and the problem of finding the best solution must be formulated in probabilistic terms. Recently, a series of papers has been published about the Bayesian Infrasonic Source Localization (BISL) method, based on the computation of the posterior probability density function (PPDF) of the source location as a convolution of an a priori probability distribution function (APDF) of the propagation model parameters with a likelihood function (LF) of the observations. The present study is devoted to the further development of BISL for higher accuracy and stability of the source location results and a lower computational load. We critically analyse previous algorithms and propose several new ones. First of all, we describe the general PPDF formulation and demonstrate that this relatively slow algorithm might be among the most accurate, provided adequate APDF and LF are used. Then, we suggest using summation instead of integration in the general PPDF calculation for increased robustness, which leads to a 3D space-time optimization problem. Two different forms of APDF approximation are considered and applied to the PPDF calculation in our study. One of them, previously suggested but not yet properly used, is the so-called "celerity-range histogram" (CRH). The other follows from previous findings of a linear mean travel time for the four first infrasonic phases in overlapping consecutive distance ranges.
This stochastic model is extended here to the regional distance of 1000 km, and the APDF introduced is the probabilistic form of the junction between this travel time model and range-dependent probability distributions of the phase arrival time picks. To illustrate the improvements in both computation time and location accuracy achieved, we compare location results for the new algorithms, previously published BISL-type algorithms and the least-squares location technique. This comparison is provided via a case study of different typical spatial data distributions and statistical experiment using the database of 36 ground-truth explosions from the Utah Test and Training Range (UTTR) recorded during the US summer season at USArray transportable seismic stations when they were near the site between 2006 and 2008.
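    A toy version of such a grid-based probabilistic locator, with Gaussian arrival-pick errors about a single fixed celerity and the unknown origin time removed via mean residuals; the station geometry, celerity, and error scale below are invented for illustration and stand in for the abstract's celerity-range statistics.

```python
import numpy as np

def locate_on_grid(stations, picks, celerity, sigma, x_grid, y_grid):
    """Grid-based location posterior.  Each candidate node implies an
    origin time t0_i = pick_i - range_i/celerity per station; assuming
    Gaussian pick errors (std sigma), the unknown common origin time is
    removed by taking residuals about the mean implied t0.  Returns a
    normalized posterior map over the grid."""
    X, Y = np.meshgrid(x_grid, y_grid, indexing="ij")
    implied_t0 = np.array([t - np.hypot(X - sx, Y - sy) / celerity
                           for (sx, sy), t in zip(stations, picks)])
    resid = implied_t0 - implied_t0.mean(axis=0)
    logp = -0.5 * (resid**2).sum(axis=0) / sigma**2
    p = np.exp(logp - logp.max())
    return p / p.sum()
```

Replacing the single Gaussian with range-dependent, multi-phase travel-time distributions is essentially what upgrades this toy into a BISL-type PPDF.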

  18. Light scattering regimes along the optical axis in turbid media

    NASA Astrophysics Data System (ADS)

    Campbell, S. D.; O'Connell, A. K.; Menon, S.; Su, Q.; Grobe, R.

    2006-12-01

    We inject an angularly collimated laser beam into a scattering medium of a nondairy creamer-water solution and examine the distribution of the scattered light along the optical axis as a function of the source-detector spacing. The experimental and simulated data obtained from a Monte Carlo simulation suggest four regimes characterizing the transition from unscattered to diffusive light. We also compare the data with theoretical predictions based on a first-order scattering theory for regions close to the source, and with diffusion-like theories for larger source-detector spacings. We demonstrate the impact of the measurement process and the effect of the unavoidable absorption of photons by the detection fiber on the light distribution inside the medium. We show that the range of validity of these theories can depend on experimental parameters such as the diameter and acceptance angle of the detection fiber.
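The first of these regimes, the ballistic one, is easy to reproduce with a minimal Monte Carlo sketch: the depth of a photon's first scattering event is exponentially distributed, so the unscattered fraction at a given axial distance follows the Beer-Lambert law. The mean free path and detector depth below are arbitrary illustrative values, not the paper's measured optical properties.

```python
import numpy as np

rng = np.random.default_rng(1)
n_photons = 50_000
mfp = 1.0     # scattering mean free path (arbitrary units), assumed
depth = 2.0   # axial position of the observation plane, in the same units

# Distance to the first scattering event is exponentially distributed,
# so photons with first_scatter > depth arrive unscattered (ballistic).
first_scatter = rng.exponential(mfp, n_photons)
unscattered_frac = np.mean(first_scatter > depth)

# Beer-Lambert prediction for comparison: exp(-depth / mfp)
expected = np.exp(-depth / mfp)
```

Extending this walk with repeated scattering events (and a fiber acceptance angle) is what distinguishes the later, diffusive regimes from this ballistic limit.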

  19. Evaluation of probabilistic forecasts with the scoringRules package

    NASA Astrophysics Data System (ADS)

    Jordan, Alexander; Krüger, Fabian; Lerch, Sebastian

    2017-04-01

    Over the last decades probabilistic forecasts in the form of predictive distributions have become popular in many scientific disciplines. With the proliferation of probabilistic models arises the need for decision-theoretically principled tools to evaluate the appropriateness of models and forecasts in a generalized way, in order to better understand sources of prediction errors and to improve the models. Proper scoring rules are functions S(F,y) which evaluate the accuracy of a forecast distribution F, given that an outcome y was observed. In line with decision-theoretic principles, they allow comparison of alternative models, a crucial ability given the variety of theories, data sources and statistical specifications that are available in many situations. This contribution presents the software package scoringRules for the statistical programming language R, which provides functions to compute popular scoring rules such as the continuous ranked probability score for a variety of distributions F that come up in applied work. For univariate variables, two main classes are parametric distributions like normal, t, or gamma distributions, and distributions that are not known analytically, but are indirectly described through a sample of simulation draws. For example, ensemble weather forecasts take this form. The scoringRules package aims to be a convenient dictionary-like reference for computing scoring rules. We offer state-of-the-art implementations of several known (but not routinely applied) formulas, and implement closed-form expressions that were previously unavailable. Whenever more than one implementation variant exists, we offer statistically principled default choices. Recent developments include the addition of scoring rules to evaluate multivariate forecast distributions. The use of the scoringRules package is illustrated in an example on post-processing ensemble forecasts of temperature.
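For a Gaussian forecast the continuous ranked probability score has a well-known closed form, of the kind scoringRules provides as crps_norm() in R. The Python port below is only an illustrative sketch of that formula, not part of the package.

```python
import math
from scipy.stats import norm

def crps_normal(mu, sigma, y):
    """Closed-form CRPS of a normal forecast N(mu, sigma^2) at outcome y.

    CRPS(F, y) = sigma * ( z*(2*Phi(z) - 1) + 2*phi(z) - 1/sqrt(pi) ),
    with z = (y - mu)/sigma; lower scores indicate better forecasts.
    """
    z = (y - mu) / sigma
    return sigma * (z * (2.0 * norm.cdf(z) - 1.0)
                    + 2.0 * norm.pdf(z) - 1.0 / math.sqrt(math.pi))

# A sharper forecast centred on the outcome scores better (lower):
# crps_normal(0, 1, 0) ~ 0.2337, while crps_normal(0, 2, 0) is twice that.
```

For sample-based (ensemble) forecasts the same score is instead computed from the empirical CDF of the draws, which is the second main class of distributions the package handles.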

  20. Galaxy and Mass Assembly (GAMA): Exploring the WISE Web in G12

    NASA Astrophysics Data System (ADS)

    Jarrett, T. H.; Cluver, M. E.; Magoulas, C.; Bilicki, M.; Alpaslan, M.; Bland-Hawthorn, J.; Brough, S.; Brown, M. J. I.; Croom, S.; Driver, S.; Holwerda, B. W.; Hopkins, A. M.; Loveday, J.; Norberg, P.; Peacock, J. A.; Popescu, C. C.; Sadler, E. M.; Taylor, E. N.; Tuffs, R. J.; Wang, L.

    2017-02-01

    We present an analysis of the mid-infrared Wide-field Infrared Survey Explorer (WISE) sources seen within the equatorial GAMA G12 field, located in the North Galactic Cap. Our motivation is to study and characterize the behavior of WISE source populations in anticipation of the deep multiwavelength surveys that will define the next decade, with the principal science goal of mapping the 3D large-scale structures and determining the global physical attributes of the host galaxies. In combination with cosmological redshifts, we identify galaxies from their WISE W1 (3.4 μm) resolved emission, and we also perform a star-galaxy separation using apparent magnitude, colors, and statistical modeling of star counts. The resulting galaxy catalog has ≃590,000 sources in 60 deg2, reaching a W1 5σ depth of 31 μJy. At the faint end, where redshifts are not available, we employ a luminosity function analysis to show that approximately 27% of all WISE extragalactic sources to a limit of 17.5 mag (31 μJy) are at high redshift, z > 1. The spatial distribution is investigated using two-point correlation functions and a 3D source density characterization at 5 Mpc and 20 Mpc scales. For angular distributions, we find that brighter and more massive sources are strongly clustered relative to fainter sources with lower mass; likewise, based on WISE colors, spheroidal galaxies have the strongest clustering, while late-type disk galaxies have the lowest clustering amplitudes. In three dimensions, we find a number of distinct groupings, often bridged by filaments and superstructures. Using special visualization tools, we map these structures, exploring how clustering may play a role with stellar mass and galaxy type.

  1. Luminosity and surface brightness distribution of K-band galaxies from the UKIDSS Large Area Survey

    NASA Astrophysics Data System (ADS)

    Smith, Anthony J.; Loveday, Jon; Cross, Nicholas J. G.

    2009-08-01

    We present luminosity and surface-brightness distributions of 40111 galaxies with K-band photometry from the United Kingdom Infrared Telescope (UKIRT) Infrared Deep Sky Survey (UKIDSS) Large Area Survey (LAS), Data Release 3 and optical photometry from Data Release 5 of the Sloan Digital Sky Survey (SDSS). Various features and limitations of the new UKIDSS data are examined, such as a problem affecting Petrosian magnitudes of extended sources. Selection limits in K- and r-band magnitude, K-band surface brightness and K-band radius are included explicitly in the 1/Vmax estimate of the space density and luminosity function. The bivariate brightness distribution in K-band absolute magnitude and surface brightness is presented and found to display a clear luminosity-surface brightness correlation that flattens at high luminosity and broadens at low luminosity, consistent with similar analyses at optical wavelengths. Best-fitting Schechter function parameters for the K-band luminosity function are found to be M* − 5 log h = −23.19 ± 0.04, α = −0.81 ± 0.04 and φ* = (0.0166 ± 0.0008) h³ Mpc⁻³, although the Schechter function provides a poor fit to the data at high and low luminosity, while the luminosity density in the K band is found to be j = (6.305 ± 0.067) × 10⁸ L⊙ h Mpc⁻³. However, we caution that there are various known sources of incompleteness and uncertainty in our results. Using mass-to-light ratios determined from the optical colours, we estimate the stellar mass function, finding good agreement with previous results. Possible improvements are discussed that could be implemented when extending this analysis to the full LAS.
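The quoted Schechter fit can be evaluated directly. The sketch below uses the per-magnitude form of the Schechter function with the best-fitting parameters above (in h-scaled units), showing the exponential cutoff brightward of M* and the power-law faint end.

```python
import math

# Best-fitting K-band Schechter parameters quoted above:
# M* - 5 log h = -23.19, alpha = -0.81, phi* = 0.0166 h^3 Mpc^-3.
M_STAR, ALPHA, PHI_STAR = -23.19, -0.81, 0.0166

def schechter(M):
    """Number density per unit magnitude, phi(M), in h^3 Mpc^-3 mag^-1."""
    x = 10.0 ** (0.4 * (M_STAR - M))   # luminosity ratio L / L*
    return 0.4 * math.log(10) * PHI_STAR * x ** (ALPHA + 1) * math.exp(-x)

# phi(M) falls off exponentially for M brighter than M* and follows a
# shallow power law (slope set by alpha) on the faint side.
```

The authors note the Schechter form fits poorly at both luminosity extremes, so this expression should be read as the parametric summary, not the measured space density.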

  2. A First Estimate of the X-Ray Binary Frequency as a Function of Star Cluster Mass in a Single Galactic System

    NASA Astrophysics Data System (ADS)

    Clark, D. M.; Eikenberry, S. S.; Brandl, B. R.; Wilson, J. C.; Carson, J. C.; Henderson, C. P.; Hayward, T. L.; Barry, D. J.; Ptak, A. F.; Colbert, E. J. M.

    2008-05-01

    We use the previously identified 15 infrared star cluster counterparts to X-ray point sources in the interacting galaxies NGC 4038/4039 (the Antennae) to study the relationship between total cluster mass and X-ray binary number. This significant population of X-ray/IR associations allows us to perform, for the first time, a statistical study of X-ray point sources and their environments. We define a quantity, η, relating the fraction of X-ray sources per unit mass as a function of cluster mass in the Antennae. We compute cluster mass by fitting spectral evolutionary models to Ks luminosity. Considering that this method depends on cluster age, we use four different age distributions to explore the effects of cluster age on the value of η and find it varies by less than a factor of 4. We find a mean value of η for these different distributions of η = 1.7 × 10⁻⁸ M⊙⁻¹ with ση = 1.2 × 10⁻⁸ M⊙⁻¹. Performing a χ² test, we demonstrate η could exhibit a positive slope, but that it depends on the assumed distribution in cluster ages. While the estimated uncertainties in η are factors of a few, we believe this is the first estimate made of this quantity to "order of magnitude" accuracy. We also compare our findings to theoretical models of open and globular cluster evolution, incorporating the X-ray binary fraction per cluster.

  3. [Study of inversion and classification of particle size distribution under dependent model algorithm].

    PubMed

    Sun, Xiao-Gang; Tang, Hong; Yuan, Gui-Bin

    2008-05-01

    For the total light scattering particle sizing technique, an inversion and classification method based on the dependent model algorithm was proposed. The measured particle system was inverted simultaneously with different particle distribution functions whose mathematical models were known in advance, and then classified according to the inversion errors. Simulation experiments illustrated that it is feasible to use the inversion errors to determine the particle size distribution. The particle size distribution function was obtained accurately at only three wavelengths in the visible range with the genetic algorithm, and the inversion results were steady and reliable, which minimizes the number of wavelengths required and increases the flexibility in the choice of light source. The single-peak distribution inversion error was less than 5% and the bimodal distribution inversion error was less than 10% when 5% stochastic noise was added to the transmission extinction measurements at two wavelengths. The running time of this method was less than 2 s. The method has the advantages of simplicity, rapidity, and suitability for on-line particle size measurement.

  4. Towards a better comprehension of plasma formation and heating in high performances electron cyclotron resonance ion sources (invited)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mascali, D.; Gammino, S.; Celona, L.

    2012-02-15

    Further improvements of electron cyclotron resonance ion source (ECRIS) output currents and average charge state require a deep understanding of electron and ion dynamics in the plasma. This paper discusses the most recent advances in modeling non-classical evidence such as the sensitivity of the electron energy distribution function to magnetic field detuning, the influence of plasma turbulence on electron heating and ion confinement, and the coupling between electron and ion dynamics. All these issues have in common the non-homogeneous distribution of the plasma inside the source: the abrupt density drop at the resonance layer regulates the heating regimes (from collective to turbulent), the beam formation mechanism and the emittance. Possible means to boost the performance of future ECRIS are proposed. In particular, the use of Bernstein waves in preliminary experiments performed at Laboratori Nazionali del Sud (LNS) on MDIS (microwave discharge ion sources)-type sources has made it possible to sustain largely overdense plasmas and to enhance the warm electron temperature, which should in principle enable the construction of sources for high-intensity multicharged ion beams with simplified magnetic structures.

  5. THE REDSHIFT DISTRIBUTION OF GIANT ARCS IN THE SLOAN GIANT ARCS SURVEY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bayliss, Matthew B.; Gladders, Michael D.; Koester, Benjamin P.

    2011-01-20

    We measure the redshift distribution of a sample of 28 giant arcs discovered as a part of the Sloan Giant Arcs Survey. Gemini/GMOS-North spectroscopy provides precise redshifts for 24 arcs, and 'redshift desert' constraints for the remaining 4 arcs. This is a direct measurement of the redshift distribution of a uniformly selected sample of bright giant arcs, an observable that can be used to inform efforts to predict giant arc statistics. Our primary giant arc sample has a median redshift z = 1.821, and nearly two-thirds of the arcs, 64%, are sources at z ≳ 1.4, indicating that the population of background sources that are strongly lensed into bright giant arcs resides primarily at high redshift. We also analyze the distribution of redshifts for 19 secondary strongly lensed background sources that are not visually apparent in Sloan Digital Sky Survey imaging but were identified in deeper follow-up imaging of the lensing cluster fields. Our redshift sample for the secondary sources is not spectroscopically complete, but combining it with our primary giant arc sample suggests that a large fraction of all background galaxies that are strongly lensed by foreground clusters reside at z ≳ 1.4. Kolmogorov-Smirnov tests indicate that our well-selected, spectroscopically complete primary giant arc redshift sample can be reproduced with a model distribution constructed from a combination of results from studies of strong-lensing clusters in numerical simulations and observational constraints on the galaxy luminosity function.

  6. Spatial patterns of a tropical tree species growing under an eucalyptus plantation in South-East Brazil.

    PubMed

    Higuchi, P; Silva, A C; Louzada, J N C; Machado, E L M

    2010-05-01

    The objectives of this study were to evaluate the influence of the propagule source and the implication of tree size class on the spatial pattern of Xylopia brasiliensis Spreng. individuals growing under the canopy of an experimental plantation of eucalyptus. To this end, all individuals of Xylopia brasiliensis with diameter at soil height (dsh) > 1 cm were mapped in the understory of a 3.16 ha Eucalyptus spp. and Corymbia spp. plantation located in the municipality of Lavras, SE Brazil. The largest nearby mature tree of X. brasiliensis was considered the propagule source. Linear regressions were used to assess the influence of distance from the propagule source on the population parameters (density, basal area and height). The spatial pattern of trees was assessed through Ripley's K function. The overall pattern showed that distance from the propagule source had a strong influence on the spatial distribution of trees, mainly the small ones: the closer to the propagule source, the higher the tree density and the lower the mean tree height. The population showed different spatial distribution patterns according to the spatial scale and diameter class considered. While small trees tended to be aggregated up to around 80 m, the largest individuals were randomly distributed in the area. A plausible explanation for the observed patterns might be limited seed rain and intra-population competition.
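Ripley's K function, used above to distinguish aggregated from random patterns, admits a very short naive estimator: count pairs of points within distance r and rescale by the study area. The sketch below omits the edge correction a real analysis would apply, and the simulated point pattern is an assumption, not the study's data; under complete spatial randomness K(r) ≈ πr².

```python
import numpy as np

def ripley_k(points, r, area):
    """Naive Ripley K estimator (no edge correction), for illustration."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)           # exclude self-pairs
    return area / (n * (n - 1)) * np.count_nonzero(d <= r)

rng = np.random.default_rng(2)
pts = rng.uniform(0.0, 100.0, size=(500, 2))   # CSR pattern, 100 m square
k10 = ripley_k(pts, 10.0, 100.0 * 100.0)
# Under CSR, K(10) should be near pi * 10^2 ~ 314 (biased slightly low
# here because border pairs are lost without an edge correction).
```

Values of K(r) well above πr² at a scale r indicate aggregation at that scale, which is how the up-to-80 m clustering of small trees would show up.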

  7. On a two-phase Hele-Shaw problem with a time-dependent gap and distributions of sinks and sources

    NASA Astrophysics Data System (ADS)

    Savina, Tatiana; Akinyemi, Lanre; Savin, Avital

    2018-01-01

    A two-phase Hele-Shaw problem with a time-dependent gap describes the evolution of the interface, which separates two fluids sandwiched between two plates. The fluids have different viscosities. In addition to the change in the gap width of the Hele-Shaw cell, the interface is driven by the presence of some special distributions of sinks and sources located in both the interior and exterior domains. The effect of surface tension is neglected. Using the Schwarz function approach, we give examples of exact solutions when the interface belongs to a certain family of algebraic curves and the curves do not form cusps. The family of curves is defined by the initial shape of the free boundary.

  8. Bayesian probabilistic approach for inverse source determination from limited and noisy chemical or biological sensor concentration measurements

    NASA Astrophysics Data System (ADS)

    Yee, Eugene

    2007-04-01

    Although a great deal of research effort has been focused on the forward prediction of the dispersion of contaminants (e.g., chemical and biological warfare agents) released into the turbulent atmosphere, much less work has been directed toward the inverse prediction of agent source location and strength from the measured concentration, even though the importance of this problem for a number of practical applications is obvious. In general, the inverse problem of source reconstruction is ill-posed and unsolvable without additional information. It is demonstrated that a Bayesian probabilistic inferential framework provides a natural and logically consistent method for source reconstruction from a limited number of noisy concentration data. In particular, the Bayesian approach permits one to incorporate prior knowledge about the source as well as additional information regarding both model and data errors. The latter enables a rigorous determination of the uncertainty in the inference of the source parameters (e.g., spatial location, emission rate, release time, etc.), hence extending the potential of the methodology as a tool for quantitative source reconstruction. A model (or, source-receptor relationship) that relates the source distribution to the concentration data measured by a number of sensors is formulated, and Bayesian probability theory is used to derive the posterior probability density function of the source parameters. A computationally efficient methodology for determination of the likelihood function for the problem, based on an adjoint representation of the source-receptor relationship, is described. Furthermore, we describe the application of efficient stochastic algorithms based on Markov chain Monte Carlo (MCMC) for sampling from the posterior distribution of the source parameters, the latter of which is required to undertake the Bayesian computation. 
The Bayesian inferential methodology for source reconstruction is validated against real dispersion data for two cases involving contaminant dispersion in highly disturbed flows over urban and complex environments where the idealizations of horizontal homogeneity and/or temporal stationarity in the flow cannot be applied to simplify the problem. Furthermore, the methodology is applied to the case of reconstruction of multiple sources.

  9. Optical properties (bidirectional reflectance distribution function) of shot fabric.

    PubMed

    Lu, R; Koenderink, J J; Kappers, A M

    2000-11-01

    To study the optical properties of materials, one needs a complete set of the angular distribution functions of surface scattering from the materials. Here we present a convenient method for collecting a large set of bidirectional reflectance distribution function (BRDF) samples in the hemispherical scattering space. Material samples are wrapped around a right-circular cylinder and irradiated by a parallel light source, and the scattered radiance is collected by a digital camera. We tilted the cylinder around its center to collect the BRDF samples outside the plane of incidence. This method can be used with materials that have isotropic and anisotropic scattering properties. We demonstrate this method in a detailed investigation of shot fabrics. The warps and the fillings of shot fabrics are dyed different colors so that the fabric appears to change color at different viewing angles. These color-changing characteristics are found to be related to the physical and geometrical structure of shot fabric. Our study reveals that the color-changing property of shot fabrics is due mainly to an occlusion effect.

  10. Local structure analysis on (La,Ba)(Ga,Mg)O3-δ by the pair distribution function method using a neutron source and density functional theory calculations

    NASA Astrophysics Data System (ADS)

    Kitamura, Naoto; Vogel, Sven C.; Idemoto, Yasushi

    2013-06-01

    In this work, we focused on La0.95Ba0.05Ga0.8Mg0.2O3-δ with the perovskite structure and investigated the local structure around the oxygen vacancy by the pair distribution function (PDF) method and density functional theory (DFT) calculations. By comparing the G(r) simulated on the basis of the DFT calculations with the experimentally observed G(r), it is suggested that the oxygen vacancy is trapped by Ba2+ at the La3+ site, at least at room temperature. Such a defect association may be one of the reasons why La0.95Ba0.05Ga0.8Mg0.2O3-δ shows lower oxide-ion conductivity than (La,Sr)(Ga,Mg)O3-δ, which is widely used as an electrolyte in solid oxide fuel cells.

  11. Numerical convergence and validation of the DIMP inverse particle transport model

    DOE PAGES

    Nelson, Noel; Azmy, Yousry

    2017-09-01

    The data integration with modeled predictions (DIMP) model is a promising inverse radiation transport method for solving the special nuclear material (SNM) holdup problem. Unlike previous methods, DIMP is a completely passive nondestructive assay technique that requires no initial assumptions regarding the source distribution or active measurement time. DIMP predicts the most probable source location and distribution through Bayesian inference and quasi-Newtonian optimization of predicted detector responses (using the adjoint transport solution) against measured responses. DIMP performs well with forward hemispherical collimation and unshielded measurements, but several considerations are required when using narrow-view collimated detectors. DIMP converged well to the correct source distribution as the number of synthetic responses increased. DIMP also performed well for the first experimental validation exercise after applying a collimation factor and sufficiently reducing the source search volume's extent to prevent the optimizer from getting stuck in local minima. DIMP's simple point detector response function (DRF) is being improved to address coplanar false positive/negative responses, and an angular DRF is being considered for integration with the next version of DIMP to account for highly collimated responses. Overall, DIMP shows promise for solving the SNM holdup inverse problem, especially once an improved optimization algorithm is implemented.

  12. Neutron Detector Signal Processing to Calculate the Effective Neutron Multiplication Factor of Subcritical Assemblies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Talamo, Alberto; Gohar, Yousry

    2016-06-01

    This report describes different methodologies to calculate the effective neutron multiplication factor of subcritical assemblies by processing the neutron detector signals using MATLAB scripts. The subcritical assembly can be driven either by a spontaneous fission neutron source (e.g. californium) or by a neutron source generated from the interactions of accelerated particles with target materials. In the latter case, when the particle accelerator operates in a pulsed mode, the signals are typically stored into two files. One file contains the times when neutron reactions occur and the other contains the times when the neutron pulses start. In both files, the time is given by an integer representing the number of time bins since the start of the counting. These signal files are used to construct the neutron count distribution from a single neutron pulse. The built-in functions of MATLAB are used to calculate the effective neutron multiplication factor through the application of the prompt decay fitting or the area method to the neutron count distribution. If the subcritical assembly is driven by a spontaneous fission neutron source, then the effective multiplication factor can be evaluated either using the prompt neutron decay constant obtained from Rossi or Feynman distributions or the Modified Source Multiplication (MSM) method.
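In the noise-free case the prompt-decay fitting step reduces to a log-linear least-squares fit of the count distribution after a pulse. The minimal sketch below is in Python rather than MATLAB, and the amplitude, decay constant and time window are invented illustrative values (background counts are neglected for clarity).

```python
import numpy as np

# Synthetic prompt-neutron decay after a pulse: n(t) = A * exp(alpha * t),
# where alpha < 0 is the prompt decay constant (hypothetical value).
alpha_true = -2500.0                        # 1/s
t = np.linspace(0.0, 2.0e-3, 200)           # s, time bins after the pulse
counts = 1.0e4 * np.exp(alpha_true * t)

# Log-linear least-squares fit: log n(t) = log A + alpha * t
slope, intercept = np.polyfit(t, np.log(counts), 1)
# slope recovers alpha; exp(intercept) recovers the amplitude A.
```

With measured (Poisson-noisy) counts one would instead fit n(t) = A exp(αt) + C with a nonlinear optimizer, which is what the report's MATLAB scripts do before converting the decay constant into a multiplication factor.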

  13. Characterization of Low-Energy Photon-Emitting Brachytherapy Sources with Modified Strengths for Applications in Focal Therapy

    NASA Astrophysics Data System (ADS)

    Reed, Joshua L.

    Permanent implants of low-energy photon-emitting brachytherapy sources are used to treat a variety of cancers. Individual source models must be separately characterized due to their unique geometry, materials, and radionuclides, which all influence their dose distributions. Thermoluminescent dosimeters (TLDs) are often used for dose measurements around low-energy photon-emitting brachytherapy sources. TLDs are typically calibrated with higher energy sources such as 60Co, which requires a correction for the change in the response of the TLDs as a function of photon energy. These corrections have historically been based on TLD response to x-ray bremsstrahlung spectra instead of to brachytherapy sources themselves. This work determined the TLD intrinsic energy dependence for 125I and 103Pd sources relative to 60Co, which allows for correction of TLD measurements of brachytherapy sources with factors specific to their energy spectra. Traditional brachytherapy sources contain mobile internal components and large amounts of high-Z material such as radio-opaque markers and titanium encapsulations. These all contribute to perturbations and uncertainties in the dose distribution around the source. The CivaString is a new elongated 103Pd brachytherapy source with a fixed internal geometry, polymer encapsulation, and lengths ranging from 1 to 6 cm, which offers advantages over traditional source designs. This work characterized the CivaString source and the results facilitated the formal approval of this source for use in clinical treatments. Additionally, the accuracy of a superposition technique for dose calculation around the sources with lengths >1 cm was verified. Advances in diagnostic techniques are paving the way for focal brachytherapy in which the dose is intentionally modulated throughout the target volume to focus on subvolumes that contain cancer cells. 
Brachytherapy sources with variable longitudinal strength (VLS) are a promising candidate for use in focal brachytherapy treatments given their customizable activity distributions, although they are not yet commercially available. This work characterized five prototype VLS sources, developed methods for clinical calibration and verification of these sources, and developed an analytical dose calculation algorithm that scales with both source length and VLS.
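The superposition idea mentioned above can be sketched generically: the dose of an elongated source is approximated by summing the doses of identical short segments placed end to end. The inverse-square toy kernel and geometry below are purely illustrative assumptions, not the CivaString dosimetry data.

```python
import numpy as np

def segment_dose(r):
    """Toy inverse-square dose of one 1 cm segment at distance r (cm).

    A clamp at 0.1 cm avoids the singularity at the segment centre;
    real dosimetry would use tabulated TG-43-style data instead.
    """
    return 1.0 / np.maximum(r, 0.1) ** 2

# Evaluation points along a line parallel to the source, 1 cm off-axis.
z_grid = np.linspace(-5.0, 5.0, 201)
y_off = 1.0

# A 3 cm source modelled as three 1 cm segments (superposition).
segment_centers = np.array([-1.0, 0.0, 1.0])
dose = np.zeros_like(z_grid)
for zc in segment_centers:
    dose += segment_dose(np.hypot(z_grid - zc, y_off))
# The summed profile is symmetric and peaks opposite the source centre.
```

Scaling each segment's contribution by its own strength is the natural extension of this sum to the variable-longitudinal-strength sources described above.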

  14. Improvements in surface singularity analysis and design methods. [applicable to airfoils

    NASA Technical Reports Server (NTRS)

    Bristow, D. R.

    1979-01-01

    The coupling of the combined source vortex distribution of Green's potential flow function with contemporary numerical techniques is shown to provide accurate, efficient, and stable solutions to subsonic inviscid analysis and design problems for multi-element airfoils. The analysis problem is solved by direct calculation of the surface singularity distribution required to satisfy the flow tangency boundary condition. The design or inverse problem is solved by an iteration process. In this process, the geometry and the associated pressure distribution are iterated until the pressure distribution most nearly corresponding to the prescribed design distribution is obtained. Typically, five iteration cycles are required for convergence. A description of the analysis and design method is presented, along with supporting examples.

  15. The Three Sources of Gas in the Comae of Comets

    NASA Technical Reports Server (NTRS)

    Huebner, W. F.

    1995-01-01

    Surface water ice on a comet nucleus is the major source of coma gas. Dust, entrained by coma gas, fragments and vaporizes, forming a second, distributed source of coma gas constituents. Ice species more volatile than water ice below the surface of the nucleus are a third source of coma gas. Vapors from these ices, produced by heat penetrating into the nucleus, diffuse through pores outward into the coma. The second and third sources provide minor, but sometimes easily detectable, gaseous species in the coma. We present mixing ratios of observed minor coma constituents relative to water vapor as a function of heliocentric and cometocentric distances and compare these ratios with model predictions, assuming the sources of the minor species are either coma dust or volatile ices in the nucleus.

  16. Detection of anomalous events

    DOEpatents

    Ferragut, Erik M.; Laska, Jason A.; Bridges, Robert A.

    2016-06-07

    A system is described for receiving a stream of events and scoring the events based on anomalousness and maliciousness (or other classification). The system can include a plurality of anomaly detectors that together implement an algorithm to identify low-probability events and detect atypical traffic patterns. The anomaly detector provides for comparability of disparate sources of data (e.g., network flow data and firewall logs). Additionally, the anomaly detector allows for regulatability, meaning that the algorithm can be user configurable to adjust a number of false alerts. The anomaly detector can be used for a variety of probability density functions, including normal Gaussian distributions, irregular distributions, as well as functions associated with continuous or discrete variables.
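As a generic illustration of density-based event scoring (not the patented algorithm itself), one can fit a normal density to baseline observations and score new events by their negative log-likelihood, so that low-probability events receive high scores. The baseline values below are invented.

```python
import math
import statistics

# Hypothetical baseline measurements (e.g., events per second from one
# data source) used to fit a Gaussian density.
baseline = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3]
mu = statistics.fmean(baseline)
sd = statistics.stdev(baseline)

def anomaly_score(x):
    """Negative log-density under N(mu, sd^2): higher = more anomalous."""
    z = (x - mu) / sd
    return 0.5 * z * z + math.log(sd * math.sqrt(2.0 * math.pi))

# A typical event scores low; a far outlier scores high, so a threshold
# on the score regulates the false-alert rate.
```

Swapping the Gaussian for an empirical or irregular density gives the same score semantics across disparate data sources, which is the comparability property the abstract highlights.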

  17. Energetic distributions of interface states Dit(φs) of MOS transistors in extension of Kuhn's quasistatic C(V)-method

    NASA Astrophysics Data System (ADS)

    Krautschneider, W.; Wagemann, H. G.

    1983-10-01

    Kuhn's quasi-static C(V)-method has been extended to MOS transistors by considering the capacitances of the source and drain p-n junctions in addition to the MOS varactor circuit model. The width of the space-charge layers w(φs) is calculated as a function of the surface potential φs and applied to the MOS capacitance as a function of the gate voltage. Capacitance behavior for different channel lengths is presented as a model and compared to measurement results and to evaluations of the energetic distributions of interface states Dit(φs) for a MOS transistor and a MOS varactor on the same chip.

  18. Consistent description of kinetic equation with triangle anomaly

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pu Shi; Gao Jianhua; Wang Qun

    2011-05-01

    We provide a consistent description of the kinetic equation with a triangle anomaly which is compatible with the entropy principle of the second law of thermodynamics and the charge/energy-momentum conservation equations. In general an anomalous source term is necessary to ensure that the equations for the charge and energy-momentum conservation are satisfied and that the correction terms of distribution functions are compatible to these equations. The constraining equations from the entropy principle are derived for the anomaly-induced leading order corrections to the particle distribution functions. The correction terms can be determined for the minimum number of unknown coefficients in the one-charge and two-charge cases by solving the constraining equations.

  19. Comparison Study of Three Different Image Reconstruction Algorithms for MAT-MI

    PubMed Central

    Xia, Rongmin; Li, Xu

    2010-01-01

    We report a theoretical study on magnetoacoustic tomography with magnetic induction (MAT-MI). According to the description of the signal generation mechanism using Green’s function, the acoustic dipole model was proposed to describe the acoustic source excited by the Lorentz force. Using Green’s function, three kinds of reconstruction algorithms based on different models of the acoustic source (potential energy, vectored acoustic pressure, and divergence of the Lorentz force) are deduced and compared, and corresponding numerical simulations were conducted to compare these three kinds of reconstruction algorithms. The computer simulation results indicate that the potential energy method and the vectored pressure method can directly reconstruct the Lorentz force distribution and give a more accurate reconstruction of electrical conductivity. PMID:19846363

  20. SU-C-BRC-04: Efficient Dose Calculation Algorithm for FFF IMRT with a Simplified Bivariate Gaussian Source Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, F; Park, J; Barraclough, B

    2016-06-15

    Purpose: To develop an efficient and accurate independent dose calculation algorithm with a simplified analytical source model for the quality assurance and safe delivery of Flattening Filter Free (FFF) IMRT on an Elekta Versa HD. Methods: The source model consisted of a point source and a 2D bivariate Gaussian source, respectively modeling the primary photons and the combined effect of head scatter, monitor chamber backscatter and the collimator exchange effect. The in-air fluence was first calculated by back-projecting the edges of the beam-defining devices onto the source plane and integrating the visible source distribution. The effects of the rounded MLC leaf end, tongue-and-groove and interleaf transmission were taken into account in the back-projection. The in-air fluence was then modified with a fourth-degree polynomial modeling the cone-shaped dose distribution of FFF beams. Planar dose distribution was obtained by convolving the in-air fluence with a dose deposition kernel (DDK) consisting of the sum of three 2D Gaussian functions. The parameters of the source model and the DDK were commissioned using measured in-air output factors (Sc) and cross-beam profiles, respectively. A novel method was used to eliminate the volume averaging effect of ion chambers in determining the DDK. Planar dose distributions of five head-and-neck FFF-IMRT plans were calculated and compared against measurements performed with a 2D diode array (MapCHECK™) to validate the accuracy of the algorithm. Results: The proposed source model predicted Sc for both 6MV and 10MV with an accuracy better than 0.1%. With a stringent gamma criterion (2%/2mm/local difference), the passing rate of the FFF-IMRT dose calculation was 97.2±2.6%. Conclusion: The removal of the flattening filter represents a simplification of the head structure which allows the use of a simpler source model for very accurate dose calculation.
The proposed algorithm offers an effective way to ensure the safe delivery of FFF-IMRT.
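    The fluence-times-kernel step can be sketched numerically: build an in-air fluence map, modify it with a polynomial cone-shape factor, and convolve with a kernel built from a sum of Gaussians. All coefficients below are hypothetical placeholders, not the commissioned Versa HD parameters, and isotropic Gaussians stand in for the paper's bivariate model:

```python
import numpy as np
from scipy.signal import fftconvolve

def gauss2d(x, y, sigma):
    """Isotropic 2D Gaussian with unit integral (a simplification of the
    bivariate Gaussians used in the paper)."""
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)

# 20x20 cm plane on a 1 mm grid (coordinates in cm)
ax = np.arange(-10, 10, 0.1)
X, Y = np.meshgrid(ax, ax)

# In-air fluence for a 10x10 cm open field, modified by a polynomial in
# off-axis radius to mimic the cone-shaped FFF profile (coefficients hypothetical)
r = np.hypot(X, Y)
cone = 1.0 - 0.002 * r**2 + 1e-5 * r**4
fluence = np.where((np.abs(X) <= 5) & (np.abs(Y) <= 5), cone, 0.0)

# Dose deposition kernel: sum of three 2D Gaussians (weights/sigmas hypothetical)
kernel = (0.7 * gauss2d(X, Y, 0.1)
          + 0.2 * gauss2d(X, Y, 0.5)
          + 0.1 * gauss2d(X, Y, 2.0))
kernel /= kernel.sum()

# Planar dose = in-air fluence convolved with the DDK
dose = fftconvolve(fluence, kernel, mode="same")
```

The normalized kernel cannot raise the dose above the peak fluence; it only redistributes it, blurring the field edges as a real scatter kernel does.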

  1. Method and system for determining depth distribution of radiation-emitting material located in a source medium and radiation detector system for use therein

    DOEpatents

    Benke, Roland R.; Kearfott, Kimberlee J.; McGregor, Douglas S.

    2004-04-27

    A radiation detector system includes detectors having different properties (sensitivity, energy resolution) which are combined so that excellent spectral information may be obtained along with good determinations of the radiation field as a function of position.

  2. 14 CFR 23.1309 - Equipment, systems, and installations.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... chapter and that requires a power supply is an “essential load” on the power supply. The power sources and the system must be able to supply the following power loads in probable operating combinations and for probable durations: (1) Loads connected to the power distribution system with the system functioning...

  3. DISTRIBUTION OF PESTICIDES AND POLYCYCLIC AROMATIC HYDROCARBONS IN HOUSE DUST AS A FUNCTION OF PARTICLE SIZE

    EPA Science Inventory

    House dust is a repository for environmental pollutants that may accumulate indoors from both internal and external sources over long periods of time. Dust and tracked-in soil accumulate most efficiently in carpets, and the pollutants associated with it may present an exposure...

  4. 78 FR 33691 - Distribution of Source Material to Exempt Persons and to General Licensees and Revision of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-05

    ... Distribution of Source Material to Exempt Persons and to General Licensees and Revision of General License and..., Distribution of Source Material to Exempt Persons and to General Licensees and Revision of General License and Exemptions (Distribution of Source Material Rule). The Distribution of Source Material Rule amended the NRC's...

  5. High frequency seismic signal generated by landslides on complex topographies: from point source to spatially distributed sources

    NASA Astrophysics Data System (ADS)

    Mangeney, A.; Kuehnert, J.; Capdeville, Y.; Durand, V.; Stutzmann, E.; Kone, E. H.; Sethi, S.

    2017-12-01

    During their flow along the topography, landslides generate seismic waves in a wide frequency range. These so-called landquakes can be recorded at very large distances (several hundred km for large landslides). The recorded signals depend on the landslide seismic source and on the seismic wave propagation. If the wave propagation is well understood, the seismic signals can be inverted for the seismic source and thus used to obtain information on landslide properties and dynamics. Analysis and modeling of long-period seismic signals (10-150 s) have helped in this way to discriminate between different landslide scenarios and to constrain rheological parameters (e.g. Favreau et al., 2010). This was possible because topography poorly affects wave propagation at these long periods and the landslide seismic source can be approximated as a point source. In the near field and at higher frequencies (> 1 Hz), the spatial extent of the source has to be taken into account and the influence of the topography on the recorded seismic signal should be quantified in order to extract information on landslide properties and dynamics. The characteristic signature of distributed sources and varying topographies is studied as a function of frequency and recording distance. The time-dependent spatial distribution of the forces applied to the ground by the landslide is obtained using granular flow numerical modeling on 3D topography. The generated seismic waves are simulated using the spectral element method. The simulated seismic signal is compared to observed seismic data from rockfalls at the Dolomieu Crater of Piton de la Fournaise (La Réunion). Favreau, P., Mangeney, A., Lucas, A., Crosta, G., and Bouchut, F. (2010). Numerical modeling of landquakes. Geophysical Research Letters, 37(15):1-5.

  6. Statistics of the fractional polarization of extragalactic dusty sources in Planck HFI maps

    NASA Astrophysics Data System (ADS)

    Bonavera, L.; González-Nuevo, J.; De Marco, B.; Argüeso, F.; Toffolatti, L.

    2017-11-01

    We estimate the average fractional polarization at 143, 217 and 353 GHz of a sample of 4697 extragalactic dusty sources by applying a stacking technique. The sample is selected from the second version of the Planck Catalogue of Compact Sources at 857 GHz, avoiding the region inside the Planck Galactic mask (fsky ∼ 60 per cent). We recover values for the mean fractional polarization at 217 and 353 GHz of (3.10 ± 0.75) per cent and (3.65 ± 0.66) per cent, respectively, whereas at 143 GHz we give a tentative value of (3.52 ± 2.48) per cent. We discuss the possible origin of the measured polarization, comparing our new estimates with those previously obtained from a sample of radio sources. We test different distribution functions and conclude that the fractional polarization of dusty sources is well described by a log-normal distribution, as found in the radio-band studies. For this distribution we estimate μ217GHz = 0.3 ± 0.5 [corresponding to a median fractional polarization of Πmed = (1.3 ± 0.7) per cent] and μ353GHz = 0.7 ± 0.4 [Πmed = (2.0 ± 0.8) per cent], with σ217GHz = 1.3 ± 0.2 and σ353GHz = 1.1 ± 0.2. With these values we estimate the source number counts in polarization and the contribution of these sources to the Cosmic Microwave Background B-mode angular power spectrum at 217, 353, 600 and 800 GHz. We conclude that extragalactic dusty sources might be an important contaminant for the primordial B-mode at frequencies >217 GHz.
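    For a log-normal distribution the median follows directly from the location parameter, Πmed = exp(μ), which is how the quoted μ values map to the quoted median fractional polarizations. A minimal check (the μ and σ values are those quoted in the abstract):

```python
import math

def lognormal_median(mu):
    """Median of a log-normal distribution with location parameter mu
    (defined with the natural logarithm): median = exp(mu)."""
    return math.exp(mu)

def lognormal_mean(mu, sigma):
    """Mean of the same distribution: exp(mu + sigma**2 / 2)."""
    return math.exp(mu + sigma**2 / 2)

pi_med_217 = lognormal_median(0.3)  # ~1.3 per cent, matching the abstract
pi_med_353 = lognormal_median(0.7)  # ~2.0 per cent, matching the abstract
```

Because σ is large (≈1.1-1.3), the mean fractional polarization sits well above the median, consistent with the stacked means of ~3 per cent quoted above.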

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chamana, Manohar; Prabakar, Kumaraguru; Palmintier, Bryan

    A software process is developed to convert distribution network models from a quasi-static time-series tool (OpenDSS) to a real-time dynamic phasor simulator (ePHASORSIM). The description of this process in this paper would be helpful for researchers who intend to perform similar conversions. The converter could be utilized directly by users of real-time simulators who intend to perform software-in-the-loop or hardware-in-the-loop tests on large distribution test feeders for a range of use cases, including testing functions of advanced distribution management systems against a simulated distribution system. In the future, the developers intend to release the conversion tool as open source to enable use by others.

  8. Effect of high energy electrons on H⁻ production and destruction in a high current DC negative ion source for cyclotron.

    PubMed

    Onai, M; Etoh, H; Aoki, Y; Shibata, T; Mattei, S; Fujita, S; Hatayama, A; Lettry, J

    2016-02-01

    Recently, a filament-driven multi-cusp negative ion source has been developed for proton cyclotrons in medical applications. In this study, numerical modeling of the filament arc-discharge source plasma has been performed by combining kinetic modeling of electrons in the ion source plasma (the multi-cusp arc-discharge code) with zero-dimensional rate equations for hydrogen molecules and negative ions. The main focus is on the effects of the arc-discharge power on the electron energy distribution function and the resultant H(-) production. The modeling results reasonably explain the dependence of the H(-) extraction current on the arc-discharge power observed in the experiments.

  9. Systems and methods for distributing power using photovoltaic resources and a shifting battery system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mammoli, Andrea A.; Lavrova, Olga; Arellano, Brian

    The present invention is an apparatus and method for delivering energy using a renewable resource. The method includes providing a photovoltaic energy source and applying energy storage to the photovoltaic energy source via a battery storage unit. The energy output from the photovoltaic energy source and the battery system is controlled using a battery control system. The battery control system predicts peak load, develops a schedule that includes when to begin discharging power and when to stop discharging power, shifts power to the battery storage unit when excess power is available, and prioritizes the functionality of the battery storage unit and the photovoltaic energy source.

  10. Do gamma-ray burst sources repeat?

    NASA Technical Reports Server (NTRS)

    Meegan, Charles A.; Hartmann, Dieter H.; Brainerd, J. J.; Briggs, Michael S.; Paciesas, William S.; Pendleton, Geoffrey; Kouveliotou, Chryssa; Fishman, Gerald; Blumenthal, George; Brock, Martin

    1995-01-01

    The demonstration of repeated gamma-ray bursts from an individual source would severely constrain burst source models. Recent reports (Quashnock and Lamb, 1993; Wang and Lingenfelter, 1993) of evidence for repetition in the first BATSE burst catalog have generated renewed interest in this issue. Here, we analyze the angular distribution of 585 bursts of the second BATSE catalog (Meegan et al., 1994). We search for evidence of burst recurrence using the nearest and farthest neighbor statistic and the two-point angular correlation function. We find the data to be consistent with the hypothesis that burst sources do not repeat; however, a repeater fraction of up to about 20% of the observed bursts cannot be excluded.
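    The nearest-neighbor statistic used in such an analysis can be sketched against an isotropic Monte Carlo reference: draw points uniformly on the sphere and compute each burst's angular separation to its closest neighbor. A minimal illustration, not the BATSE analysis code (only the 585-burst count is taken from the abstract):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_sky(n):
    """n points drawn from an isotropic distribution on the unit sphere."""
    lon = rng.uniform(0, 2 * np.pi, n)
    lat = np.arcsin(rng.uniform(-1, 1, n))  # uniform in sin(latitude)
    return np.column_stack([np.cos(lat) * np.cos(lon),
                            np.cos(lat) * np.sin(lon),
                            np.sin(lat)])

def nearest_neighbor_angles(xyz):
    """Angular separation (radians) of each point to its nearest neighbor."""
    cosang = xyz @ xyz.T
    np.fill_diagonal(cosang, -1.0)  # exclude self-pairs
    return np.arccos(np.clip(cosang.max(axis=1), -1.0, 1.0))

angles = nearest_neighbor_angles(random_sky(585))
# Repeating sources would show an excess of very small separations
# relative to many such isotropic realizations.
```

Comparing the observed nearest-neighbor distribution against an ensemble of isotropic realizations is what bounds the repeater fraction.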

  11. Intelligent Load Manager (LOADMAN): Application of Expert System Technology to Load Management Problems in Power Generation and Distribution Systems

    DTIC Science & Technology

    1988-08-10

    addressed to it, the wall-receptacle module energizes a relay. Modules can be built to use a triac instead and have the capacity to increase or decrease... modulated by other constraints for a safe, functional and effective power distribution system. 2.2.3 Backup Equipment. Alternate power sources are...environments have limited sensor capability and no remote control capability. However, future enhancements to current equipment, such as frequency-modulated

  12. Acetylcholine Receptors in Model Membranes: Structure/Function Correlates.

    DTIC Science & Technology

    1985-12-01

    Approved for public release; distribution unlimited. *Annual...of California, San Diego, B-019, La Jolla, California 92093. The findings in this report are not to be...electrodes E-255 and E 206 (In Vivo Metric Systems, Healdsburg, CA). DC source (Omnical 2001, WPI Instruments, New Haven, CT). RACAL

  13. Cyclotron maser instability and its applications

    NASA Astrophysics Data System (ADS)

    Wu, C. S.

    The possible application of cyclotron maser theory to a variety of radio sources is considered, with special attention given to the theory of auroral kilometric radiation (AKR) of Wu and Lee (1979). The AKR model assumes a loss-cone distribution function for the reflected electrons, along with the depletion of low-energy electrons by the parallel electric field. Other topics considered include fundamental AKR, second-harmonic AKR, the generation of Z-mode radiation, and the application of the maser instability to sources other than AKR.

  14. Metal and Nutrient Distribution and Fractionation in Managed Urban Watersheds Across the US Southwest

    NASA Astrophysics Data System (ADS)

    Papelis, C.; Williams, A. C.; Boettcher, T. M.

    2008-12-01

    Metals, metalloids, and nutrients are common contaminants of concern in arid and semi-arid watersheds in the Southwestern U.S. Because of the dramatic population growth in this part of the U.S., the potential for contamination of urban watersheds has also increased over the last few decades. Streams in urban watersheds receive storm water, urban runoff, shallow groundwater, and treated wastewater, among other sources. In addition, urban watersheds are often heavily managed to mitigate flood events and sediment- related impacts. Sediment transport can have a profound effect on the water quality of affected bodies of water. However, differences in geology, hydrogeology, and land use may have dramatic effects on the distribution of nutrients and metals in different urban watersheds. To test these effects, aqueous and sediment samples were collected above and below erosion control and other structures along two heavily managed urban watersheds, namely the Las Vegas Wash in the Las Vegas Valley Watershed, Nevada, and the Rio Salado (Salt River) in the Phoenix Metropolitan Area, Arizona. The construction of such control structures has the potential to alter the distribution of metals and metalloids in bodies of water used by wildlife. In this study, all sediments were characterized by particle size distribution, specific surface area, mineralogical composition, and scanning electron microscopy. The results of total arsenic, boron, and phosphorus extractions will be discussed, as a function of sediment characteristics. Significant differences exist between the two U.S. Southwest watersheds studied, including land use, water sources, sediment characteristics, nutrient and metal distribution, and overall system complexity. These differences lead to significant variations in metalloid and nutrient distributions in the two watersheds. Differences and similarities in the two systems will be explained as a function of sediment characteristics and watershed properties.

  15. The rates and time-delay distribution of multiply imaged supernovae behind lensing clusters

    NASA Astrophysics Data System (ADS)

    Li, Xue; Hjorth, Jens; Richard, Johan

    2012-11-01

    Time delays of gravitationally lensed sources can be used to constrain the mass model of a deflector and determine cosmological parameters. We here present an analysis of the time-delay distribution of multiply imaged sources behind 17 strong-lensing galaxy clusters with well-calibrated mass models. We find that for time delays less than 1000 days, at z = 3.0, their logarithmic probability distribution functions are well represented by P(log Δt) = 5.3 × 10^-4 Δt^β̃ / M250^(2β̃), with β̃ = 0.77, where M250 is the projected cluster mass inside 250 kpc (in units of 10^14 M⊙) and β̃ is the power-law slope of the distribution. The resultant probability distribution function enables us to estimate the time-delay distribution in a lensing cluster of known mass. For a cluster with M250 = 2 × 10^14 M⊙, the fraction of time delays less than 1000 days is approximately 3%. Taking Abell 1689 as an example, its dark halo and brightest galaxies, with central velocity dispersions σ ≥ 500 km s^-1, mainly produce large time delays, while galaxy-scale mass clumps are responsible for generating smaller time delays. We estimate the probability of observing multiple images of a supernova in the known images of Abell 1689. A two-component model for estimating the supernova rate is applied in this work. For a magnitude threshold of mAB = 26.5, the yearly rate of Type Ia (core-collapse) supernovae with time delays less than 1000 days is 0.004±0.002 (0.029±0.001). If the magnitude threshold is lowered to mAB ~ 27.0, the rate of core-collapse supernovae suitable for time-delay observation is 0.044±0.015 per year.
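    Because the quoted distribution is a pure power law in Δt, the fraction of delays below a threshold follows from integrating P(log Δt) in closed form. A minimal sketch (the normalization and slope are the abstract's values; extrapolating the power law to arbitrarily small Δt is an assumption of this sketch):

```python
import math

def cdf_fraction(dt_max_days, m250, beta=0.77, a=5.3e-4):
    """Fraction of time delays below dt_max for the quoted power law
    P(log dt) = a * dt**beta / m250**(2*beta), with m250 the projected
    cluster mass inside 250 kpc in units of 1e14 solar masses.
    Closed form of integrating a * 10**(beta*x) dx from -inf to log10(dt_max)."""
    x = math.log10(dt_max_days)
    return a * 10 ** (beta * x) / (beta * math.log(10) * m250 ** (2 * beta))

# For m250 = 2 this gives a few per cent below 1000 days, the same order
# as the ~3% quoted in the abstract.
frac = cdf_fraction(1000, m250=2.0)
```

More massive clusters push the distribution toward longer delays, so the sub-1000-day fraction decreases with M250.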

  16. Disentangling random thermal motion of particles and collective expansion of source from transverse momentum spectra in high energy collisions

    NASA Astrophysics Data System (ADS)

    Wei, Hua-Rong; Liu, Fu-Hu; Lacey, Roy A.

    2016-12-01

    In the framework of a multisource thermal model, we describe experimental results for the transverse momentum spectra of final-state light-flavor particles produced in gold-gold (Au-Au), copper-copper (Cu-Cu), lead-lead (Pb-Pb), proton-lead (p-Pb), and proton-proton (p-p) collisions at various energies, measured by the PHENIX, STAR, ALICE, and CMS Collaborations, using the Tsallis-standard (Tsallis form of Fermi-Dirac or Bose-Einstein), Tsallis, and two- or three-component standard distributions, which can in fact be regarded as different types of 'thermometers' or 'thermometric scales' and 'speedometers'. A central parameter in the three distributions is the effective temperature, which contains information on the kinetic freeze-out temperature of the emitting source and reflects the effects of random thermal motion of particles as well as collective expansion of the source. To disentangle the two effects, we extract the kinetic freeze-out temperature from the intercept of the effective temperature (T) as a function of the particle's rest mass (m0), i.e., when plotting T versus m0, and the mean transverse flow velocity from the slope of the mean transverse momentum ⟨pT⟩ as a function of the mean moving mass m̄, i.e., when plotting ⟨pT⟩ versus m̄.
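    The extraction step described above is, in essence, two straight-line fits: the intercept of T versus m0 gives the kinetic freeze-out temperature, the slope of ⟨pT⟩ versus m̄ gives the mean transverse flow velocity. A minimal sketch with hypothetical illustrative numbers (not data from the paper):

```python
import numpy as np

# Hypothetical per-species effective temperatures (GeV) versus rest mass
m0 = np.array([0.140, 0.494, 0.938])     # pi, K, p rest masses (GeV)
T_eff = np.array([0.135, 0.160, 0.190])  # effective temperatures (illustrative)

# Kinetic freeze-out temperature T0 = intercept of T_eff vs m0
slope, T0 = np.polyfit(m0, T_eff, 1)

# Mean transverse flow velocity beta_T = slope of <pT> vs mean moving mass
mbar = np.array([0.35, 0.70, 1.10])      # mean moving mass (GeV, illustrative)
mean_pt = np.array([0.45, 0.62, 0.82])   # <pT> (GeV/c, illustrative)
beta_T, _ = np.polyfit(mbar, mean_pt, 1)
```

Heavier species picking up more ⟨pT⟩ per unit mass is the signature of collective flow; a flat T versus m0 line would instead indicate purely thermal motion.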

  17. Separation of variables solution for non-linear radiative cooling

    NASA Technical Reports Server (NTRS)

    Siegel, Robert

    1987-01-01

    A separation of variables solution has been obtained for transient radiative cooling of an absorbing-scattering plane layer. The solution applies after an initial transient period required for adjustment of the temperature and scattering source function distributions. The layer emittance, equal to the instantaneous heat loss divided by the fourth power of the instantaneous mean temperature, becomes constant. This emittance is a function of only the optical thickness of the layer and the scattering albedo; its behavior as a function of these quantities is considerably different from that for a layer at constant temperature.

  18. Probing the Differential Tissue Distribution and Bioaccumulation Behavior of Per- and Polyfluoroalkyl Substances of Varying Chain-Lengths, Isomeric Structures and Functional Groups in Crucian Carp.

    PubMed

    Shi, Yali; Vestergren, Robin; Nost, Therese Haugdahl; Zhou, Zhen; Cai, Yaqi

    2018-04-17

    Understanding the bioaccumulation mechanisms of per- and polyfluoroalkyl substances (PFASs) across different chain-lengths, isomers and functional groups represents a monumental scientific challenge with implications for chemical regulation. Here, we investigate how the differential tissue distribution and bioaccumulation behavior of 25 PFASs in crucian carp from two field sites impacted by point sources can provide information about the processes governing uptake, distribution and elimination of PFASs. Median tissue/blood ratios (TBRs) were consistently <1 for all PFASs and tissues except bile which displayed a distinct distribution pattern and enrichment of several perfluoroalkyl sulfonic acids. Transformation of concentration data into relative body burdens (RBBs) demonstrated that blood, gonads, and muscle together accounted for >90% of the amount of PFASs in the organism. Principal component analyses of TBRs and RBBs showed that the functional group was a relatively more important predictor of internal distribution than chain-length for PFASs. Whole body bioaccumulation factors (BAFs) for short-chain PFASs deviated from the positive relationship with hydrophobicity observed for longer-chain homologues. Overall, our results suggest that TBR, RBB, and BAF patterns were most consistent with protein binding mechanisms although partitioning to phospholipids may contribute to the accumulation of long-chain PFASs in specific tissues.
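    The two distribution metrics used above can be sketched directly: TBR divides each tissue concentration by the blood concentration, while RBB weights concentrations by tissue mass and normalizes by the whole-body amount. The numbers below are hypothetical illustrations, not the study's data:

```python
# Hypothetical concentration (ng/g) and tissue-mass (g) data for one PFAS
conc = {"blood": 120.0, "muscle": 30.0, "gonads": 80.0, "bile": 400.0}
mass = {"blood": 25.0, "muscle": 300.0, "gonads": 40.0, "bile": 5.0}

# Tissue/blood ratio: concentration relative to blood
tbr = {t: conc[t] / conc["blood"] for t in conc}

# Relative body burden: fraction of the whole-body amount in each tissue
amount = {t: conc[t] * mass[t] for t in conc}
total = sum(amount.values())
rbb = {t: amount[t] / total for t in amount}
```

Note how the two views can disagree: a tissue like bile can have TBR > 1 (enrichment) yet a small RBB, because its mass is small; conversely, muscle dominates RBB through sheer mass despite a low TBR.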

  19. Prototype of Partial Cutting Tool of Geological Map Images Distributed by Geological Web Map Service

    NASA Astrophysics Data System (ADS)

    Nonogaki, S.; Nemoto, T.

    2014-12-01

    Geological maps and topographical maps play an important role in disaster assessment, resource management, and environmental preservation. This map information has recently been distributed in accordance with Web service standards such as Web Map Service (WMS) and Web Map Tile Service (WMTS). In this study, a partial cutting tool for geological map images distributed by geological WMTS was implemented with Free and Open Source Software. The tool mainly consists of two functions: a display function and a cutting function. The former was implemented using OpenLayers; the latter was implemented using the Geospatial Data Abstraction Library (GDAL). All other small functions were implemented in PHP and Python. As a result, this tool allows not only displaying a WMTS layer in a web browser but also generating a geological map image of the intended area and zoom level. At this moment, available WMTS layers are limited to the ones distributed by the WMTS for the Seamless Digital Geological Map of Japan. The geological map image can be saved in GeoTIFF format and WebGL format. GeoTIFF is a georeferenced raster format that is available in many kinds of Geographical Information Systems. WebGL is useful for confirming the relationship between geology and geography in 3D. In conclusion, the partial cutting tool developed in this study would help create better conditions for promoting the utilization of geological information. Future work is to increase the number of available WMTS layers and the types of output file format.
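    Cutting a partial image from a tiled map service starts with computing which tile indices cover the requested area at the requested zoom level. A minimal sketch assuming the common Web Mercator (slippy-map style) tile matrix set, which many WMTS services use; the actual tool relies on OpenLayers and GDAL rather than this formula:

```python
import math

def wmts_tile(lon_deg, lat_deg, zoom):
    """Tile (column, row) covering a lon/lat point in the Web Mercator
    tile matrix set (2**zoom x 2**zoom tiles per level). This scheme is
    an assumption of this sketch; a given WMTS layer advertises its own
    tile matrix set in its capabilities document."""
    n = 2 ** zoom
    col = int((lon_deg + 180.0) / 360.0 * n)
    lat = math.radians(lat_deg)
    row = int((1.0 - math.asinh(math.tan(lat)) / math.pi) / 2.0 * n)
    return col, row

# Tile covering Tokyo at zoom level 10
col, row = wmts_tile(139.76, 35.68, 10)
```

Requesting the tiles spanning the bounding-box corners and mosaicking them is then what yields the cut-out image for the intended area.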

  20. Clinical data integration of distributed data sources using Health Level Seven (HL7) v3-RIM mapping

    PubMed Central

    2011-01-01

    Background Health information exchange and health information integration have become top priorities for healthcare systems across institutions and hospitals. Most organizations and establishments implement health information exchange and integration in order to support meaningful information retrieval among their disparate healthcare systems. The challenges that prevent efficient health information integration for heterogeneous data sources are the lack of a common standard to support mapping across distributed data sources and the numerous and diverse healthcare domains. Health Level Seven (HL7) is a standards development organization; its Reference Information Model (RIM), developed by HL7's technical committees, is a standardized abstract representation of HL7 data across all domains of health care. In this article, we present a design and a prototype implementation of HL7 v3-RIM mapping for information integration of distributed clinical data sources. The implementation enables the user to retrieve and search information that has been integrated using HL7 v3-RIM technology from disparate health care systems. Method and results We designed and developed a prototype implementation of an HL7 v3-RIM mapping function to integrate distributed clinical data sources, using R-MIM classes from HL7 v3-RIM as a global view along with a collaborative, centralized web-based mapping tool to tackle the evolution of both global and local schemas. Our prototype was implemented and integrated with a clinical data management system (CDMS) as a plug-in module. We tested the prototype system with use-case scenarios for distributed clinical data sources across several legacy CDMSs.
The results have been effective in improving information delivery, completing tasks that would otherwise have been difficult to accomplish, and reducing the time required to finish tasks involved in collaborative information retrieval and sharing with other systems. Conclusions We created a prototype implementation of HL7 v3-RIM mapping for information integration between distributed clinical data sources to promote collaborative healthcare and translational research. The prototype has effectively and efficiently ensured the accuracy of the information and knowledge extraction for the systems that have been integrated. PMID:22104558

  1. Nuclear risk analysis of the Ulysses mission

    NASA Astrophysics Data System (ADS)

    Bartram, Bart W.; Vaughan, Frank R.; Englehart, Richard W., Dr.

    1991-01-01

    The use of a radioisotope thermoelectric generator fueled with plutonium-238 dioxide on the Space Shuttle-launched Ulysses mission implies some level of risk due to potential accidents. This paper describes the method used to quantify risks in the Ulysses mission Final Safety Analysis Report prepared for the U.S. Department of Energy. The starting point for the analysis described herein is the input of source term probability distributions from the General Electric Company. A Monte Carlo technique is used to develop probability distributions of radiological consequences for a range of accident scenarios throughout the mission. Factors affecting radiological consequences are identified, the probability distribution of the effect of each factor is determined, and the functional relationship among all the factors is established. The probability distributions of all the factor effects are then combined using a Monte Carlo technique. The results of the analysis are presented in terms of complementary cumulative distribution functions (CCDF) by mission sub-phase, phase, and the overall mission. The CCDFs show the total probability that consequences (calculated health effects) would be equal to or greater than a given value.
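    The factor-combination step can be sketched as Monte Carlo propagation: sample each factor from its distribution, combine the samples through the assumed functional relationship, and read the CCDF off the resulting ensemble. The distributions and the multiplicative combination below are hypothetical placeholders, not the FSAR's actual factors:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical factor distributions affecting radiological consequence
source_term = rng.lognormal(mean=0.0, sigma=1.0, size=n)  # released activity
dispersion = rng.uniform(0.5, 1.5, size=n)                # atmospheric factor
dose_factor = rng.normal(1.0, 0.1, size=n).clip(min=0)    # dose conversion

# Assumed multiplicative functional relationship among the factors
consequence = source_term * dispersion * dose_factor

def ccdf(samples, level):
    """Complementary CDF: probability that consequence >= level."""
    return np.mean(samples >= level)
```

By construction the CCDF is monotonically non-increasing in the consequence level, which is why the report can quote "probability of consequences equal to or greater than a given value" directly from such an ensemble.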

  2. Resolution analysis of finite fault source inversion using one- and three-dimensional Green's functions 1. Strong motions

    USGS Publications Warehouse

    Graves, R.W.; Wald, D.J.

    2001-01-01

    We develop a methodology to perform finite fault source inversions from strong motion data using Green's functions (GFs) calculated for a three-dimensional (3-D) velocity structure. The 3-D GFs are calculated numerically by inserting body forces at each of the strong motion sites and then recording the resulting strains along the target fault surface. Using reciprocity, these GFs can be recombined to represent the ground motion at each site for any (heterogeneous) slip distribution on the fault. The reciprocal formulation significantly reduces the required number of 3-D finite difference computations to at most 3NS, where NS is the number of strong motion sites used in the inversion. Using controlled numerical resolution tests, we have examined the relative importance of accurate GFs for finite fault source inversions which rely on near-source ground motions. These experiments use both 1-D and 3-D GFs in inversions for hypothetical rupture models in order (1) to analyze the ability of the 3-D methodology to resolve trade-offs between complex source phenomena and 3-D path effects, (2) to address the sensitivity of the inversion results to uncertainties in the 3-D velocity structure, and (3) to test the adequacy of the 1-D GF method when propagation effects are known to be three-dimensional. We find that given "data" from a prescribed 3-D Earth structure, the use of well-calibrated 3-D GFs in the inversion provides very good resolution of the assumed slip distribution, thus adequately separating source and 3-D propagation effects. In contrast, using a set of inexact 3-D GFs or a set of hybrid 1-D GFs allows only partial recovery of the slip distribution. These findings suggest that in regions of complex geology the use of well-calibrated 3-D GFs has the potential for increased resolution of the rupture process relative to 1-D GFs. 
However, realizing this full potential requires that the 3-D velocity model and associated GFs should be carefully validated against the true 3-D Earth structure before performing the inverse problem with actual data. Copyright 2001 by the American Geophysical Union.

  3. Ambient Noise Interferometry and Surface Wave Array Tomography: Promises and Problems

    NASA Astrophysics Data System (ADS)

    van der Hilst, R. D.; Yao, H.; de Hoop, M. V.; Campman, X.; Solna, K.

    2008-12-01

    In the late 1990s, most seismologists would have frowned at the possibility of doing high-resolution surface wave tomography with noise instead of with signal associated with ballistic source-receiver propagation. Some may still do, but surface wave tomography with Green's functions estimated through ambient noise interferometry ('sourceless tomography') has transformed from a curiosity into one of the (almost) standard tools for analysis of data from dense seismograph arrays. Indeed, spectacular applications of ambient noise surface wave tomography have recently been published. For example, application to data from arrays in SE Tibet revealed structures in the crust beneath the Tibetan plateau that could not be resolved by traditional tomography (Yao et al., GJI, 2006, 2008). While the approach is conceptually simple, in application the proverbial devil is in the detail. Full reconstruction of the Green's function requires that the wavefields used are diffusive and that ambient noise energy is evenly distributed in the spatial dimensions of interest. In the field, these conditions are not usually met, and (frequency-dependent) non-uniformity of the noise sources may lead to incomplete reconstruction of the Green's function. Furthermore, ambient noise distributions can be time-dependent, and seasonal variations have been documented. Naive use of empirical Green's functions may produce (unknown) bias in the tomographic models. The degrading effect on EGFs of the directionality of the noise distribution poses particular challenges for applications beyond isotropic surface wave inversions, such as inversions for (azimuthal) anisotropy and attempts to use higher modes (or body waves). Incomplete Green's function reconstruction can (probably) not be prevented, but it may be possible to reduce the problem and - at least - understand the degree of incomplete reconstruction and prevent it from degrading the tomographic model.
We will present examples of Rayleigh wave inversions and discuss strategies to mitigate effects of incomplete Green's function reconstruction on tomographic images.
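The core operation behind the ambient noise interferometry described above can be sketched as a cross-correlation of simultaneous noise records. The minimal Python sketch below is an illustration (the function name, windowing, and lag conventions are choices made here, not taken from the abstract); positive lags correspond to propagation from the first station to the second:

```python
import numpy as np

def estimate_egf(noise_a, noise_b, fs, max_lag_s):
    """Estimate an empirical Green's function between two stations by
    cross-correlating simultaneous ambient-noise records sampled at fs Hz."""
    n = len(noise_a)
    nfft = 2 * n  # zero-pad to suppress circular wrap-around
    spec = np.conj(np.fft.rfft(noise_a, nfft)) * np.fft.rfft(noise_b, nfft)
    cc = np.fft.irfft(spec, nfft)
    # Rearrange so lags run from -(n-1) to +(n-1) samples.
    cc = np.concatenate([cc[-(n - 1):], cc[:n]])
    lags = np.arange(-(n - 1), n) / fs
    keep = np.abs(lags) <= max_lag_s
    return lags[keep], cc[keep]
```

In practice the correlations are stacked over many time windows, and pre-processing such as one-bit normalization and spectral whitening is commonly applied to reduce the influence of non-uniform noise source distributions.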

  4. CHANGES IN BACTERIAL COMPOSITION OF BIOFILM IN A ...

    EPA Pesticide Factsheets

    This study examined the development of bacterial biofilms within a metropolitan distribution system. The distribution system is fed with different source water (i.e., groundwater, GW and surface water, SW) and undergoes different treatment processes in separate facilities. The biofilm community was characterized using 16S rRNA gene clone libraries and functional potential analysis, generated from total DNA extracted from coupons in biofilm annular reactors fed with onsite drinking water for up to eighteen months. Significant differences in the bacterial community structure were observed between GW and SW. Representatives that explained the dissimilarity between service areas were associated with Betaproteobacteria, Alphaproteobacteria, Actinobacteria, Gammaproteobacteria, and Firmicutes. After nine months the biofilm bacterial community from both areas were dominated by Mycobacterium species. The distribution of the dominant OTU (Mycobacterium) positively correlated with the drinking water distribution system (DWDS) temperature, but no clear relationship was seen with free chlorine residual, pH, turbidity or total organic carbon (TOC). The results suggest that biofilm microbial communities harbor distinct and diverse bacterial communities, and that source water, treatment processes and environmental conditions may play an important role in shaping the bacterial community in the distribution system. On the other hand, several bacterial groups were present i

  5. Estimation of gross land-use change and its uncertainty using a Bayesian data assimilation approach

    NASA Astrophysics Data System (ADS)

    Levy, Peter; van Oijen, Marcel; Buys, Gwen; Tomlinson, Sam

    2018-03-01

    We present a method for estimating land-use change using a Bayesian data assimilation approach. The approach provides a general framework for combining multiple disparate data sources with a simple model. This allows us to constrain estimates of gross land-use change with reliable national-scale census data, whilst retaining the detailed information available from several other sources. Eight different data sources, with three different data structures, were combined in our posterior estimate of land use and land-use change, and other data sources could easily be added in future. The tendency for observations to underestimate gross land-use change is accounted for by allowing for a skewed distribution in the likelihood function. The data structure produced has high temporal and spatial resolution, and is appropriate for dynamic process-based modelling. Uncertainty is propagated appropriately into the output, so we have a full posterior distribution of output and parameters. The data are available in the widely used netCDF file format from http://eidc.ceh.ac.uk/.
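The abstract notes that the tendency of observations to underestimate gross land-use change is handled by a skewed likelihood. One common choice for such a likelihood is the skew-normal density; the sketch below is a hedged illustration (the abstract does not specify the distribution family or its parameters) implemented from scratch:

```python
import math

def skew_normal_pdf(x, loc, scale, shape):
    """Skew-normal density: (2/scale) * phi(z) * Phi(shape*z), z=(x-loc)/scale.
    For shape > 0 the density is skewed to the right, which can down-weight
    the tendency of observations to underestimate gross change; all
    parameter choices here are illustrative assumptions."""
    z = (x - loc) / scale
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal pdf
    Phi = 0.5 * (1.0 + math.erf(shape * z / math.sqrt(2.0)))  # standard normal cdf
    return 2.0 / scale * phi * Phi
```

With shape = 0 this reduces exactly to the normal density, so the skewness can be estimated from the data rather than imposed.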

  6. Creation of a unified geoinformation system for monitoring the social and economic development of Departamento Quindío (Colombia) on the basis of isolated data sources

    NASA Astrophysics Data System (ADS)

    Buravlev, V.; Sereshnikov, S. V.; Mayorov, A. A.; Vila, J. J.

    At each level of state and municipal management, the information resources that support administrative decision making are typically scattered across a number of heterogeneous, unconnected electronic data sources, such as databases, geoinformation projects and electronic document archives. These sources are hosted by different organizations, run on different software and are updated according to different rules. Building, on top of such isolated sources, unified information systems that allow any of the stored information to be browsed and analysed in real time helps to improve the quality of administrative decisions. The Distributed Data Service technology, TrisoftDDS, developed by Trisoft, Ltd., supports the construction of horizontal, territorially distributed heterogeneous information systems (TeRGIS). TrisoftDDS allows systems to be created, supported and modified quickly, using existing information complexes as data sources without disrupting their operation, and provides remote, regulated multi-user access to different types of data sources over the Internet/intranet. Relational databases, GIS projects and files of various types (MS Office documents, images, HTML documents, etc.) can serve as data sources in a TeRGIS. A TeRGIS is built as a three-tier client-server Internet/intranet application. Access to the information in the existing data sources is provided by the distributed data service (DDS), whose core, the distributed data service server (DSServer), resides on the middle tier. 
The TrisoftDDS technology includes the following components. The client, DSBrowser (Data Service Browser), is a client application that connects to the DSServer over the Internet/intranet and supports both selection and viewing of documents; database tables, database queries, queries to geoinformation projects and files of various types (MS Office documents, images, HTML files, etc.) can all act as documents. For complex data sources, the DSBrowser allows the user to build queries and to view and filter the data. The server of the distributed data service, DSServer (Data Service Server), is a web application that provides access to the data sources and executes client requests for the selected documents. The DDS toolkit comprises the catalogue manager, DSCMan (Data Catalog Manager), a client-server application for organizing and administering the data catalogue, and the documentator, DSDoc (Data Source Documentor), a client-server application for documenting the procedure that builds the required document from a data source. The documentation created by the DSDoc takes the form of metadata tables, which are added to the data catalogue with the help of the catalogue manager. The logic of a territorially distributed heterogeneous information system based on the DDS technology is as follows. The client application, DSBrowser, contacts the DSServer at the specified Internet address. In response, the DSServer sends the client the catalogue of the system's information resources: an XML document that the client's browser renders as a tree structure in a dedicated window. The user browses the list and selects the required documents, and the DSBrowser sends the corresponding request to the DSServer. 
The DSServer, in turn, consults the metadata tables describing the document chosen by the user, forwards the request to the corresponding data source and returns the result to the client application. The data-service catalogue contains the full Internet address of each document, so catalogues of distributed information resources can be built whose individual parts (documents) reside on different servers in various places on the Internet. A catalogue itself can be hosted by any Internet provider that supports the necessary software. Documents in the catalogue are grouped into thematic blocks, providing user-friendly navigation through the information sources of the system. The strength of the TrisoftDDS technology lies above all in the organization and functionality of the distributed data service that processes document requests. The service hides the complex and, in most cases, irrelevant details of the structure of the data sources and of the connections to them from the external user; instead, the user works with aliases for connections and file directories, whose real parameters are stored in the registry of the web server hosting the DSServer. This scheme also offers broad possibilities for data protection and for differentiating access rights to the information. This work also presents the application of the technology for creating horizontal, territorially distributed geoinformation systems for classifying the level of territorial social and economic development of Departamento Quindío (Colombia). This includes the creation of thematic maps based on the ESRI software products ArcView and Erdas, and proposes approaches to analysing regional social and economic conditions for comparing the optimality of decisions. 
The methodology covers the following indicators: dynamics of demographic processes; education; health and nutrition; infrastructure; political and social stability; culture, social and family values; condition of the environment; political and civil institutions; population income; unemployment and labour utilization; poverty and inequality. Other indicators can be included with the help of expert estimation methods and optimization theory, and a module is provided for verifying the forecasts by field checks in the districts.

  7. The Effects of Weather Patterns on the Spatio-Temporal Distribution of SO2 over East Asia as Seen from Satellite Measurements

    NASA Astrophysics Data System (ADS)

    Dunlap, L.; Li, C.; Dickerson, R. R.; Krotkov, N. A.

    2015-12-01

    Weather systems, particularly mid-latitude wave cyclones, have been known to play an important role in the short-term variation of near-surface air pollution. Ground measurements and model simulations have demonstrated that stagnant air and minimal precipitation associated with high pressure systems are conducive to pollutant accumulation. With the passage of a cold front, built-up pollution is transported downwind of the emission sources or washed out by precipitation. This concept is important to note when studying long-term changes in the spatio-temporal pollution distribution, but has not been studied in detail from space. In this study, we focus on East Asia (especially industrialized eastern China), where numerous large power plants and other point sources as well as area sources emit large amounts of SO2, an important gaseous pollutant and a precursor of aerosols. Using data from the Aura Ozone Monitoring Instrument (OMI) we show that such a weather-driven distribution can indeed be discerned from satellite data by utilizing probability distribution functions (PDFs) of SO2 column content. These PDFs are multimodal and give insight into the background pollution level at a given location and the contribution from local and upwind emission sources. From these PDFs it is possible to determine the frequency with which a given region has SO2 loading that exceeds the background amount. By comparing the OMI-observed long-term change in this frequency with meteorological data, we can gain insights into the effects of climate change (e.g., the weakening of the Asian monsoon) on regional air quality. Such insight allows for better interpretation of satellite measurements as well as better prediction of future pollution distribution as a changing climate gives way to changing weather patterns.
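The exceedance frequency described above, i.e. how often the SO2 loading at a location exceeds its background level, can be sketched directly from column-amount samples. The following minimal illustration uses a quantile of the distribution as the background and a simple threshold factor; both choices are assumptions for illustration, not the paper's definitions:

```python
import numpy as np

def exceedance_frequency(columns, background_quantile=0.5, threshold_factor=2.0):
    """Fraction of observations whose SO2 column exceeds a multiple of the
    local background, taken here as a quantile (default: median) of the
    observed column-amount distribution."""
    columns = np.asarray(columns, dtype=float)
    background = np.quantile(columns, background_quantile)
    return float(np.mean(columns > threshold_factor * background))
```

A multimodal PDF would show up here as a background mode plus one or more pollution modes; the exceedance frequency summarizes how much probability mass lies in the polluted modes.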

  8. Probabilistic Models For Earthquakes With Large Return Periods In Himalaya Region

    NASA Astrophysics Data System (ADS)

    Chaudhary, Chhavi; Sharma, Mukat Lal

    2017-12-01

    Determination of the frequency of large earthquakes is of paramount importance for seismic risk assessment, as large events contribute a significant fraction of the total deformation, and these long-return-period events with low probability of occurrence are not easily captured by classical distributions. Generally, with a small catalogue, these larger events follow a different distribution function from the smaller and intermediate events. It is thus of special importance to use statistical methods that analyse as closely as possible the range of extreme values, i.e. the tail of the distribution, in addition to the main distribution. The generalised Pareto distribution family is widely used for modelling events that cross a specified threshold value; the Pareto, Truncated Pareto and Tapered Pareto are special cases of this family. In this work, the probability of earthquake occurrence has been estimated using the Pareto, Truncated Pareto and Tapered Pareto distributions. As a case study, the Himalaya, whose ongoing orogeny generates large earthquakes and which is one of the most seismically active zones of the world, has been considered. The whole Himalayan region has been divided into five seismic source zones according to seismotectonics and the clustering of events. Estimated probabilities of earthquake occurrence have also been compared with the modified Gutenberg-Richter distribution and the characteristic recurrence distribution. The statistical analysis reveals that the Tapered Pareto distribution describes seismicity in the seismic source zones better than the other distributions considered in the present study.
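As a hedged illustration of the tapered Pareto model favoured by the study, its survival function is a power law with exponent beta rolled off exponentially beyond a corner value. The sketch below uses the common seismic-moment parameterization; the symbols are generic and not taken from the paper:

```python
import math

def tapered_pareto_sf(x, x_min, beta, corner):
    """Survival function P(X > x) of the tapered Pareto distribution:
    a Pareto power law (x_min/x)**beta multiplied by an exponential
    taper exp((x_min - x)/corner) that suppresses the extreme tail."""
    if x < x_min:
        return 1.0
    return (x_min / x) ** beta * math.exp((x_min - x) / corner)
```

Because of the taper, this survival function lies strictly below the pure Pareto tail for x above the threshold, which is why it can represent the deficit of very large, long-return-period events that a pure power law overpredicts.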

  9. Shock-drift particle acceleration in superluminal shocks - A model for hot spots in extragalactic radio sources

    NASA Technical Reports Server (NTRS)

    Begelman, Mitchell C.; Kirk, John G.

    1990-01-01

    Shock-drift acceleration at relativistic shock fronts is investigated using a fully relativistic treatment of both the microphysics of the shock-drift acceleration and the macrophysics of the shock front. By explicitly tracing particle trajectories across shocks, it is shown how the adiabatic invariance of a particle's magnetic moment breaks down as the upstream shock speed becomes relativistic, and is recovered at subrelativistic velocities. These calculations enable the mean increase in energy of a particle which encounters the shock with a given pitch angle to be calculated. The results are used to construct the downstream electron distribution function in terms of the incident distribution function and the bulk properties of the shock. The synchrotron emissivity of the transmitted distribution is calculated, and it is demonstrated that amplification factors are easily obtained which are more than adequate to explain the observed contrasts in surface brightness between jets and hot spots.

  10. A new method of optimal capacitor switching based on minimum spanning tree theory in distribution systems

    NASA Astrophysics Data System (ADS)

    Li, H. W.; Pan, Z. Y.; Ren, Y. B.; Wang, J.; Gan, Y. L.; Zheng, Z. Z.; Wang, W.

    2018-03-01

    Based on the radial operation characteristics of distribution systems, this paper proposes a new method for optimal capacitor switching based on minimum spanning trees. First, taking minimal active power loss as the objective function and ignoring the capacity constraints of the capacitors and the source, Prim's minimum-spanning-tree algorithm is used to determine the power supply ranges of the capacitors and the source. Then, with the capacity constraints of the capacitors taken into account, the capacitors are ranked by breadth-first search. In descending order of this ranking, the compensation capacity of each capacitor is calculated from its power supply range. Finally, the IEEE 69-bus system is used to test the accuracy and practicality of the proposed algorithm.
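The first step of the proposed method relies on Prim's minimum-spanning-tree algorithm. Below is a generic, self-contained sketch of Prim's algorithm, not the paper's power-flow formulation; treating edge weights as a stand-in for line losses is an assumption made here for illustration:

```python
import heapq

def prim_mst(n_nodes, edges, root=0):
    """Prim's algorithm: grow a minimum spanning tree outward from `root`.
    `edges` is a list of undirected (u, v, weight) tuples; returns the
    chosen tree edges as (u, v, weight)."""
    adj = [[] for _ in range(n_nodes)]
    for u, v, w in edges:
        adj[u].append((w, v, u))
        adj[v].append((w, u, v))
    in_tree = [False] * n_nodes
    in_tree[root] = True
    heap = list(adj[root])
    heapq.heapify(heap)
    tree = []
    while heap and len(tree) < n_nodes - 1:
        w, v, u = heapq.heappop(heap)  # cheapest edge leaving the tree
        if in_tree[v]:
            continue
        in_tree[v] = True
        tree.append((u, v, w))
        for edge in adj[v]:
            if not in_tree[edge[1]]:
                heapq.heappush(heap, edge)
    return tree
```

The resulting tree is radial by construction, which matches the radial operating constraint of distribution feeders and yields a natural partition into supply ranges.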

  11. Evaluation of substitution monopole models for tire noise sound synthesis

    NASA Astrophysics Data System (ADS)

    Berckmans, D.; Kindt, P.; Sas, P.; Desmet, W.

    2010-01-01

    Due to the considerable efforts in engine noise reduction, tire noise has become one of the major sources of passenger car noise nowadays and the demand for accurate prediction models is high. A rolling tire is therefore experimentally characterized by means of the substitution monopole technique, suiting a general sound synthesis approach with a focus on perceived sound quality. The running tire is substituted by a monopole distribution covering the static tire. All monopoles have mutual phase relationships and a well-defined volume velocity distribution which is derived by means of the airborne source quantification technique; i.e. by combining static transfer function measurements with operating indicator pressure measurements close to the rolling tire. Models with varying numbers/locations of monopoles are discussed and the application of different regularization techniques is evaluated.
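The airborne source quantification step, recovering monopole volume velocities from measured transfer functions and operating pressures, is an inverse problem that is commonly stabilized with regularization. The abstract does not state which regularization techniques were evaluated; the sketch below shows one standard choice, Tikhonov-regularized least squares, with generic names:

```python
import numpy as np

def tikhonov_source_strengths(H, p, lam):
    """Estimate monopole volume velocities q from measured pressures p,
    given a transfer-function matrix H (model: p ~ H q), by solving the
    Tikhonov-regularized normal equations (H^H H + lam I) q = H^H p."""
    HH = H.conj().T
    n = H.shape[1]
    return np.linalg.solve(HH @ H + lam * np.eye(n), HH @ p)
```

The regularization parameter lam trades off fit against solution norm; too small a value amplifies measurement noise, too large a value smears the reconstructed volume velocity distribution over the monopoles.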

  12. Smart grid technologies in local electric grids

    NASA Astrophysics Data System (ADS)

    Lezhniuk, Petro D.; Pijarski, Paweł; Buslavets, Olga A.

    2017-08-01

    The research is devoted to creating favourable conditions for integrating renewable energy sources into electric grids that were designed to be supplied from centralized generation at large electric power stations. The development of distributed generation in electric grids changes their operating conditions, and a conflict of interests arises. The possibility of optimal joint operation of electric grids and renewable energy sources is demonstrated, where the complex optimality criterion combines the balance reliability of electric energy in the local electric system and minimum losses of electric energy in it. A multilevel automated system for controlling power flows in electric grids by changing the output of distributed generation is developed. Power flows are optimized by the local automatic control systems of small hydropower stations and, where possible, solar power plants.

  13. Development of surrogate models for the prediction of the flow around an aircraft propeller

    NASA Astrophysics Data System (ADS)

    Salpigidou, Christina; Misirlis, Dimitris; Vlahostergios, Zinon; Yakinthos, Kyros

    2018-05-01

    In the present work, the derivation of two surrogate models (SMs) for modelling the flow around a propeller for small aircraft is presented. Both methodologies use derived functions based on computations with the detailed propeller geometry. The computations were performed using the k-ω shear stress transport model for turbulence. In the SMs, the propeller was modelled in a computational domain of disk-like geometry, where source terms were introduced in the momentum equations. In the first SM, the source terms were polynomial functions of swirl and thrust, mainly related to the propeller radius. In the second SM, regression analysis was used to correlate the source terms with the velocity distribution through the propeller. The proposed SMs achieved faster convergence than the detailed model, while also providing results closer to the available operational data. The regression-based model was the most accurate and required less computational time for convergence.
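The second surrogate model correlates momentum source terms with the local velocity through the propeller disk. A hedged sketch of such a regression-based surrogate, using a simple polynomial fit (the degree and functional form are illustrative assumptions, not taken from the paper), is:

```python
import numpy as np

def fit_source_term_surrogate(velocity, source_term, degree=2):
    """Fit a polynomial surrogate S(v) relating a momentum source term to the
    local velocity v through the propeller disk; returns a callable model."""
    coeffs = np.polyfit(velocity, source_term, degree)
    return np.poly1d(coeffs)
```

Once fitted against the detailed-geometry computations, the surrogate replaces the blade-resolved propeller in the momentum equations, which is what makes the disk model so much cheaper to converge.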

  14. Pressure dependence of an ion beam accelerating structure in an expanding helicon plasma

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao; Aguirre, Evan; Thompson, Derek S.; McKee, John; Henriquez, Miguel; Scime, Earl E.

    2018-02-01

    We present measurements of the parallel ion velocity distribution function and electric field in an expanding helicon source plasma plume as a function of downstream gas pressure and radial and axial positions. The ion beam that appears spontaneously in the plume persists for all downstream pressures investigated, with the largest parallel ion beam velocities obtained for the lowest downstream pressures. However, the change in ion beam velocity exceeds what would be expected simply for a change in the collisionality of the system. Electric field measurements confirm that it is the magnitude of the potential structure responsible for accelerating the ion beam that changes with downstream pressure. Interestingly, the ion density radial profile is hollow close to the end of the plasma source for all pressures, but it is hollow at downstream distances far from the source only at the highest downstream neutral pressures.

  15. Study of the thermal distribution in vocal cords irradiated by an optical source for the treatment of voice disabilities

    NASA Astrophysics Data System (ADS)

    Arce-Diego, José L.; Fanjul-Vélez, Félix; Borragán-Torre, Alfonso

    2006-02-01

    Vocal cord disorders constitute an important problem for the people suffering from them. In particular, the reduction of mucosal wave movement is not adequately treated by conventional therapies such as drug administration or surgery. In this work, an alternative therapy, consisting of controlled temperature increases induced by optical sources, is proposed. The distribution of heat inside the vocal cords when an optical source illuminates them is studied. Optical and thermal properties of the tissue are discussed as a basis for an appropriate understanding of its behaviour. The propagation of light is modelled using Radiation Transfer Theory (RTT) and a numerical Monte Carlo model. A thermal transfer model that uses the results of the propagation of radiation determines the distribution of temperature in the tissue. Two widely used lasers are considered, Nd:YAG (1064 nm) and KTP (532 nm). Adequate radiation doses, and the resulting temperature rises, must be chosen so as to avoid damage to the vocal cords and thus ensure an improvement in the vocal functions of the patient. The temperature limits should be assessed with a combined temperature-time (Arrhenius) analysis.
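The combined temperature-time damage assessment mentioned above is usually expressed as the Arrhenius damage integral. A minimal sketch follows; the default frequency factor and activation energy are typical soft-tissue values from the thermal-damage literature and are assumptions made here for illustration, not values from the abstract:

```python
import math

def arrhenius_damage(temps_kelvin, dt, A=3.1e98, Ea=6.28e5, R=8.314):
    """Arrhenius thermal damage integral: Omega = sum A*exp(-Ea/(R*T))*dt
    over a temperature history sampled every dt seconds. Omega >= 1 is
    commonly taken as the threshold of irreversible thermal damage.
    A (1/s) and Ea (J/mol) are assumed generic soft-tissue values."""
    return sum(A * math.exp(-Ea / (R * T)) * dt for T in temps_kelvin)
```

Because the integrand grows exponentially with temperature, a modest temperature rise sustained for a long time can do as much damage as a brief, hotter exposure, which is exactly why a combined temperature-time criterion is needed rather than a single temperature limit.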

  16. Location error uncertainties - an advanced using of probabilistic inverse theory

    NASA Astrophysics Data System (ADS)

    Debski, Wojciech

    2016-04-01

    The spatial location of sources of seismic waves is one of the first tasks when transient waves from natural (uncontrolled) sources are analyzed in many branches of physics, including seismology and oceanology, to name a few. Source activity and its spatial variability in time, the geometry of the recording network, and the complexity and heterogeneity of the wave velocity distribution are all factors influencing the performance of location algorithms and the accuracy of the achieved results. While estimating the location of an earthquake focus is relatively simple, quantitatively estimating the location accuracy is a challenging task even if a probabilistic inverse method is used, because it requires knowledge of the statistics of observational, modelling and a priori uncertainties. In this presentation we address this task when the statistics of observational and/or modelling errors are unknown. This common situation requires the introduction of a priori constraints on the likelihood (misfit) function, which significantly influence the estimated errors. Based on the results of an analysis of 120 seismic events from the Rudna copper mine operating in southwestern Poland, we illustrate an approach based on an analysis of Shannon's entropy calculated for the a posteriori distribution. We show that this meta-characteristic of the a posteriori distribution carries some information on the uncertainties of the solution found.
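Shannon's entropy of the a posteriori distribution, used above as a meta-characteristic of location uncertainty, can be sketched for a gridded posterior as follows (the grid representation and normalization are illustrative assumptions; broader, more uncertain posteriors give larger entropy):

```python
import numpy as np

def posterior_entropy(pdf, cell_volume=1.0):
    """Shannon entropy (in nats) of a gridded a posteriori PDF given as
    unnormalized density values on cells of equal volume."""
    p = np.asarray(pdf, dtype=float)
    p = p / (p.sum() * cell_volume)  # normalize to a proper density
    mass = p * cell_volume           # probability mass per cell
    nz = mass > 0
    return float(-np.sum(mass[nz] * np.log(mass[nz])))
```

A posterior concentrated in a single cell gives entropy 0, while a posterior spread uniformly over N cells gives log(N), so the entropy directly quantifies how diffuse the location solution is.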

  17. A Wavelet-Based Algorithm for the Spatial Analysis of Poisson Data

    NASA Astrophysics Data System (ADS)

    Freeman, P. E.; Kashyap, V.; Rosner, R.; Lamb, D. Q.

    2002-01-01

    Wavelets are scalable, oscillatory functions that deviate from zero only within a limited spatial regime and have average value zero, and thus may be used to simultaneously characterize the shape, location, and strength of astronomical sources. But in addition to their use as source characterizers, wavelet functions are rapidly gaining currency within the source detection field. Wavelet-based source detection involves the correlation of scaled wavelet functions with binned, two-dimensional image data. If the chosen wavelet function exhibits the property of vanishing moments, significantly nonzero correlation coefficients will be observed only where there are high-order variations in the data; e.g., they will be observed in the vicinity of sources. Source pixels are identified by comparing each correlation coefficient with its probability sampling distribution, which is a function of the (estimated or a priori known) background amplitude. In this paper, we describe the mission-independent, wavelet-based source detection algorithm WAVDETECT, part of the freely available Chandra Interactive Analysis of Observations (CIAO) software package. Our algorithm uses the Marr, or "Mexican Hat," wavelet function, but may be adapted for use with other wavelet functions. Aspects of our algorithm include: (1) the computation of local, exposure-corrected normalized (i.e., flat-fielded) background maps; (2) the correction for exposure variations within the field of view (due to, e.g., telescope support ribs or the edge of the field); (3) its applicability within the low-counts regime, as it does not require a minimum number of background counts per pixel for the accurate computation of source detection thresholds; (4) the generation of a source list in a manner that does not depend upon a detailed knowledge of the point spread function (PSF) shape; and (5) error analysis. 
These features make our algorithm considerably more general than previous methods developed for the analysis of X-ray image data, especially in the low-counts regime. We demonstrate the robustness of WAVDETECT by applying it to an image from an idealized detector with a spatially invariant Gaussian PSF and an exposure map similar to that of the Einstein IPC; to Pleiades Cluster data collected by the ROSAT PSPC; and to a simulated Chandra ACIS-I image of the Lockman Hole region.
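The correlation of a Marr ("Mexican Hat") wavelet with binned image data, as used by WAVDETECT, can be sketched in a few lines. This is a toy illustration, not the CIAO implementation; the kernel size and scale are arbitrary choices, and the kernel's zero mean is what makes a flat background produce near-zero coefficients:

```python
import numpy as np

def mexican_hat_2d(size, scale):
    """2-D Marr ('Mexican hat') wavelet kernel on a size x size grid; the
    mean is subtracted so the zeroth (vanishing) moment holds numerically."""
    ax = np.arange(size) - (size - 1) / 2.0
    x, y = np.meshgrid(ax, ax)
    r2 = (x**2 + y**2) / scale**2
    k = (2.0 - r2) * np.exp(-r2 / 2.0)
    return k - k.mean()

def correlate_image(image, kernel):
    """Brute-force valid-mode correlation of a binned image with the kernel;
    large coefficients flag pixels in the vicinity of sources."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out
```

In a real detector pipeline each coefficient would then be compared against its sampling distribution given the local background, with the wavelet scale swept over a range of source sizes.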

  18. Functional morphology of the sound-generating labia in the syrinx of two songbird species.

    PubMed

    Riede, Tobias; Goller, Franz

    2010-01-01

    In songbirds, two sound sources inside the syrinx are used to produce the primary sound. Laterally positioned labia are passively set into vibration, thus interrupting a passing air stream. Together with subsyringeal pressure, the size and tension of the labia determine the spectral characteristics of the primary sound. Very little is known about how the histological composition and morphology of the labia affect their function as sound generators. Here we related the size and microstructure of the labia to their acoustic function in two songbird species with different acoustic characteristics, the white-crowned sparrow and zebra finch. Histological serial sections of the syrinx and different staining techniques were used to identify collagen, elastin and hyaluronan as extracellular matrix components. The distribution and orientation of elastic fibers indicated that the labia in white-crowned sparrows are multi-layered structures, whereas they are more uniformly structured in the zebra finch. Collagen and hyaluronan were evenly distributed in both species. A multi-layered composition could give rise to complex viscoelastic properties of each sound source. We also measured labia size. Variability was found along the dorso-ventral axis in both species. Lateral asymmetry was identified in some individuals but not consistently at the species level. Different size between the left and right sound sources could provide a morphological basis for the acoustic specialization of each sound generator, but only in some individuals. The inconsistency of its presence requires the investigation of alternative explanations, e.g. differences in viscoelastic properties of the labia of the left and right syrinx. Furthermore, we identified attachments of syringeal muscles to the labia as well as to bronchial half rings and suggest a mechanism for their biomechanical function.

  19. Functional morphology of the sound-generating labia in the syrinx of two songbird species

    PubMed Central

    Riede, Tobias; Goller, Franz

    2010-01-01

    In songbirds, two sound sources inside the syrinx are used to produce the primary sound. Laterally positioned labia are passively set into vibration, thus interrupting a passing air stream. Together with subsyringeal pressure, the size and tension of the labia determine the spectral characteristics of the primary sound. Very little is known about how the histological composition and morphology of the labia affect their function as sound generators. Here we related the size and microstructure of the labia to their acoustic function in two songbird species with different acoustic characteristics, the white-crowned sparrow and zebra finch. Histological serial sections of the syrinx and different staining techniques were used to identify collagen, elastin and hyaluronan as extracellular matrix components. The distribution and orientation of elastic fibers indicated that the labia in white-crowned sparrows are multi-layered structures, whereas they are more uniformly structured in the zebra finch. Collagen and hyaluronan were evenly distributed in both species. A multi-layered composition could give rise to complex viscoelastic properties of each sound source. We also measured labia size. Variability was found along the dorso-ventral axis in both species. Lateral asymmetry was identified in some individuals but not consistently at the species level. Different size between the left and right sound sources could provide a morphological basis for the acoustic specialization of each sound generator, but only in some individuals. The inconsistency of its presence requires the investigation of alternative explanations, e.g. differences in viscoelastic properties of the labia of the left and right syrinx. Furthermore, we identified attachments of syringeal muscles to the labia as well as to bronchial half rings and suggest a mechanism for their biomechanical function. PMID:19900184

  20. Sea-State Dependence of Aerosol Concentration in the Marine Atmospheric Boundary Layer

    NASA Astrophysics Data System (ADS)

    Lenain, L.; Melville, W. K.

    2016-02-01

    While sea spray aerosols represent a large portion of the aerosols present in the marine environment, and despite evidence of the importance of surface wave and wave-breaking related processes in the coupling of the ocean with the atmosphere, sea spray source generation functions are traditionally parameterized by the wind speed at 10 m. It is clear that unless the wind and wave field are fully developed, the source function will be a function of both wind and wave parameters. In this study, we report on an air-sea interaction experiment, the ONR phase-resolved High-Resolution Air-Sea Interaction experiments (HIRES), conducted off the coast of Northern California in June 2010. Detailed measurements of aerosol number concentration in the Marine Atmospheric Boundary Layer (MABL), at altitudes ranging from as low as 30 m up to 800 m AMSL and over a broad range of environmental conditions (significant wave height, Hs, of 2 to 4.5 m and wind speed at 10 m height, U10, of 10 to 18 m/s), collected from an instrumented research aircraft, are presented. Aerosol number densities and volumes are computed over a range of particle diameters from 0.1 to 200 µm, while the surface conditions, i.e. significant wave height, moments of the breaker length distribution Λ(c), and wave-breaking dissipation, were measured by a suite of electro-optical sensors that included the NASA Airborne Topographic Mapper (ATM). The sea-state dependence of the aerosol concentration in the MABL is evident, ultimately stressing the need to incorporate wave parameters and wave kinematics in the spray source generation functions that are traditionally parameterized primarily by surface winds. A scaling of the measured aerosol volume distribution by wave and atmospheric state variables is proposed.

  1. Passive detection and localization of fatigue cracking in aluminum plates using Green's function reconstruction from ambient noise.

    PubMed

    Yang, Yang; Xiao, Li; Qu, Wenzhong; Lu, Ye

    2017-11-01

    Recent theoretical and experimental studies have demonstrated that a local Green's function can be retrieved from the cross-correlation of an ambient noise field. This technique can be used to detect fatigue cracking in metallic structures, because the presence of a crack changes the Green's function. This paper presents a method for characterizing structural fatigue cracking by measuring the Green's function reconstructed from noise excitation, and verifies the feasibility of crack detection under a poor noise source distribution. Fatigue cracks usually generate nonlinear effects, in which different wave amplitudes and frequency compositions cause different nonlinear responses. This study also analyzes the capacity of the proposed approach to identify fatigue cracking under different noise amplitudes and frequency ranges. Experimental investigations of an aluminum plate are conducted to assess the cross-correlations of received noise between sensor pairs and finally to detect the introduced fatigue crack. A damage index is proposed based on the variation between cross-correlations obtained from the pristine crack-closed state and the crack opening-closure state when sufficient noise amplitude is used to generate nonlinearity. A probability distribution map of damage is calculated from the damage indices. The fatigue crack introduced in the aluminum plate is successfully identified and located, verifying that a fatigue crack can be detected by reconstructing Green's functions from an imperfect diffuse field in which ambient noise sources exist only locally. Copyright © 2017 Elsevier B.V. All rights reserved.
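
    As a rough illustration of the underlying technique (not the paper's experimental processing), the sketch below cross-correlates synthetic ambient noise recorded at two sensors to retrieve the travel-time information of an empirical Green's function, then forms a correlation-based damage index between a baseline and a perturbed state. The single delayed noise path and the delay change standing in for the crack are assumptions of this toy model:

```python
import numpy as np

def cross_correlate(a, b):
    """Circular cross-correlation of two equal-length records via FFT."""
    n = len(a)
    return np.fft.irfft(np.fft.rfft(a) * np.conj(np.fft.rfft(b)), n)

def noise_cc(delay, n=4096, windows=32, seed=None):
    """Surrogate diffuse field: one ambient noise source whose wave
    reaches sensor B `delay` samples later than sensor A.  Stacking
    the cross-correlation over windows suppresses fluctuations and
    leaves a peak at the inter-sensor travel-time difference, which
    is the Green's-function information being retrieved."""
    rng = np.random.default_rng(seed)
    cc = np.zeros(n)
    for _ in range(windows):
        src = rng.standard_normal(n + delay)
        a, b = src[delay:], src[:n]
        cc += cross_correlate(a, b)
    return cc / windows

# Baseline (pristine) state measured twice, then a "cracked" state in
# which the crack perturbs the travel time between the sensors.
cc_base1 = noise_cc(30, seed=1)
cc_base2 = noise_cc(30, seed=2)
cc_crack = noise_cc(34, seed=3)

def damage_index(ref, cur):
    """1 minus the correlation coefficient between reference and
    current cross-correlations: small when the medium is unchanged."""
    return 1.0 - abs(np.corrcoef(ref, cur)[0, 1])

di_healthy = damage_index(cc_base1, cc_base2)
di_cracked = damage_index(cc_base1, cc_crack)
```

    A per-sensor-pair damage index of this kind is what the paper's probability distribution map is assembled from.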

  2. Statistics of natural binaural sounds.

    PubMed

    Młynarski, Wiktor; Jost, Jürgen

    2014-01-01

    Binaural sound localization is usually considered a discrimination task, where interaural phase (IPD) and level (ILD) disparities at narrowly tuned frequency channels are used to identify the position of a sound source. In natural conditions, however, binaural circuits are exposed to stimulation by sound waves originating from multiple, often moving and overlapping sources. The statistics of binaural cues therefore depend on the acoustic properties and spatial configuration of the environment. The distributions of naturally encountered cues and their dependence on the physical properties of an auditory scene have not been studied before. In the present work we analyzed the statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed the empirical cue distributions from each scene. We found that certain properties, such as the spread of IPD distributions and the overall shape of ILD distributions, do not vary strongly between different auditory scenes. Moreover, we found that ILD distributions vary much less across frequency channels, and IPDs often attain much higher values, than predicted from head filtering properties. To understand the complexity of the binaural hearing task in the natural environment, sound waveforms were analyzed by performing Independent Component Analysis (ICA). Properties of the learned basis functions indicate that in natural conditions sound waves in each ear are predominantly generated by independent sources. This implies that real-world sound localization must rely on mechanisms more complex than mere cue extraction.
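
    The two binaural cues analyzed here can be computed per frequency channel from a stereo signal. A minimal sketch, assuming a single tone at an exact DFT bin; the 3 dB level offset and quarter-cycle phase lead of the left ear are arbitrary illustrative values, not measured head-filtering data:

```python
import numpy as np

fs = 16000                    # sample rate, Hz
t = np.arange(2048) / fs

# Hypothetical binaural snippet: a 500 Hz tone reaching the left ear
# 3 dB louder and a quarter cycle earlier than the right ear.
left = 10**(3/20) * np.sin(2*np.pi*500*t + np.pi/2)
right = np.sin(2*np.pi*500*t)

# Narrow-band cues taken from the DFT bin at the tone frequency
# (500 Hz falls exactly on bin k = 64 for this fs and length).
w = np.hanning(len(t))
spec_l = np.fft.rfft(left * w)
spec_r = np.fft.rfft(right * w)
k = int(round(500 * len(t) / fs))

ild = 20 * np.log10(abs(spec_l[k]) / abs(spec_r[k]))   # level cue, dB
ipd = np.angle(spec_l[k] * np.conj(spec_r[k]))         # phase cue, rad
```

    Collecting `ild` and `ipd` over many time windows and frequency channels yields exactly the empirical cue distributions the study characterizes.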

  3. Statistics of Natural Binaural Sounds

    PubMed Central

    Młynarski, Wiktor; Jost, Jürgen

    2014-01-01

    Binaural sound localization is usually considered a discrimination task, where interaural phase (IPD) and level (ILD) disparities at narrowly tuned frequency channels are used to identify the position of a sound source. In natural conditions, however, binaural circuits are exposed to stimulation by sound waves originating from multiple, often moving and overlapping sources. The statistics of binaural cues therefore depend on the acoustic properties and spatial configuration of the environment. The distributions of naturally encountered cues and their dependence on the physical properties of an auditory scene have not been studied before. In the present work we analyzed the statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed the empirical cue distributions from each scene. We found that certain properties, such as the spread of IPD distributions and the overall shape of ILD distributions, do not vary strongly between different auditory scenes. Moreover, we found that ILD distributions vary much less across frequency channels, and IPDs often attain much higher values, than predicted from head filtering properties. To understand the complexity of the binaural hearing task in the natural environment, sound waveforms were analyzed by performing Independent Component Analysis (ICA). Properties of the learned basis functions indicate that in natural conditions sound waves in each ear are predominantly generated by independent sources. This implies that real-world sound localization must rely on mechanisms more complex than mere cue extraction. PMID:25285658

  4. Conversion and Validation of Distribution System Model from a QSTS-Based Tool to a Real-Time Dynamic Phasor Simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chamana, Manohar; Prabakar, Kumaraguru; Palmintier, Bryan

    A software process is developed to convert distribution network models from a quasi-static time-series tool (OpenDSS) to a real-time dynamic phasor simulator (ePHASORSIM). The description of this process in this paper would be helpful for researchers who intend to perform similar conversions. The converter could be utilized directly by users of real-time simulators who intend to perform software-in-the-loop or hardware-in-the-loop tests on large distribution test feeders for a range of use cases, including testing functions of advanced distribution management systems against a simulated distribution system. In the future, the developers intend to release the conversion tool as open source to enable use by others.

  5. Conversion and Validation of Distribution System Model from a QSTS-Based Tool to a Real-Time Dynamic Phasor Simulator: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chamana, Manohar; Prabakar, Kumaraguru; Palmintier, Bryan

    A software process is developed to convert distribution network models from a quasi-static time-series tool (OpenDSS) to a real-time dynamic phasor simulator (ePHASORSIM). The description of this process in this paper would be helpful for researchers who intend to perform similar conversions. The converter could be utilized directly by users of real-time simulators who intend to perform software-in-the-loop or hardware-in-the-loop tests on large distribution test feeders for a range of use cases, including testing functions of advanced distribution management systems against a simulated distribution system. In the future, the developers intend to release the conversion tool as open source to enable use by others.

  6. Waveform inversion of volcano-seismic signals for an extended source

    USGS Publications Warehouse

    Nakano, M.; Kumagai, H.; Chouet, B.; Dawson, P.

    2007-01-01

    We propose a method to investigate the dimensions and oscillation characteristics of the source of volcano-seismic signals based on waveform inversion for an extended source. An extended source is realized by a set of point sources distributed on a grid surrounding the centroid of the source in accordance with the source geometry and orientation. The source-time functions for all point sources are estimated simultaneously by waveform inversion carried out in the frequency domain. We apply a smoothing constraint to suppress short-scale noisy fluctuations of source-time functions between adjacent sources. The strength of the smoothing constraint we select is that which minimizes the Akaike Bayesian Information Criterion (ABIC). We perform a series of numerical tests to investigate the capability of our method to recover the dimensions of the source and reconstruct its oscillation characteristics. First, we use synthesized waveforms radiated by a kinematic source model that mimics the radiation from an oscillating crack. Our results demonstrate almost complete recovery of the input source dimensions and source-time function of each point source, but also point to a weaker resolution of the higher modes of crack oscillation. Second, we use synthetic waveforms generated by the acoustic resonance of a fluid-filled crack, and consider two sets of waveforms dominated by the modes with wavelengths 2L/3 and 2W/3, or L and 2L/5, where W and L are the crack width and length, respectively. Results from these tests indicate that the oscillating signatures of the 2L/3 and 2W/3 modes are successfully reconstructed. The oscillating signature of the L mode is also well recovered, in contrast to results obtained for a point source for which the moment tensor description is inadequate. However, the oscillating signature of the 2L/5 mode is poorly recovered owing to weaker resolution of short-scale crack wall motions. The triggering excitations of the oscillating cracks are successfully reconstructed. Copyright 2007 by the American Geophysical Union.
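
    The inversion step described above, least squares at each frequency with a smoothing constraint between adjacent point sources, can be sketched as a damped normal-equations solve. The toy Green's functions, source grid, and fixed damping weight below are assumptions for illustration (the paper selects the weight by minimizing ABIC):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy single-frequency problem: 12 grid point sources, 20 receivers.
n_src, n_rec = 12, 20
G = rng.standard_normal((n_rec, n_src)) + 1j * rng.standard_normal((n_rec, n_src))

# "True" source coefficients vary smoothly along the source grid.
m_true = np.exp(-0.5 * ((np.arange(n_src) - 5.5) / 2.0) ** 2) + 0j
d = G @ m_true + 0.05 * (rng.standard_normal(n_rec) + 1j * rng.standard_normal(n_rec))

# Second-difference operator: penalizes short-scale fluctuations of
# the source-time functions between adjacent point sources.
L = np.diff(np.eye(n_src), n=2, axis=0)

def invert(lam):
    """Damped least squares at one frequency:
    minimize ||G m - d||^2 + lam * ||L m||^2."""
    A = G.conj().T @ G + lam * L.T @ L
    return np.linalg.solve(A, G.conj().T @ d)

m_rough = invert(0.0)    # unconstrained
m_smooth = invert(1.0)   # with smoothing constraint

def roughness(m):
    return np.linalg.norm(L @ m)
```

    The regularized solution trades a slightly larger data misfit for a smoother source distribution, which is the role of the constraint in the paper.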

  7. Pacific Yew: A Facultative Riparian Conifer with an Uncertain Future

    Treesearch

    Stanley Scher; Bert Schwarzschild

    1989-01-01

    Increasing demands for Pacific yew bark, a source of an anticancer agent, have generated interest in defining the yew resource and in exploring strategies to conserve this species. The distribution, riparian requirements and ecosystem functions of yew populations in coastal and inland forests of northern California are outlined and alternative approaches to conserving...

  8. The impact of circulation control on rotary aircraft controls systems

    NASA Technical Reports Server (NTRS)

    Kingloff, R. F.; Cooper, D. E.

    1987-01-01

    Application of circulation control to rotary wing systems is a new development. Efforts to determine the near and far field flow patterns and to analytically predict those flow patterns have been underway for some years. Rotary wing applications present a new set of challenges in circulation control technology. Rotary wing sections must accommodate substantial Mach number, free stream dynamic pressure and section angle of attack variation at each flight condition within the design envelope. They must also be capable of short term circulation blowing modulation to produce control moments and vibration alleviation in addition to a lift augmentation function. Control system design must provide this primary control moment, vibration alleviation and lift augmentation function. To accomplish this, one must simultaneously control the compressed air source and its distribution. The control law algorithm must therefore address the compressor as the air source, the plenum as the air pressure storage and the pneumatic flow gates or valves that distribute and meter the stored pressure to the rotating blades. Also, mechanical collective blade pitch, rotor shaft angle of attack and engine power control must be maintained.

  9. Fabrication and properties of multilayer structures

    NASA Astrophysics Data System (ADS)

    Tiller, W. A.

    1983-09-01

    The synthesis of SiC films and Pd2Si films via single source and dual source sputtering, respectively, has been experimentally investigated while the reactive sputter deposition of SiO_x films has been theoretically analyzed. The SiO_x film data requires a mobile precursor adsorption process to be operative for the oxygen and an oxygen sticking coefficient of between 1.56 x 10^-3 and 4.17 x 10^-3. An analysis of in-situ electrical diagnostics of the films via a non-contact technique shows the method to be of marginal accuracy for the example selected. An important new formulation of the stress and elastic constant tensors in the vicinity of interfaces has been developed and applied to the simple example of adsorbed layer/substrate interactions via a parametric analysis. Atomic modeling of the SiO system yields peroxide bond formation for oxygen-rich (100) alpha-cristobalite surfaces. Radial distribution function and angular distribution function data have been calculated for bulk alpha-quartz and bulk alpha-cristobalite in good agreement with experiment.

  10. Improvement and comparison of likelihood functions for model calibration and parameter uncertainty analysis within a Markov chain Monte Carlo scheme

    NASA Astrophysics Data System (ADS)

    Cheng, Qin-Bo; Chen, Xi; Xu, Chong-Yu; Reinhardt-Imjela, Christian; Schulte, Achim

    2014-11-01

    In this study, the likelihood functions for uncertainty analysis of hydrological models are compared and improved through the following steps: (1) the equivalent relationship between the Nash-Sutcliffe Efficiency coefficient (NSE) and the likelihood function with Gaussian independent and identically distributed residuals is proved; (2) a new estimation method of the Box-Cox transformation (BC) parameter is developed to improve the effective elimination of the heteroscedasticity of model residuals; and (3) three likelihood functions-NSE, Generalized Error Distribution with BC (BC-GED) and Skew Generalized Error Distribution with BC (BC-SGED)-are applied for SWAT-WB-VSA (Soil and Water Assessment Tool - Water Balance - Variable Source Area) model calibration in the Baocun watershed, Eastern China. Performances of the calibrated models are compared using the observed river discharges and groundwater levels. The results show that the minimum variance constraint can effectively estimate the BC parameter. The form of the likelihood function significantly impacts the calibrated parameters and the simulated high and low flow components. SWAT-WB-VSA with the NSE approach simulates floods well but baseflow poorly, owing to the assumption of a Gaussian error distribution, in which large errors have low probability while small errors around zero are nearly equiprobable. By contrast, SWAT-WB-VSA with the BC-GED or BC-SGED approach mimics baseflow well, which is confirmed by the groundwater level simulation. The assumption of skewness of the error distribution may be unnecessary, because all the results of the BC-SGED approach are nearly the same as those of the BC-GED approach.
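
    Step (1), the equivalence between NSE and a Gaussian i.i.d. likelihood, amounts to both criteria being monotone in the sum of squared errors, so they rank parameter sets identically. A minimal sketch with made-up observations (not the Baocun watershed data):

```python
import numpy as np

obs = np.array([3.1, 4.0, 5.2, 6.8, 5.5, 4.2, 3.6])  # made-up discharges

def nse(sim, obs):
    """Nash-Sutcliffe efficiency coefficient."""
    return 1.0 - np.sum((sim - obs)**2) / np.sum((obs - obs.mean())**2)

def gauss_loglik(sim, obs):
    """Log-likelihood for Gaussian i.i.d. residuals with the error
    variance profiled out at its MLE (sse/n); a monotone decreasing
    function of the sum of squared errors, just like NSE."""
    n = len(obs)
    sse = np.sum((sim - obs)**2)
    return -0.5 * n * (np.log(2.0 * np.pi * sse / n) + 1.0)

sim_a = obs + 0.1   # a close simulation
sim_b = obs * 1.3   # a poorer simulation

# Both criteria prefer sim_a: they rank parameter sets identically.
rank_match = (nse(sim_a, obs) > nse(sim_b, obs)) == \
             (gauss_loglik(sim_a, obs) > gauss_loglik(sim_b, obs))
```

    In an MCMC scheme this is why NSE-based calibration behaves like a Gaussian-likelihood calibration; the BC-GED/BC-SGED alternatives change the residual model itself rather than this ranking property.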

  11. The dappled nature of causes of psychiatric illness: replacing the organic-functional/hardware-software dichotomy with empirically based pluralism.

    PubMed

    Kendler, K S

    2012-04-01

    Our tendency to see the world of psychiatric illness in dichotomous and opposing terms has three major sources: the philosophy of Descartes, the state of neuropathology in late nineteenth century Europe (when disorders were divided into those with and without demonstrable pathology and labeled, respectively, organic and functional), and the influential concept of computer functionalism wherein the computer is viewed as a model for the human mind-brain system (brain=hardware, mind=software). These mutually reinforcing dichotomies, which have had a pernicious influence on our field, make a clear prediction about how 'difference-makers' (aka causal risk factors) for psychiatric disorders should be distributed in nature. In particular, are psychiatric disorders like our laptops, which when they dysfunction, can be cleanly divided into those with software versus hardware problems? I propose 11 categories of difference-makers for psychiatric illness from molecular genetics through culture and review their distribution in schizophrenia, major depression and alcohol dependence. In no case do these distributions resemble that predicted by the organic-functional/hardware-software dichotomy. Instead, the causes of psychiatric illness are dappled, distributed widely across multiple categories. We should abandon Cartesian and computer-functionalism-based dichotomies as scientifically inadequate and an impediment to our ability to integrate the diverse information about psychiatric illness our research has produced. Empirically based pluralism provides a rigorous but dappled view of the etiology of psychiatric illness. Critically, it is based not on how we wish the world to be but how the difference-makers for psychiatric illness are in fact distributed.

  12. Note: Precise radial distribution of charged particles in a magnetic guiding field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Backe, H., E-mail: backe@kph.uni-mainz.de

    2015-07-15

    Current high precision beta decay experiments of polarized neutrons, employing magnetic guiding fields in combination with position sensitive and energy dispersive detectors, resulted in a detailed study of the mono-energetic point spread function (PSF) for a homogeneous magnetic field. A PSF describes the radial probability distribution of mono-energetic electrons at the detector plane emitted from a point-like source. With regard to accuracy considerations, unwanted singularities occur as a function of the radial detector coordinate which have recently been investigated by subdividing the radial coordinate into small bins or employing analytical approximations. In this note, a series expansion of the PSF is presented which can be numerically evaluated with arbitrary precision.
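
    A Monte Carlo, binned version of such a PSF can be sketched as follows (nonrelativistic, with assumed field strength, electron speed, and detector distance; the note's series expansion replaces exactly this kind of histogram estimate, whose bins smear the singular radii):

```python
import numpy as np

rng = np.random.default_rng(7)

# Assumed parameters (illustrative, not an experiment's values):
q_m = 1.758820e11      # electron charge-to-mass ratio, C/kg
B = 1.0                # homogeneous guiding field along z, T
v = 1.0e7              # mono-energetic electron speed, m/s
z_d = 0.5              # source-to-detector-plane distance, m
omega = q_m * B        # gyrofrequency, rad/s

# Isotropic emission from a point source into the forward hemisphere.
n = 200_000
cos_t = rng.uniform(1e-6, 1.0, n)      # pitch-angle cosines
sin_t = np.sqrt(1.0 - cos_t**2)

t_arr = z_d / (v * cos_t)              # time of flight to the plane
r_gyro = v * sin_t / omega             # gyroradius of each electron
# Radial offset at the plane: chord of the gyro-circle at arrival phase.
r_hit = 2.0 * r_gyro * np.abs(np.sin(0.5 * omega * t_arr))

# Binned radial probability density (the PSF).
r_max = 2.0 * v / omega                # largest reachable radius
psf, edges = np.histogram(r_hit, bins=100, range=(0.0, r_max), density=True)
```

    Shrinking the bins sharpens the singular peaks but raises the Monte Carlo noise per bin, which is the accuracy trade-off the analytical expansion avoids.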

  13. A complete analytical solution of the Fokker-Planck and balance equations for nucleation and growth of crystals

    NASA Astrophysics Data System (ADS)

    Makoveeva, Eugenya V.; Alexandrov, Dmitri V.

    2018-01-01

    This article is concerned with a new analytical description of nucleation and growth of crystals in a metastable mushy layer (supercooled liquid or supersaturated solution) at the intermediate stage of a phase transition. The model under consideration, consisting of a non-stationary integro-differential system of governing equations for the distribution function and the metastability level, is solved analytically by means of the saddle-point technique for a Laplace-type integral in the case of arbitrary nucleation kinetics and time-dependent heat or mass sources in the balance equation. We demonstrate that the time-dependent distribution function approaches the stationary profile in the course of time. This article is part of the theme issue 'From atomistic interfaces to dendritic patterns'.
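
    The saddle-point (Laplace) technique invoked here can be illustrated on a standard Laplace-type integral, the integral of exp(M ln x - x) over x > 0, which equals Gamma(M + 1): expand the exponent to second order about its maximum and integrate the resulting Gaussian. The value M = 20 is an arbitrary choice for the demonstration:

```python
import numpy as np

# I(M) = integral_0^inf exp(f(x)) dx with f(x) = M*ln(x) - x = Gamma(M+1).
M = 20.0
f = lambda x: M * np.log(x) - x

# Saddle point: f'(x) = M/x - 1 = 0 at x0 = M, with f''(x0) = -1/M.
x0 = M
saddle = np.sqrt(2.0 * np.pi * M) * np.exp(f(x0))   # sqrt(2*pi/|f''|) e^f(x0)

# Direct quadrature for comparison (simple Riemann sum; the integrand
# is utterly negligible beyond x = 100 for M = 20).
dx = 1e-3
x = np.arange(dx, 100.0, dx)
numeric = np.sum(np.exp(f(x))) * dx

rel_err = abs(saddle - numeric) / numeric   # ~ 1/(12 M), Stirling correction
```

    The same second-order expansion around the dominant saddle is what turns the paper's Laplace-type integral into a tractable closed form for arbitrary nucleation kinetics.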

  14. Atomic pair distribution function at the Brazilian Synchrotron Light Laboratory: application to the Pb(1-x)La(x)Zr0.40Ti0.60O3 ferroelectric system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saleta, M. E.; Eleotério, M.; Mesquita, A.

    2017-07-29

    This work reports the setting up of the X-ray diffraction and spectroscopy beamline at the Brazilian Synchrotron Light Laboratory for performing total scattering experiments to be analyzed by atomic pair distribution function (PDF) studies. The results of a PDF refinement for an Al2O3 standard are presented and compared with data acquired at a beamline of the Advanced Photon Source, where it is common to perform this type of experiment. A preliminary characterization of the Pb(1-x)La(x)Zr0.40Ti0.60O3 ferroelectric system, with x = 0.11, 0.12 and 0.15, is also shown.
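
    The quantity a PDF refinement fits is, in essence, a normalized histogram of interatomic distances. A toy version for a simple cubic cluster (the 4 Å lattice constant is arbitrary and unrelated to the Al2O3 or PLZT structures), with a 1/(4*pi*r^2) shell normalization so the first-neighbour peak dominates:

```python
import numpy as np

a = 4.0  # assumed lattice constant, angstroms (arbitrary)
# 5x5x5 simple cubic cluster of atoms
pts = a * np.array([(i, j, k) for i in range(5)
                    for j in range(5) for k in range(5)], dtype=float)

# all unique interatomic distances
diff = pts[:, None, :] - pts[None, :, :]
dist = np.linalg.norm(diff, axis=-1)
dist = dist[np.triu_indices_from(dist, k=1)]

hist, edges = np.histogram(dist, bins=80, range=(0.0, 12.0))
centers = 0.5 * (edges[:-1] + edges[1:])
# shell normalization: divide pair counts by 4*pi*r^2 (up to constants)
rdf = np.where(centers > 0, hist / (4.0 * np.pi * centers**2), 0.0)
r_peak = centers[np.argmax(rdf)]   # first-neighbour distance, near 4 angstroms
```

    A real PDF analysis works from total scattering data via a Fourier transform and includes instrument and thermal broadening, but the peak positions it refines correspond to exactly these interatomic-distance shells.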

  15. A Kinetic Study of Microwave Start-up of Tokamak Plasmas

    NASA Astrophysics Data System (ADS)

    du Toit, E. J.; O'Brien, M. R.; Vann, R. G. L.

    2017-07-01

    A kinetic model for studying the time evolution of the distribution function for microwave startup is presented. The model for the distribution function is two dimensional in momentum space, but, for simplicity and rapid calculations, has no spatial dependence. Experiments on the Mega Amp Spherical Tokamak have shown that the plasma current is carried mainly by electrons with energies greater than 70 keV, and effects thought to be important in these experiments are included, i.e. particle sources, orbital losses, the loop voltage and microwave heating, with suitable volume averaging where necessary to give terms independent of spatial dimensions. The model predicts current carried by electrons with the same energies as inferred from the experiments, though the current drive efficiency is smaller.

  16. RADIATION WAVE DETECTION

    DOEpatents

    Wouters, L.F.

    1960-08-30

    Radiation waves can be detected by simultaneously measuring radiation-wave intensities at a plurality of space-distributed points and producing therefrom a plot of the wave intensity as a function of time. To this end, a detector system is provided which includes a plurality of nuclear radiation intensity detectors spaced at equal radial increments of distance from a source of nuclear radiation. Means are provided to simultaneously sensitize the detectors at the instant a wave of radiation traverses their positions, the detectors producing electrical pulses indicative of wave intensity. The system further includes means for delaying the pulses from the detectors by amounts proportional to the distance of the detectors from the source to provide an indication of radiation-wave intensity as a function of time.

  17. Implications of the Observed Ultraluminous X-Ray Source Luminosity Function

    NASA Technical Reports Server (NTRS)

    Swartz, Douglas A.; Tennant, Allyn; Soria, Roberto; Yukita, Mihoko

    2012-01-01

    We present the X-ray luminosity function (XLF) of ultraluminous X-ray (ULX) sources with 0.3-10.0 keV luminosities in excess of 10^39 erg/s in a complete sample of nearby galaxies. The XLF shows a break or cut-off at high luminosities, deviating from the pure power-law distribution seen at lower luminosities. The cut-off is at roughly the Eddington luminosity for a 90-140 solar mass accretor. We examine the effects on the observed XLF of sample biases, of small-number statistics (at the high luminosity end) and of measurement uncertainties. We consider the physical implications of the shape and normalization of the XLF. The XLF is also compared and contrasted to results of other recent surveys.
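
    The shape of such an XLF is commonly summarized by a power-law slope and a cut-off luminosity. The sketch below draws a mock sample from a sharply truncated power law by inverse-CDF sampling and shows that the plain Pareto slope estimator is biased high when the cut-off is ignored; all numbers are illustrative, not the paper's fitted values:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative values (luminosities in units of 1e39 erg/s):
alpha, l_min, l_cut, n = 1.8, 1.0, 20.0, 5000

# Inverse-CDF draw from dN/dL ~ L**(-alpha) on [l_min, l_cut].
a1 = 1.0 - alpha
u = rng.uniform(size=n)
lum = (l_min**a1 + u * (l_cut**a1 - l_min**a1)) ** (1.0 / a1)

# Standard Pareto MLE for the slope, which ignores the cut-off:
# alpha_hat = 1 + n / sum(ln(L / L_min)).
alpha_hat = 1.0 + n / np.sum(np.log(lum / l_min))
# The sharp truncation steepens the apparent slope, so alpha_hat > alpha.
```

    This is one reason break/cut-off modeling matters: fitting a pure power law to a truncated sample misstates the slope, separately from the small-number statistics at the bright end discussed in the abstract.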

  18. LED lamp or bulb with remote phosphor and diffuser configuration with enhanced scattering properties

    DOEpatents

    Tong, Tao; Le Toquin, Ronan; Keller, Bernd; Tarsa, Eric; Youmans, Mark; Lowes, Theodore; Medendorp, Jr., Nicholas W; Van De Ven, Antony; Negley, Gerald

    2014-11-11

    An LED lamp or bulb is disclosed that comprises a light source, a heat sink structure and an optical cavity. The optical cavity comprises a phosphor carrier having a conversion material and arranged over an opening to the cavity. The phosphor carrier comprises a thermally conductive transparent material and is thermally coupled to the heat sink structure. An LED based light source is mounted in the optical cavity remote to the phosphor carrier with light from the light source passing through the phosphor carrier. A diffuser dome is included that is mounted over the optical cavity, with light from the optical cavity passing through the diffuser dome. The properties of the diffuser, such as geometry, scattering properties of the scattering layer, surface roughness or smoothness, and spatial distribution of the scattering layer properties, may be used to control various lamp properties such as color uniformity and light intensity distribution as a function of viewing angle.

  19. Altered Cortical Swallowing Processing in Patients with Functional Dysphagia: A Preliminary Study

    PubMed Central

    Wollbrink, Andreas; Warnecke, Tobias; Winkels, Martin; Pantev, Christo; Dziewas, Rainer

    2014-01-01

    Objective Current neuroimaging research on functional disturbances provides growing evidence for objective neuronal correlates of allegedly psychogenic symptoms, thereby shifting the disease concept from a psychological towards a neurobiological model. Functional dysphagia is such a rare condition, whose pathogenetic mechanism is largely unknown. In the absence of any organic reason for a patient's persistent swallowing complaints, sensorimotor processing abnormalities involving central neural pathways constitute a potential etiology. Methods In this pilot study we measured cortical swallow-related activation in 5 patients diagnosed with functional dysphagia and a matched group of healthy subjects applying magnetoencephalography. Source localization of cortical activation was done with synthetic aperture magnetometry. To test for significant differences in cortical swallowing processing between groups, a non-parametric permutation test was afterwards performed on individual source localization maps. Results Swallowing task performance was comparable between groups. In relation to control subjects, in whom activation was symmetrically distributed in rostro-medial parts of the sensorimotor cortices of both hemispheres, patients showed prominent activation of the right insula, dorsolateral prefrontal cortex and lateral premotor, motor as well as inferolateral parietal cortex. Furthermore, activation was markedly reduced in the left medial primary sensory cortex as well as right medial sensorimotor cortex and adjacent supplementary motor area (p<0.01). Conclusions Functional dysphagia - a condition with assumed normal brain function - seems to be associated with distinctive changes of the swallow-related cortical activation pattern. Alterations may reflect exaggerated activation of a widely distributed vigilance, self-monitoring and salience rating network that interferes with down-stream deglutition sensorimotor control. PMID:24586948

  20. Transferability of species distribution models: a functional habitat approach for two regionally threatened butterflies.

    PubMed

    Vanreusel, Wouter; Maes, Dirk; Van Dyck, Hans

    2007-02-01

    Numerous models for predicting species distribution have been developed for conservation purposes. Most of them make use of environmental data (e.g., climate, topography, land use) at a coarse grid resolution (often kilometres). Such approaches are useful for conservation policy issues including reserve-network selection. The efficiency of predictive models for species distribution is usually tested on the area for which they were developed. Although highly interesting from the point of view of conservation efficiency, transferability of such models to independent areas is still under debate. We tested the transferability of habitat-based predictive distribution models for two regionally threatened butterflies, the green hairstreak (Callophrys rubi) and the grayling (Hipparchia semele), within and among three nature reserves in northeastern Belgium. We built predictive models based on spatially detailed maps of area-wide distribution and density of ecological resources. We used resources directly related to ecological functions (host plants, nectar sources, shelter, microclimate) rather than environmental surrogate variables. We obtained models that performed well with few resource variables. All models were transferable--although to different degrees--among the independent areas within the same broad geographical region. We argue that habitat models based on essential functional resources could transfer better in space than models that use indirect environmental variables. Because functional variables can easily be interpreted and even be directly affected by terrain managers, these models can be useful tools to guide species-adapted reserve management.

  1. GARLIC, A SHIELDING PROGRAM FOR GAMMA RADIATION FROM LINE- AND CYLINDER- SOURCES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roos, M.

    1959-06-01

    GARLIC is a program for computing the gamma-ray flux or dose rate at a shielded isotropic point detector, due to a line source or the line equivalent of a cylindrical source. The source strength distribution along the line must be either uniform or an arbitrary part of the positive half-cycle of a cosine function. The line source can be oriented arbitrarily with respect to the main shield and the detector, except that the detector must not be located on the line source or on its extension. The main shield is a homogeneous plane slab in which scattered radiation is accounted for by multiplying each point element of the line source by a point-source buildup factor inside the integral over the point elements. Between the main shield and the line source additional shields can be introduced, which are either plane slabs, parallel to the main shield, or cylindrical rings, coaxial with the line source. Scattered radiation in the additional shields can only be accounted for by constant buildup factors outside the integral. GARLIC-xyz is an extended version particularly suited for the frequently met problem of shielding a room containing a large number of line sources in different positions. The program computes the angles and linear dimensions of a problem for GARLIC when the positions of the detector point and the end points of the line source are given as points in an arbitrary rectangular coordinate system. As an example, the isodose curves in water are presented for a monoenergetic cosine-distributed line source at several source energies and for an operating fuel element of the Swedish reactor R3. (auth)
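
    The integral GARLIC evaluates can be sketched numerically: cut the line source into point elements, attenuate each along its slant path through the slab, and apply a point-source buildup factor inside the integral. The geometry, attenuation coefficient, and the linear buildup form B = 1 + (mean free paths) below are all assumed placeholders (GARLIC uses proper tabulated buildup data):

```python
import numpy as np

H = 100.0                             # line source length, cm (assumed)
det = np.array([50.0, 0.0, 60.0])     # detector position, cm; slab normal = x
t_slab = 10.0                         # slab shield thickness, cm (assumed)

def line_flux(mu, strength=1.0, n=2000):
    """Uncollided-plus-buildup flux at the detector from a uniform line
    source on the z axis, midpoint rule over n point elements."""
    z = (np.arange(n) + 0.5) * (H / n)            # element positions
    seg = det[None, :] - np.column_stack([np.zeros(n), np.zeros(n), z])
    dist = np.linalg.norm(seg, axis=1)            # element -> detector
    mfp = mu * t_slab * dist / np.abs(seg[:, 0])  # mean free paths on slant path
    B = 1.0 + mfp                                 # assumed buildup factor form
    kernel = B * np.exp(-mfp) / (4.0 * np.pi * dist**2)
    return strength * (H / n) * kernel.sum()

flux_thin = line_flux(mu=0.06)    # assumed attenuation coefficient, 1/cm
flux_thick = line_flux(mu=0.12)   # doubling mu attenuates the flux further
```

    A cosine-distributed source strength is handled the same way by weighting each element's `strength` before summing, which is how the program's second source option differs from the uniform case.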

  2. Theoretical Investigation of the High-Altitude Cusp Region using Observations from Interball and ISTP Spacecraft

    NASA Technical Reports Server (NTRS)

    Ashour-Abdalla, Maha

    1998-01-01

    A fundamental goal of magnetospheric physics is to understand the transport of plasma through the solar wind-magnetosphere-ionosphere system. To attain such an understanding, we must determine the sources of the plasma, the trajectories of the particles through the magnetospheric electric and magnetic fields to the point of observation, and the acceleration processes they undergo enroute. This study employed plasma distributions observed in the near-Earth plasma sheet by Interball and Geotail spacecraft together with theoretical techniques to investigate the ion sources and the transport of plasma. We used ion trajectory calculations in magnetic and electric fields from a global Magnetohydrodynamics (MHD) simulation to investigate the transport and to identify common ion sources for ions observed in the near-Earth magnetotail by the Interball and Geotail spacecraft. Our first step was to examine a number of distribution functions and identify distinct boundaries in both configuration and phase space that are indicative of different plasma sources and transport mechanisms. We examined events from October 26, 1995, November 29-30, 1996, and December 22, 1996. During the first event Interball and Geotail were separated by approximately 10 R(sub E) in z, and during the second event the spacecraft were separated by approximately 4(sub RE). Both of these events had a strong IMF By component pointing toward the dawnside. On October 26, 1995, the IMF B(sub Z) component was northward, and on November 1-9-30, 1996, the IMF B sub Z) component was near 0. During the first event, Geotail was located near the equator on the dawn flank, while Interball was for the most part in the lobe region. The distribution function from the Coral instrument on Interball showed less structure and resembled a drifting Maxwellian. The observed distribution on Geotail, on the other hand, included a great number of structures at both low and high energies. 
During the third event (December 22, 1996) both spacecraft were in the plasma sheet and were separated by approximately 20 R(sub E) in the y direction. During this event the IMF was southward.

  3. Moment Analysis Characterizing Water Flow in Repellent Soils from On- and Sub-Surface Point Sources

    NASA Astrophysics Data System (ADS)

    Xiong, Yunwu; Furman, Alex; Wallach, Rony

    2010-05-01

    Water repellency has a significant impact on water flow patterns in the soil profile. Flow tends to become unstable in such soils, which affects the water availability to plants and subsurface hydrology. In this paper, water flow in repellent soils was experimentally studied using the light reflection method. The transient 2D moisture profiles were monitored by CCD camera for tested soils packed in a transparent flow chamber. Water infiltration experiments and subsequent redistribution from on-surface and subsurface point sources with different flow rates were conducted for two soils of different repellency degrees as well as for wettable soil. We used spatio-statistical analysis (moments) to characterize the flow patterns. The zeroth moment is related to the total volume of water inside the moisture plume, and the first and second moments correspond to the center of mass and spatial variances of the moisture plume, respectively. The experimental results demonstrate that both the general shape and size of the wetting plume and the moisture distribution within the plume for the repellent soils are significantly different from those for the wettable soil. The wetting plume of the repellent soils is smaller, narrower, and longer (finger-like) than that of the wettable soil, which tended toward roundness. Compared to the wettable soil, where the soil water content decreases radially from the source, moisture content for the water-repellent soils is higher, relatively uniform horizontally, and gradually increases with depth (saturation overshoot), indicating that flow tends to become unstable. Ellipses, defined around the mass center and whose semi-axes represented a particular number of spatial variances, were successfully used to simulate the spatial and temporal variation of the moisture distribution in the soil profiles. Cumulative probability functions were defined for the water enclosed in these ellipses. 
Practically identical cumulative probability functions (beta distribution) were obtained for all soils, all source types, and all flow rates. Further, the same distributions were obtained for the infiltration and redistribution processes. This attractive result demonstrates the capability and advantage of the moment analysis method.
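The moment analysis described above can be sketched in code. This is a minimal illustration, not the authors' implementation; the field name `theta` and the grid vectors are hypothetical, with the zeroth moment proportional to total water volume and the first/second central moments giving the plume's center of mass and spatial variances:

```python
import numpy as np

def plume_moments(theta, x, z):
    """Spatial moments of a gridded 2D moisture field theta(z, x).

    Returns the zeroth moment (proportional to total water volume),
    the center of mass (xc, zc), and the spatial variances about it.
    """
    X, Z = np.meshgrid(x, z)
    m0 = theta.sum()
    xc = (theta * X).sum() / m0
    zc = (theta * Z).sum() / m0
    var_x = (theta * (X - xc) ** 2).sum() / m0   # horizontal spread
    var_z = (theta * (Z - zc) ** 2).sum() / m0   # vertical spread
    return m0, (xc, zc), (var_x, var_z)
```

A narrow, elongated (finger-like) plume shows up as var_z much larger than var_x; a rounded plume gives comparable variances.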

  4. Matilda: A mass filtered nanocluster source

    NASA Astrophysics Data System (ADS)

    Kwon, Gihan

    Cluster science provides a good model system for the study of the size dependence of electronic properties, chemical reactivity, as well as magnetic properties of materials. One of the main interests in cluster science is the nanoscale understanding of chemical reactions and selectivity in catalysis. Therefore, a new cluster system was constructed to study catalysts for applications in renewable energy. Matilda, a nanocluster source, consists of a cluster source and a Retarding Field Analyzer (RFA). A moveable AJA A310 Series 1"-diameter magnetron sputtering gun enclosed in a water-cooled aggregation tube served as the cluster source. A silver coin was used for the sputtering target. The sputtering pressure in the aggregation tube was controlled, ranging from 0.07 to 1 torr, using a mass flow controller. The mean cluster size was found to be a function of relative partial pressure (He/Ar), sputtering power, and aggregation length. The kinetic energy distribution of ionized clusters was measured with the RFA. The maximum ion energy distribution was 2.9 eV/atom at a zero pressure ratio. At high Ar flow rates, the mean cluster size was 20-80 nm, and at a 9.5 partial pressure ratio, the mean cluster size was reduced to 1.6 nm. Our results showed that the He gas pressure can be optimized to reduce the cluster size variations. Results from SIMION, which is an electron optics simulation package, supported the basic function of an RFA, a three-element lens, and the magnetic sector mass filter. These simulated results agreed with experimental data. For the size selection experiment, the channeltron electron multiplier collected the ionized cluster signal at different positions during Ag deposition on a TEM grid for four and a half hours. The cluster signal was high at the position for neutral clusters, which were not deflected by the magnetic field, and the signal decreased rapidly away from the neutral cluster region. 
For cluster separation according to mass-to-charge ratio in a magnetic sector mass filter, the ion energy of the clusters and its distribution must be precisely controlled by acceleration or deceleration. To verify the size separation, a high-resolution microscope was required. Matilda provided narrow particle size distributions from the atomic scale up to 4 nm at different pressure ratios, without an additional mass filter. It is a very economical way to produce a relatively narrow particle size distribution.

  5. Potential source identification for aerosol concentrations over a site in Northwestern India

    NASA Astrophysics Data System (ADS)

    Payra, Swagata; Kumar, Pramod; Verma, Sunita; Prakash, Divya; Soni, Manish

    2016-03-01

    The collocated measurements of aerosol size distribution (ASD) and aerosol optical thickness (AOT) are analyzed simultaneously using a Grimm aerosol spectrometer and a MICROTOPS II Sunphotometer over Jaipur, capital of Rajasthan in India. The contrasting characteristics of the winter and summer seasons of the year 2011 are investigated in the present study. The total aerosol number concentration (TANC, 0.3-20 μm) during the winter season was observed to be higher than in summer, and it was dominated by the fine aerosol number concentration (FANC < 2 μm). Particles smaller than 0.8 μm (at aerodynamic size) constitute ~ 99% of all particles in winter and ~ 90% of particles in summer. However, particles greater than 2 μm contribute ~ 3% and ~ 0.2% in the summer and winter seasons, respectively. AOT values are nearly similar during summer and winter, but the corresponding Angstrom Exponent (AE) values are lower during summer than winter. In this work, Potential Source Contribution Function (PSCF) analysis is applied to identify locations of sources that influenced concentrations of aerosols over the study area in the two different seasons. PSCF analysis shows that dust particles from the Thar Desert contribute significantly to the coarse aerosol number concentration (CANC). Higher PSCF values to the north of Jaipur indicate the industrial areas in northern India to be the likely sources of fine particles. The variation in the size distribution of aerosols during the two seasons is clearly reflected in the log normal size distribution curves, which reveal that particles smaller than 0.8 μm are the key contributor to the higher ANC in winter.
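The PSCF analysis mentioned above can be sketched as follows. This is a minimal illustration, not the authors' implementation; the grid edges, threshold choice, and variable names are assumptions. PSCF(i, j) is the fraction m_ij / n_ij, where n_ij counts back-trajectory endpoints falling in grid cell (i, j) and m_ij counts the subset of those endpoints whose associated measured concentration exceeded a criterion value (e.g. the seasonal mean):

```python
import numpy as np

def pscf(lat, lon, conc, threshold, bins):
    """Potential Source Contribution Function on a lat/lon grid.

    lat, lon : back-trajectory endpoint coordinates (1D arrays)
    conc     : measured concentration associated with each endpoint
    bins     : (lat_edges, lon_edges) defining the grid cells
    Returns PSCF = m/n per cell; cells with no endpoints are NaN.
    """
    lat_edges, lon_edges = bins
    n, _, _ = np.histogram2d(lat, lon, bins=(lat_edges, lon_edges))
    high = conc > threshold
    m, _, _ = np.histogram2d(lat[high], lon[high], bins=(lat_edges, lon_edges))
    return np.where(n > 0, m / np.maximum(n, 1), np.nan)
```

Cells with PSCF close to 1 mark regions whose trajectories were almost always associated with high measured concentrations, i.e. likely source areas.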

  6. Characterizing open and non-uniform vertical heat sources: towards the identification of real vertical cracks in vibrothermography experiments

    NASA Astrophysics Data System (ADS)

    Castelo, A.; Mendioroz, A.; Celorrio, R.; Salazar, A.; López de Uralde, P.; Gorosmendi, I.; Gorostegui-Colinas, E.

    2017-05-01

    Lock-in vibrothermography is used to characterize vertical kissing and open cracks in metals. In this technique the crack heats up during ultrasound excitation, due mainly to friction between the defect's faces. We have solved the inverse problem, which consists of determining the heat source distribution produced at cracks under amplitude-modulated ultrasound excitation; this is an ill-posed inverse problem, and as a consequence the minimization of the residual is unstable. We have stabilized the algorithm by introducing a penalty term based on the Total Variation functional. In the inversion, we combine amplitude and phase surface temperature data obtained at several modulation frequencies. Inversions of synthetic data with added noise indicate that compact heat sources are characterized accurately and that the particular upper contours can be retrieved for shallow heat sources. The overall shape of open and homogeneous semicircular strip-shaped heat sources representing open half-penny cracks can also be retrieved, but the reconstruction of the deeper end of the heat source loses contrast. Angle-, radius- and depth-dependent inhomogeneous heat flux distributions within these semicircular strips can also be qualitatively characterized. Reconstructions of experimental data taken on samples containing calibrated heat sources confirm the predictions from reconstructions of synthetic data. We also present inversions of experimental data obtained from a real welded Inconel 718 specimen. The results are in good qualitative agreement with the results of liquid penetrant testing.

  7. Monte Carlo calculated TG-60 dosimetry parameters for the {beta}{sup -} emitter {sup 153}Sm brachytherapy source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sadeghi, Mahdi; Taghdiri, Fatemeh; Hamed Hosseini, S.

    Purpose: The formalism recommended by Task Group 60 (TG-60) of the American Association of Physicists in Medicine (AAPM) is applicable for {beta} sources. The radioactive biocompatible and biodegradable {sup 153}Sm glass seed without encapsulation is a {beta}{sup -} emitter radionuclide with a short half-life that delivers a high dose rate to the tumor in the millimeter range. This study presents the results of Monte Carlo calculations of the dosimetric parameters for the {sup 153}Sm brachytherapy source. Methods: Version 5 of the MCNP Monte Carlo radiation transport code was used to calculate two-dimensional dose distributions around the source. The dosimetric parameters of the AAPM TG-60 recommendations, including the reference dose rate, the radial dose function, the anisotropy function, and the one-dimensional anisotropy function, were obtained. Results: The dose rate value at the reference point was estimated to be 9.21{+-}0.6 cGy h{sup -1} {mu}Ci{sup -1}. Due to the low energy of the beta particles emitted from {sup 153}Sm sources, the dose fall-off profile is sharper than for other beta emitter sources. The calculated dosimetric parameters in this study are compared to several beta and photon emitting seeds. Conclusions: The results show the advantage of the {sup 153}Sm source in comparison with the other sources because of the rapid dose fall-off of beta rays and the high dose rate at short distances from the seed. The results would be helpful in the development of radioactive implants using {sup 153}Sm seeds for brachytherapy treatment.

  8. Three faces of entropy for complex systems: Information, thermodynamics, and the maximum entropy principle

    NASA Astrophysics Data System (ADS)

    Thurner, Stefan; Corominas-Murtra, Bernat; Hanel, Rudolf

    2017-09-01

    There are at least three distinct ways to conceptualize entropy: entropy as an extensive thermodynamic quantity of physical systems (Clausius, Boltzmann, Gibbs), entropy as a measure for information production of ergodic sources (Shannon), and entropy as a means for statistical inference on multinomial processes (Jaynes' maximum entropy principle). Even though these notions represent fundamentally different concepts, the functional form of the entropy for thermodynamic systems in equilibrium, for ergodic sources in information theory, and for independent sampling processes in statistical systems, is degenerate, H(p) = -∑_i p_i log p_i. For many complex systems, which are typically history-dependent, nonergodic, and nonmultinomial, this is no longer the case. Here we show that for such processes, the three entropy concepts lead to different functional forms of entropy, which we will refer to as S_EXT for extensive entropy, S_IT for the source information rate in information theory, and S_MEP for the entropy functional that appears in the so-called maximum entropy principle, which characterizes the most likely observable distribution functions of a system. We explicitly compute these three entropy functionals for three concrete examples: for Pólya urn processes, which are simple self-reinforcing processes, for sample-space-reducing (SSR) processes, which are simple history-dependent processes that are associated with power-law statistics, and finally for multinomial mixture processes.
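The degenerate functional form quoted above can be evaluated directly. A minimal sketch (not tied to the paper's code; natural logarithm is used, so the result is in nats, and 0 log 0 is taken as 0 by convention):

```python
import numpy as np

def shannon_entropy(p):
    """H(p) = -sum_i p_i log p_i, the common functional form shared by
    the thermodynamic, information-theoretic, and maximum-entropy
    notions for multinomial/ergodic systems (natural log: nats)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]            # drop zero-probability states: 0 log 0 := 0
    return -np.sum(p * np.log(p))
```

For a uniform distribution over W states this reduces to log W, the Boltzmann form; for a degenerate (single-outcome) distribution it is 0.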

  9. An Industry Viewpoint on Electron Energy Distribution Function Control

    NASA Astrophysics Data System (ADS)

    Ventzek, Peter

    2011-10-01

    It is trite to note that plasmas play a key role in industrial technology. Lighting, laser, film coating and now medical technology require plasma science for their sustenance. One field stands out by virtue of its economic girth and impact. Semiconductor manufacturing, and the process science enabling its decades of innovation, owe a significant debt to progress in low temperature plasma science. Today, technology requires atomic-level control from plasmas. Mere layers of atoms delineate good and bad device performance. While plasma sources meet nanoscale specifications over dimensions of hundreds of centimeters, achieving atomic-level control from plasmas is hindered by the absence of direct control of species velocity distribution functions. Electron energy distribution function (EEDF) control translates to precise control of species flux and velocities at surfaces adjacent to the plasma, and is a challenge that, if successfully met, will have a huge impact on nanoscale device manufacturing. This lunchtime talk will attempt to provide context to the research advances presented at this Workshop. Touched on will be areas of new opportunity and the risks associated with missing these opportunities.

  10. Scaling to diversity: The DERECHOS distributed infrastructure for analyzing and sharing data

    NASA Astrophysics Data System (ADS)

    Rilee, M. L.; Kuo, K. S.; Clune, T.; Oloso, A.; Brown, P. G.

    2016-12-01

    Integrating Earth Science data from diverse sources such as satellite imagery and simulation output can be expensive and time-consuming, limiting scientific inquiry and the quality of our analyses. Reducing these costs will improve innovation and quality in science. The current Earth Science data infrastructure focuses on downloading data based on requests formed from the search and analysis of associated metadata. And while the data products provided by archives may use the best available data sharing technologies, scientist end-users generally do not have such resources (including staff) available to them. Furthermore, only once an end-user has received the data from multiple diverse sources and has integrated them can the actual analysis and synthesis begin. The cost of getting from idea to where synthesis can start dramatically slows progress. In this presentation we discuss a distributed computational and data storage framework that eliminates much of the aforementioned cost. The SciDB distributed array database is central as it is optimized for scientific computing involving very large arrays, performing better than less specialized frameworks like Spark. Adding spatiotemporal functions to the SciDB creates a powerful platform for analyzing and integrating massive, distributed datasets. SciDB allows Big Earth Data analysis to be performed "in place" without the need for expensive downloads and end-user resources. Spatiotemporal indexing technologies such as the hierarchical triangular mesh enable the compute and storage affinity needed to efficiently perform co-located and conditional analyses minimizing data transfers. These technologies automate the integration of diverse data sources using the framework, a critical step beyond current metadata search and analysis. 
Instead of downloading data into their idiosyncratic local environments, end-users can generate and share data products integrated from diverse multiple sources using a common shared environment, turning distributed active archive centers (DAACs) from warehouses into distributed active analysis centers.

  11. Continuous description of fluctuating eccentricities

    NASA Astrophysics Data System (ADS)

    Blaizot, Jean-Paul; Broniowski, Wojciech; Ollitrault, Jean-Yves

    2014-11-01

    We consider the initial energy density in the transverse plane of a high energy nucleus-nucleus collision as a random field ρ (x), whose probability distribution P [ ρ ], the only ingredient of the present description, encodes all possible sources of fluctuations. We argue that it is a local Gaussian, with a short-range 2-point function, and that the fluctuations relevant for the calculation of the eccentricities that drive the anisotropic flow have small relative amplitudes. In fact, this 2-point function, together with the average density, contains all the information needed to calculate the eccentricities and their variances, and we derive general model independent expressions for these quantities. The short wavelength fluctuations are shown to play no role in these calculations, except for a renormalization of the short range part of the 2-point function. As an illustration, we compare to a commonly used model of independent sources, and recover the known results of this model.

  12. Field-aligned current sources in the high-latitude ionosphere

    NASA Technical Reports Server (NTRS)

    Barbosa, D. D.

    1979-01-01

    The paper determines the electric potential in a plane which is fed current from a pair of field-aligned current sheets. The ionospheric conductivity is modelled as a constant with an enhanced conductivity annular ring. It is shown that field-aligned current distributions are arbitrary functions of azimuth angle (MLT) and thus allow for asymmetric potential configurations over the pole cap. In addition, ionospheric surface currents are computed by means of stream functions. Finally, the discussion relates these methods to the electrical characteristics of the magnetosphere.

  13. Electric current in a unipolar sunspot with an untwisted field

    NASA Technical Reports Server (NTRS)

    Osherovich, V. A.; Garcia, H. A.

    1990-01-01

    The return flux (RF) sunspot model is applied to a round, unipolar sunspot observed by H. Kawakami (1983). By solving the magnetohydrostatic problem using the gas pressure deficit between the umbral and quiet-sun atmospheres as a source function, the distribution of electric current density in an untwisted, unipolar sunspot is obtained as a function of height and radial distance from the sunspot center. The maximum electric current density is about 32 mA/sq m at the bottom of the sunspot.

  14. Numerical modeling of heat transfer in the fuel oil storage tank at thermal power plant

    NASA Astrophysics Data System (ADS)

    Kuznetsova, Svetlana A.

    2015-01-01

    This paper presents results of mathematical modeling of convection of a viscous incompressible fluid in a rectangular cavity with conducting walls of finite thickness, in the presence of a local heat source at the bottom of the domain, under conditions of convective heat exchange with the environment. A mathematical model is formulated in terms of the dimensionless variables "stream function - vorticity - temperature" in the Cartesian coordinate system. The results show the distributions of hydrodynamic parameters and temperature obtained using different boundary conditions on the local heat source.

  15. Role of polysaccharides in food, digestion, and health

    PubMed Central

    Lovegrove, A.; Edwards, C. H.; De Noni, I.; Patel, H.; El, S. N.; Grassby, T.; Zielke, C.; Ulmius, M.; Nilsson, L.; Butterworth, P. J.; Ellis, P. R; Shewry, P. R.

    2017-01-01

    ABSTRACT Polysaccharides derived from plant foods are major components of the human diet, with limited contributions of related components from fungal and algal sources. In particular, starch and other storage carbohydrates are the major sources of energy in all diets, while cell wall polysaccharides are the major components of dietary fiber. We review the role of these components in the human diet, including their structure and distribution, their modification during food processing and effects on functional properties, their behavior in the gastrointestinal tract, and their contribution to healthy diets. PMID:25921546

  16. Role of polysaccharides in food, digestion, and health.

    PubMed

    Lovegrove, A; Edwards, C H; De Noni, I; Patel, H; El, S N; Grassby, T; Zielke, C; Ulmius, M; Nilsson, L; Butterworth, P J; Ellis, P R; Shewry, P R

    2017-01-22

    Polysaccharides derived from plant foods are major components of the human diet, with limited contributions of related components from fungal and algal sources. In particular, starch and other storage carbohydrates are the major sources of energy in all diets, while cell wall polysaccharides are the major components of dietary fiber. We review the role of these components in the human diet, including their structure and distribution, their modification during food processing and effects on functional properties, their behavior in the gastrointestinal tract, and their contribution to healthy diets.

  17. A new stochastic algorithm for inversion of dust aerosol size distribution

    NASA Astrophysics Data System (ADS)

    Wang, Li; Li, Feng; Yang, Ma-ying

    2015-08-01

    Dust aerosol size distribution is an important source of information about atmospheric aerosols, and it can be determined from multiwavelength extinction measurements. This paper describes a stochastic inverse technique based on the artificial bee colony (ABC) algorithm to invert the dust aerosol size distribution by the light extinction method. The direct problems for the size distributions of water drops and dust particles, which are the main constituents of atmospheric aerosols, are solved by the Mie theory and the Lambert-Beer law in the multispectral region. Then, the parameters of three widely used functions, i.e. the log normal distribution (L-N), the Junge distribution (J-J), and the normal distribution (N-N), which can provide the most useful representation of aerosol size distributions, are inverted by the ABC algorithm in the dependent model. Numerical results show that the ABC algorithm can be successfully applied to recover the aerosol size distribution with high feasibility and reliability even in the presence of random noise.
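As a sketch of one ingredient of the direct problem, the log normal (L-N) trial function can be written as follows. This is an illustrative, textbook parameterization rather than the paper's code; the names `n_total`, `r_median`, and `sigma_g` (geometric standard deviation) denote the parameters an inversion such as the ABC algorithm would retrieve:

```python
import numpy as np

def lognormal_nr(r, n_total, r_median, sigma_g):
    """Log normal (L-N) number size distribution dN/dr.

    Normalized so the integral over all radii equals n_total;
    r_median is the median radius, sigma_g > 1 the geometric
    standard deviation controlling the width of the distribution.
    """
    ln_sg = np.log(sigma_g)
    return (n_total / (np.sqrt(2.0 * np.pi) * ln_sg * r)
            * np.exp(-np.log(r / r_median) ** 2 / (2.0 * ln_sg ** 2)))
```

In an inversion the forward model would fold such a distribution with Mie extinction efficiencies and compare against the multiwavelength extinction data.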

  18. Joint Blind Source Separation by Multi-set Canonical Correlation Analysis

    PubMed Central

    Li, Yi-Ou; Adalı, Tülay; Wang, Wei; Calhoun, Vince D

    2009-01-01

    In this work, we introduce a simple and effective scheme to achieve joint blind source separation (BSS) of multiple datasets using multi-set canonical correlation analysis (M-CCA) [1]. We first propose a generative model of joint BSS based on the correlation of latent sources within and between datasets. We specify source separability conditions, and show that, when the conditions are satisfied, the group of corresponding sources from each dataset can be jointly extracted by M-CCA through maximization of correlation among the extracted sources. We compare source separation performance of the M-CCA scheme with other joint BSS methods and demonstrate the superior performance of the M-CCA scheme in achieving joint BSS for a large number of datasets, group of corresponding sources with heterogeneous correlation values, and complex-valued sources with circular and non-circular distributions. We apply M-CCA to analysis of functional magnetic resonance imaging (fMRI) data from multiple subjects and show its utility in estimating meaningful brain activations from a visuomotor task. PMID:20221319

  19. How Different EEG References Influence Sensor Level Functional Connectivity Graphs

    PubMed Central

    Huang, Yunzhi; Zhang, Junpeng; Cui, Yuan; Yang, Gang; He, Ling; Liu, Qi; Yin, Guangfu

    2017-01-01

    Highlights: Hamming distance is applied to distinguish differences between functional connectivity networks. The orientations of sources are shown to significantly influence the scalp Functional Connectivity Graph (FCG) obtained with different references. REST, the reference electrode standardization technique, is shown to have an overall stable and excellent performance in variable situations. The choice of an electroencephalograph (EEG) reference is a practical issue for the study of brain functional connectivity. To study how the EEG reference influences functional connectivity estimation (FCE), this study compares the differences in FCE resulting from different references such as REST (the reference electrode standardization technique), average reference (AR), linked mastoids (LM), and left mastoid reference (LR). Simulations involve two parts. One is based on 300 dipolar pairs, which are located on the superficial cortex with a radial source direction. The other part is based on 20 dipolar pairs; in each pair, the dipoles have various orientation combinations. The relative error (RE) and Hamming distance (HD) between the functional connectivity matrices of ideal recordings and those of recordings obtained with different references are the metrics used to compare the differences between the scalp functional connectivity graphs (FCGs) derived from those two kinds of recordings. Lower RE and HD values imply more similarity between the two FCGs. Using the ideal recording (IR) as a standard, the results show that AR, LM and LR perform well only in specific conditions: AR performs stably when there is no upward component in the sources' orientations; LR achieves desirable results when the sources' locations are away from the left ear; and LM shows an indistinct difference from IR when the distribution of source locations is symmetric along the line linking the two ears. 
However, REST not only achieves excellent performance for superficial and radial dipolar sources, but also achieves a stable and robust performance with variable source locations and orientations. Benefitting from the stable and robust performance of REST vs. other reference methods, REST might best recover the real FCG of EEG. Thus, REST based FCG may be a good candidate to compare the FCG of EEG based on different references from different labs. PMID:28725175
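For binarized connectivity graphs, the Hamming distance metric used above reduces to counting edges present in one graph but not the other. A minimal sketch with hypothetical adjacency matrices (symmetric, zero diagonal), not the authors' pipeline:

```python
import numpy as np

def fcg_hamming(A, B):
    """Hamming distance between two binarized functional connectivity
    graphs given as symmetric adjacency matrices: the number of
    mismatched edges, counted over the upper triangle only so each
    undirected edge contributes at most once."""
    iu = np.triu_indices_from(A, k=1)
    return int(np.sum(A[iu].astype(bool) != B[iu].astype(bool)))
```

A distance of 0 means the two references yield identical graph topology; larger values quantify how strongly the reference choice distorts the FCG.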

  20. Kappa-Electrons Downstream of the Solar Wind Termination Shock

    NASA Astrophysics Data System (ADS)

    Fahr, H. J.

    2017-12-01

    A theoretical description of the solar wind electron distribution function downstream of the termination shock under the influence of the shock-induced injection of overshooting keV-energetic electrons will be presented. A kinetic phase-space transport equation in the bulk frame of the heliosheath plasma flow is developed for the solar wind electrons, taking into account shock-induced electron injection, convective changes, magnetic cooling processes and whistler wave-induced energy diffusion. Assuming that the local electron distribution under the prevailing non-LTE conditions can be represented by a local kappa function with a local kappa parameter that varies with the streamline coordinates, we determine the parameters of the resulting, initial kappa distribution for the downstream electrons. From this initial function, spectral electron fluxes can be derived and compared with those measured by the VOYAGER-1 spacecraft in the range between 40 and 70 keV. It can then be shown that with kappa values around kappa = 6 one can in fact fit these data very satisfactorily. In addition, it is shown that for isentropic electron flows kappa-distributed electrons have to undergo simultaneous changes of both parameters of the electron kappa function, i.e. kappa and theta. It is also shown that under the influence of energy sinks and sources the electron flux becomes non-isentropic, with electron entropies changing along the streamline.
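The local kappa function referred to above can be illustrated with the standard isotropic 3D form; this is a common textbook convention (parameters n, theta, kappa), not necessarily the exact normalization used in the paper:

```python
import math
import numpy as np

def kappa_dist(v, n, theta, kappa):
    """Standard isotropic 3D kappa velocity distribution.

    f(v) = n (pi kappa theta^2)^{-3/2} Gamma(kappa+1)/Gamma(kappa-1/2)
           * [1 + v^2/(kappa theta^2)]^{-(kappa+1)},
    normalized so the integral over velocity space equals n. For
    kappa -> infinity it approaches a Maxwellian with thermal speed
    theta; small kappa (e.g. kappa ~ 6) gives a power-law tail.
    """
    norm = (n * (math.pi * kappa * theta ** 2) ** -1.5
            * math.gamma(kappa + 1) / math.gamma(kappa - 0.5))
    return norm * (1.0 + v ** 2 / (kappa * theta ** 2)) ** (-(kappa + 1))
```

The suprathermal tail f ~ v^{-2(kappa+1)} is what allows a kappa ≈ 6 fit to reproduce the measured 40-70 keV electron fluxes.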

  1. Molecular fingerprinting of particulate organic matter as a new tool for its source apportionment: changes along a headwater drainage in coarse, medium and fine particles as a function of rainfalls

    NASA Astrophysics Data System (ADS)

    Jeanneau, Laurent; Rowland, Richard; Inamdar, Shreeram

    2018-02-01

    Tracking the sources of particulate organic matter (POM) exported from catchments is important to understand the transfer of energy from soils to oceans. The suitability of investigating the molecular composition of POM by thermally assisted hydrolysis and methylation using tetramethylammonium hydroxide directly coupled to gas chromatography and mass spectrometry is presented. The results of this molecular-fingerprint approach were compared with previously published elemental (% C, % N) and isotopic data (δ13C, δ15N) acquired in a nested headwater catchment in the Piedmont region, eastern United States of America (12 and 79 ha). The concordance between these results highlights the effectiveness of this molecular tool as a valuable method for source fingerprinting of POM. It emphasizes litter as the main source of exported POM at the upstream location (80 ± 14 %), with an increasing proportion of streambed (SBed) sediment remobilization downstream (42 ± 29 %), specifically during events characterized by high rainfall amounts. At the upstream location, the source of POM seems to be controlled by the maximum and median hourly rainfall intensity. An added value of this method is to directly investigate chemical biomarkers and to mine their distributions in terms of biogeochemical functioning of an ecosystem. In this catchment, the distributions of plant-derived biomarkers characterizing lignin, cutin and suberin inputs were similar in SBed and litter, while the proportion of microbial markers was 4 times higher in SBed than in litter. These results indicate that SBed OM was largely from plant litter that has been processed by the aquatic microbial community.

  2. Kinematics, influence functions and field quantities for disturbance propagation from moving disturbance sources

    NASA Technical Reports Server (NTRS)

    Das, A.

    1984-01-01

    A unified method is presented for deriving the influence functions of moving singularities which determine the field quantities in aerodynamics and aeroacoustics. The moving singularities comprise volume and surface distributions having arbitrary orientations in space and relative to the trajectory. This yields one generally valid formula for the influence functions, which reveals some universal relationships and remarkable properties of the disturbance fields. The derivations used are completely consistent with the physical processes in the propagation field, so that the treatment renders new descriptions for some standard concepts. The treatment is uniformly valid for subsonic and supersonic Mach numbers.

  3. Doppler interpretation of quasar red shifts.

    PubMed

    Zapolsky, H S

    1966-08-05

    The hypothesis that the quasistellar sources (quasars) are local objects moving with velocities close to the speed of light is examined. Provided there is no observational cutoff on apparent bolometric magnitude for the quasars, the transverse Doppler effect leads to the expectation of fewer blue shifts than red shifts for an isotropic distribution of velocities. Such a distribution also yields a function N(z), the number of objects with red shift less than z which is not inconsistent with the present data. On the basis of two extreme assumptions concerning the origin of such rapidly moving sources, we computed curves of red shift plotted against magnitude. In particular, the curve obtained on the assumption that the quasars originated from an explosion in or nearby our own galaxy is in as good agreement with the observations as the curve of cosmological red shift plotted against magnitude.
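The transverse Doppler argument above can be made concrete with the special-relativistic Doppler formula. A minimal sketch (not the paper's calculation), with theta the angle between the source velocity and the line of sight toward the observer, so theta = 0 means approach:

```python
import math

def redshift(beta, theta):
    """Relativistic Doppler shift: 1 + z = gamma * (1 - beta*cos(theta)).

    At theta = 90 deg the time-dilation factor gamma > 1 alone produces
    a redshift (the transverse Doppler effect), which is why an
    isotropic distribution of relativistic velocities yields fewer
    blue shifts than red shifts.
    """
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return gamma * (1.0 - beta * math.cos(theta)) - 1.0
```

Only sources moving sufficiently close to the line of sight toward the observer (cos(theta) > (1 - 1/gamma)/beta) appear blueshifted; everything else in an isotropic distribution is redshifted.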

  4. Venus ionosphere: photochemical and thermal diffusion control of ion composition.

    PubMed

    Bauer, S J; Donahue, T M; Hartle, R E; Taylor, H A

    1979-07-06

    The major photochemical sources and sinks for ten of the ions measured by the ion mass spectrometer on the Pioneer Venus bus and orbiter spacecraft that are consistent with the neutral gas composition measured on the same spacecraft have been identified. The neutral gas temperature (Tn) as a function of solar zenith angle (χ) derived from measured ion distributions in photochemical equilibrium is given by Tn(K) = 323 cos^(1/5)(χ). Above 200 kilometers, the altitude behavior of ions is generally controlled by plasma diffusion, with important modifications for minor ions due to thermal diffusion resulting from the observed gradients of plasma temperatures. The dayside equilibrium distributions of ions are sometimes perturbed by plasma convection, while lateral transport of ions from the dayside seems to be a major source of the nightside ionosphere.
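
    The quoted temperature law can be evaluated directly. This sketch simply restates the abstract's fit; the function name and degree-valued input are our conventions, and the law only applies on the dayside where cos(χ) > 0:

```python
import math

def neutral_temperature(chi_deg):
    """Dayside neutral gas temperature from the abstract's fit,
    Tn(K) = 323 * cos^(1/5)(chi), valid for chi < 90 deg where cos > 0."""
    return 323.0 * math.cos(math.radians(chi_deg)) ** 0.2
```

    At the subsolar point (χ = 0) this gives the maximum of 323 K, falling slowly toward the terminator because of the 1/5 exponent.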

  5. Slope failure as an upslope source of stream wood

    Treesearch

    Daniel Miller

    2013-01-01

    Large woody debris is recognized as an important component of stream geomorphology and stream ecosystem function, and forest-land management is recognized as an important control on the quantity (and size and species distributions) of wood available for recruitment to streams. Much of the wood present in streams comes from adjacent forests, and riparian management...

  6. On the sources of vegetation activity variation, and their relation with water balance in Mexico

    Treesearch

    F. Mora; L.R. Iverson

    1998-01-01

    Natural landscape surface processes are largely controlled by the relationship between climate and vegetation. Water balance integrates the effects of climate on patterns of vegetation distribution and productivity, and for that reason, functional relationships can be established using water balance variables as predictors of vegetation response. In this study, we...

  7. Bit-Wise Arithmetic Coding For Compression Of Data

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron

    1996-01-01

    Bit-wise arithmetic coding is a data-compression scheme intended especially for use with uniformly quantized data from a source with a Gaussian, Laplacian, or similar probability distribution function. Code words are of fixed length, and bits are treated as being independent. The scheme serves as a means of progressive transmission or of overcoming buffer-overflow or rate-constraint limitations that sometimes arise when data compression is used.
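
    As a rough illustration of why such sources compress well, one can compare the entropy bound that an arithmetic coder approaches against the fixed-length word size. The Laplacian scale and level count below are illustrative assumptions, not parameters from the record:

```python
import math

def laplacian_pmf(levels, b=1.0):
    """Probability mass function over quantizer output levels for a
    Laplacian source with scale b (illustrative parameters)."""
    weights = [math.exp(-abs(n) / b) for n in levels]
    total = sum(weights)
    return [w / total for w in weights]

def entropy_bits(pmf):
    """Shannon entropy -sum p*log2(p): the average code length per sample
    that an ideal arithmetic coder approaches."""
    return -sum(p * math.log2(p) for p in pmf if p > 0.0)

levels = list(range(-7, 8))   # 15 levels: fixed-length coding needs ~3.9 bits
pmf = laplacian_pmf(levels)
# entropy_bits(pmf) is well below log2(15), showing the compression headroom
# that a peaked (Laplacian-like) quantized source leaves on the table.
```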

  8. High-frequency modulation of ion-acoustic waves.

    NASA Technical Reports Server (NTRS)

    Albright, N. W.

    1972-01-01

    A large amplitude, high-frequency electromagnetic oscillation is impressed on a nonrelativistic, collisionless plasma from an external source. The frequency is chosen to be far from the plasma frequency (in fact, lower). The resulting electron velocity distribution function strongly modifies the propagation of ion-acoustic waves parallel to the oscillating electric field. The complex frequency is calculated numerically.

  9. Prostate cancer and industrial pollution: Risk around putative focus in a multi-source scenario.

    PubMed

    Ramis, Rebeca; Diggle, Peter; Cambra, Koldo; López-Abente, Gonzalo

    2011-04-01

    Prostate cancer is the second most common type of cancer among men but its aetiology is still largely unknown. Different studies have proposed several risk factors such as ethnic origin, age, genetic factors, hormonal factors, diet and insulin-like growth factor, but the spatial distribution of the disease suggests that other environmental factors are involved. This paper studies the spatial distribution of prostate cancer mortality in an industrialized area using distances from each of a number of industrial facilities as indirect measures of exposure to industrial pollution. We studied the Gran Bilbao area (Spain) with a population of 791,519 inhabitants distributed in 657 census tracts. There were 20 industrial facilities within the area, 8 of them in the central axis of the region. We analysed prostate cancer mortality during the period 1996-2003. There were 883 deaths giving a crude rate of 14 per 100,000 inhabitants. We extended the standard Poisson regression model by the inclusion of a multiplicative non-linear function to model the effect of distance from an industrial facility. The function's shape combined an elevated risk close to the source with a neutral effect at large distance. We also included socio-demographic covariates in the model to control potential confounding. We aggregated the industrial facilities by sector: metal, mineral, chemical and other activities. Results relating to metal industries showed a significantly elevated risk by a factor of approximately 1.4 in the immediate vicinity, decaying with distance to a value of 1.08 at 12km. The remaining sectors did not show a statistically significant excess of risk at the source. 
Notwithstanding the limitations of this kind of study, we found evidence of association between the spatial distribution of prostate cancer mortality aggregated by census tracts and proximity to metal industrial facilities located within the area, after adjusting for socio-demographic characteristics at municipality level. Copyright © 2010 Elsevier Ltd. All rights reserved.
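
    As a hedged sketch of the kind of multiplicative distance-risk function described (not the study's actual fitted model), one can write a form with elevated risk at the source decaying to a neutral effect. The parameters alpha and beta_km below are illustrative values chosen to reproduce the reported risk of roughly 1.4 near the source and 1.08 at 12 km:

```python
import math

def relative_risk(d_km, alpha=0.4, beta_km=7.46):
    """Hypothetical multiplicative distance-risk function: elevated risk
    near the source, decaying to a neutral effect (RR -> 1) far away.
    alpha and beta_km are illustrative, tuned to the reported values of
    ~1.4 at the source and ~1.08 at 12 km."""
    return 1.0 + alpha * math.exp(-d_km / beta_km)
```

    In a Poisson regression, such a term multiplies the expected counts for each census tract, so covariate adjustment and the distance effect are estimated jointly.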

  10. A fast and robust method for moment tensor and depth determination of shallow seismic events in CTBT related studies.

    NASA Astrophysics Data System (ADS)

    Baker, Ben; Stachnik, Joshua; Rozhkov, Mikhail

    2017-04-01

    The International Data Centre is required to conduct expert technical analysis and special studies to improve event parameters and assist State Parties in identifying the source of specific events, in accordance with the Protocol to the Comprehensive Nuclear-Test-Ban Treaty. Determination of a seismic event's source mechanism and depth is closely related to these tasks. It is typically done through a linearized inversion of the waveforms for a complete or partial set of source parameters, or through a similarly defined grid search over precomputed Green's functions created for particular source models. In this presentation we demonstrate preliminary results obtained with the latter approach from an improved software design. In this development we tried to be compliant with the different modes of the CTBT monitoring regime: cover a wide range of source-receiver distances (regional to teleseismic), resolve shallow source depths, provide full moment tensor solutions based on body- and surface-wave recordings, be fast enough for both on-demand studies and automatic processing, properly incorporate observed waveforms and any a priori uncertainties, and accurately estimate a posteriori uncertainties. Posterior distributions of moment tensor parameters show narrow peaks where a significant number of reliable surface wave observations are available. For earthquake examples, fault orientation (strike, dip, and rake) posterior distributions also provide results consistent with published catalogues. Inclusion of observations on horizontal components will provide further constraints. In addition, the calculation of teleseismic P-wave Green's functions is improved through prior analysis to determine an appropriate attenuation parameter for each source-receiver path. The implemented HDF5-based pre-packaging of Green's functions allows much greater flexibility in utilizing different software packages and methods for computation. 
Further additions will include the rapid use of Instaseis/AXISEM full-waveform synthetics added to a pre-computed GF archive. Along with traditional post-processing analysis of waveform misfits through several objective functions and variance reduction, we follow a probabilistic approach to assess the robustness of the moment tensor solution. In the course of this project, full moment tensor and depth estimates are determined for DPRK events and shallow earthquakes using a new implementation of teleseismic P-wave waveform fitting. A full grid search over the entire moment tensor space is used to appropriately sample all possible solutions. A recent method by Tape & Tape (2012) to discretize the complete moment tensor space from a geometric perspective is used. Probabilistic uncertainty estimates on the moment tensor parameters lend robustness to the solution.

  11. Changes in bacterial composition of biofilm in a metropolitan drinking water distribution system.

    PubMed

    Revetta, R P; Gomez-Alvarez, V; Gerke, T L; Santo Domingo, J W; Ashbolt, N J

    2016-07-01

    This study examined the development of bacterial biofilms within a metropolitan distribution system. The distribution system is fed with different source water (i.e. groundwater, GW, and surface water, SW) and undergoes different treatment processes in separate facilities. The biofilm community was characterized using 16S rRNA gene clone libraries and functional potential analysis, generated from total DNA extracted from coupons in biofilm annular reactors fed with onsite drinking water for up to 18 months. Differences in the bacterial community structure were observed between GW and SW. Representatives that explained the dissimilarity were associated with the classes Betaproteobacteria, Alphaproteobacteria, Actinobacteria, Gammaproteobacteria and Firmicutes. After 9 months the biofilm bacterial communities from both GW and SW were dominated by Mycobacterium species. The distribution of the dominant operational taxonomic unit (OTU) (Mycobacterium) positively correlated with the drinking water distribution system (DWDS) temperature. In this study, the biofilm community structures observed for GW and SW were dissimilar, while communities from different locations receiving SW did not show significant differences. The results suggest that source water and/or the water quality shaped by their respective treatment processes may play an important role in shaping the bacterial communities in the distribution system. In addition, several bacterial groups were present in all samples, suggesting that they are an integral part of the core microbiota of this DWDS. These results provide an ecological insight into biofilm bacterial structure in chlorine-treated drinking water influenced by different water sources and their respective treatment processes. Published 2016. This article is a U.S. Government work and is in the public domain in the USA.

  12. A 3D tomographic reconstruction method to analyze Jupiter's electron-belt emission observations

    NASA Astrophysics Data System (ADS)

    Santos-Costa, Daniel; Girard, Julien; Tasse, Cyril; Zarka, Philippe; Kita, Hajime; Tsuchiya, Fuminori; Misawa, Hiroaki; Clark, George; Bagenal, Fran; Imai, Masafumi; Becker, Heidi N.; Janssen, Michael A.; Bolton, Scott J.; Levin, Steve M.; Connerney, John E. P.

    2017-04-01

    Multi-dimensional reconstruction techniques of Jupiter's synchrotron radiation from radio-interferometric observations were first developed by Sault et al. [Astron. Astrophys., 324, 1190-1196, 1997]. The tomographic-like technique introduced 20 years ago permitted the first 3-dimensional mapping of the brightness distribution around the planet. This technique has the advantage of being weakly dependent on planetary field models. It also does not require any knowledge of the energy and spatial distributions of the radiating electrons. On the downside, it assumes that the volume emissivity of any point source around the planet is isotropic. This assumption becomes incorrect when mapping the brightness distribution for non-equatorial point sources or any point sources from Juno's perspective. In this paper, we present our modeling effort to bypass the isotropy issue. Our approach is to use radio-interferometric observations and determine the 3-D brightness distribution in a cylindrical coordinate system. For each set (z, r), we constrain the longitudinal distribution with a Fourier series, and the anisotropy is addressed with a simple periodic function when possible. We develop this new method over a wide range of frequencies using past VLA and LOFAR observations of Jupiter. We plan to test this reconstruction method with observations of Jupiter that are currently being carried out with LOFAR and GMRT in support of the Juno mission. We describe how this new 3D tomographic reconstruction method provides new model constraints on the energy and spatial distributions of Jupiter's ultra-relativistic electrons close to the planet, and how it can be used to interpret Juno MWR observations of Jupiter's electron-belt emission and assist in evaluating the background noise from the radiation environment in the atmospheric measurements.
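
    The longitudinal Fourier-series constraint can be illustrated with a toy reconstruction. The sampling grid and the test profile below are our assumptions for the sketch, not the authors' data:

```python
import math

def fourier_coefficients(samples, kmax):
    """Real trigonometric-series coefficients of a periodic profile sampled
    uniformly over one period:
    f(t) ~ a[0] + sum_k a[k]*cos(k t) + b[k]*sin(k t)."""
    n = len(samples)
    theta = [2.0 * math.pi * j / n for j in range(n)]
    a = [sum(samples) / n]   # a[0] is the longitudinal mean
    b = [0.0]
    for k in range(1, kmax + 1):
        a.append(2.0 / n * sum(f * math.cos(k * t) for f, t in zip(samples, theta)))
        b.append(2.0 / n * sum(f * math.sin(k * t) for f, t in zip(samples, theta)))
    return a, b

# Recover a known longitudinal brightness profile: f(t) = 2 + cos(t) + 0.5*sin(2t)
n = 36
profile = [2.0 + math.cos(2 * math.pi * j / n) + 0.5 * math.sin(4 * math.pi * j / n)
           for j in range(n)]
a, b = fourier_coefficients(profile, 2)
```

    Truncating the series at a low kmax is what regularizes the longitudinal distribution at each (z, r) in such a scheme.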

  13. 78 FR 56685 - SourceGas Distribution LLC; Notice of Application

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-13

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. CP13-540-000] SourceGas Distribution LLC; Notice of Application Take notice that on August 27, 2013, SourceGas Distribution LLC (Source... areas across the Nebraska-Colorado border within which SourceGas may, without further commission...

  14. Recent advancements in the SQUID magnetospinogram system

    NASA Astrophysics Data System (ADS)

    Adachi, Yoshiaki; Kawai, Jun; Haruta, Yasuhiro; Miyamoto, Masakazu; Kawabata, Shigenori; Sekihara, Kensuke; Uehara, Gen

    2017-06-01

    In this study, a new superconducting quantum interference device (SQUID) biomagnetic measurement system known as a magnetospinogram (MSG) is developed. The MSG system is used to observe the weak magnetic field distribution induced over the body surface by the neural activity of the spinal cord. Current source reconstruction from the observed magnetic field distribution provides noninvasive functional imaging of the spinal cord, which enables medical personnel to diagnose spinal cord diseases more accurately. The MSG system is equipped with a uniquely shaped cryostat and a sensor array of vector-type SQUID gradiometers designed to detect the magnetic field from deep sources across a narrow observation area over the body surface of supine subjects. The latest prototype of the MSG system is already applied in clinical studies to develop a diagnosis protocol for spinal cord diseases. Advancements in hardware and software for MSG signal processing and in cryogenic components effectively suppress external magnetic field noise and reduce the cost of liquid helium, two barriers to the introduction of the MSG system to hospitals. Given its advantages for investigating deep sources, the application of the MSG system is being extended to various biomagnetic measurements in addition to spinal cord functional imaging. The study also reports on the recent advancements of the SQUID MSG system, including its peripheral technologies and wide-spread applications.

  15. Systematic Variability of the He+ Pickup Ion Velocity Distribution Function Observed with SOHO/CELIAS/CTOF

    NASA Astrophysics Data System (ADS)

    Taut, Andreas; Drews, Christian; Berger, Lars; Wimmer-Schweingruber, Robert

    2016-04-01

    The 1D Velocity Distribution Function (VDF) of He+ pickup ions shows two distinct populations that reflect the sources of these ions. The highly suprathermal population is the result of the ionization and pickup of almost resting interstellar neutrals that are injected into the solar wind as a highly anisotropic torus distribution. The nearly thermalized population is centered around the solar wind bulk speed and is mainly attributed to inner-source pickup ions that originate in the inner heliosphere. Current pickup ion models assume a rapid isotropization of the initial VDF by resonant wave-particle interactions, but recent observations by Drews et al. (2015) of a torus-like VDF strongly limit this isotropization. This in turn means that more observational data are needed to further characterize the kinetic behavior of pickup ions. The Charge-Time-Of-Flight sensor on board SOHO offers unrivaled counting statistics for He+ and a sufficient mass-per-charge resolution; thus, the He+ VDF can be observed on comparatively short timescales. We combine these data with magnetic field data from WIND via an extrapolation to the location of SOHO. On the one hand, we investigate the 1D VDF of He+ pickup ions with respect to different magnetic field orientations. Our findings complement previous studies with other instruments that show an anisotropy of the VDF linked to the initial torus VDF. On the other hand, we find a significant modification of the VDF during stream-interaction regions. This may be linked to a different cooling behaviour in these regions and/or the absence of inner-source He+ during these times. Here, we report on our preliminary results.

  16. Magnetoacoustic tomography with magnetic induction for high-resolution bioimpedance imaging through vector source reconstruction under the static field of MRI magnet.

    PubMed

    Mariappan, Leo; Hu, Gang; He, Bin

    2014-02-01

    Magnetoacoustic tomography with magnetic induction (MAT-MI) is an imaging modality that reconstructs the electrical conductivity of biological tissue from acoustic measurements of Lorentz-force-induced tissue vibration. This study presents the feasibility of the authors' new MAT-MI system and vector source imaging algorithm for a complete reconstruction of the conductivity distribution of real biological tissues with ultrasound spatial resolution. In the present study, using ultrasound beamformation, imaging point spread functions are designed to reconstruct the induced vector source in the object, which is used to estimate the object's conductivity distribution. Both numerical studies and phantom experiments are performed to demonstrate the merits of the proposed method. Through the numerical simulations, the full width at half maximum of the imaging point spread function is calculated to estimate the spatial resolution. The tissue phantom experiments are performed with a MAT-MI imaging system in the static field of a 9.4 T magnetic resonance imaging magnet. The image reconstruction through vector beamformation in the numerical and experimental studies gives a reliable estimate of the conductivity distribution in the object, with a ∼1.5 mm spatial resolution corresponding to the imaging system's 500 kHz ultrasound frequency. In addition, the experimental results suggest that MAT-MI in a high static magnetic field environment can reconstruct images of tissue-mimicking gel phantoms and real tissue samples with reliable conductivity contrast. The results demonstrate that MAT-MI can image the electrical conductivity properties of biological tissues with better than 2 mm spatial resolution at 500 kHz, and that imaging with MAT-MI under a high static magnetic field provides improved contrast for biological tissue conductivity reconstruction.
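
    The reported ~1.5 mm figure is consistent with a simple half-wavelength estimate. The soft-tissue sound speed below is a standard textbook assumption, not a value taken from the paper:

```python
def half_wavelength_resolution(freq_hz, sound_speed=1500.0):
    """Half-wavelength resolution estimate, lambda/2 = c/(2f), assuming a
    typical soft-tissue sound speed of ~1500 m/s."""
    return sound_speed / (2.0 * freq_hz)

# At 500 kHz this gives 1.5e-3 m (1.5 mm), matching the reported resolution.
```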

  17. Generalisation of the identity method for determination of high-order moments of multiplicity distributions with a software implementation

    NASA Astrophysics Data System (ADS)

    Maćkowiak-Pawłowska, Maja; Przybyła, Piotr

    2018-05-01

    Incomplete particle identification limits the experimentally available phase-space region for identified-particle analysis. This problem affects ongoing fluctuation and correlation studies, including the search for the critical point of strongly interacting matter performed at the SPS and RHIC accelerators. In this paper we provide a procedure to obtain nth-order moments of the multiplicity distribution using the identity method, generalising previously published solutions for n=2 and n=3. Moreover, we present an open-source software implementation of this computation, called Idhim, that allows one to obtain the true moments of identified-particle multiplicity distributions from the measured ones, provided the response function of the detector is known.
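
    For reference, the raw moments that such a procedure aims to recover are defined as ⟨n^r⟩ = Σ n^r P(n). The toy distribution below is purely illustrative and omits the detector-response unfolding that the identity method (and Idhim) actually performs:

```python
def raw_moment(pmf, r):
    """r-th raw moment <n^r> of a multiplicity distribution given as a
    {multiplicity: probability} mapping."""
    return sum((n ** r) * p for n, p in pmf.items())

# Toy "true" multiplicity distribution (illustrative only):
toy = {0: 0.25, 1: 0.5, 2: 0.25}
mean = raw_moment(toy, 1)      # first moment: 1.0
second = raw_moment(toy, 2)    # second moment: 1.5
```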

  18. IEEE 1547 Standards Advancing Grid Modernization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Basso, Thomas; Chakraborty, Sudipta; Hoke, Andy

    Technology advances including development of advanced distributed energy resources (DER) and grid-integrated operations and controls functionalities have surpassed the requirements in current standards and codes for DER interconnection with the distribution grid. The full revision of IEEE Standards 1547 (requirements for DER-grid interconnection and interoperability) and 1547.1 (test procedures for conformance to 1547) are establishing requirements and best practices for state-of-the-art DER including variable renewable energy sources. The revised standards will also address challenges associated with interoperability and transmission-level effects, in addition to strictly addressing the distribution grid needs. This paper provides the status and future direction of the ongoing development focus for the 1547 standards.

  19. Results on angular distributions of thermal dileptons in nuclear collisions

    NASA Astrophysics Data System (ADS)

    Usai, Gianluca; NA60 Collaboration

    2009-11-01

    The NA60 experiment at the CERN SPS has studied dimuon production in 158 AGeV In-In collisions. The strong pair excess above the known sources found in the mass region 0.2

  20. First Results on Angular Distributions of Thermal Dileptons in Nuclear Collisions

    NASA Astrophysics Data System (ADS)

    Arnaldi, R.; Banicz, K.; Castor, J.; Chaurand, B.; Cicalò, C.; Colla, A.; Cortese, P.; Damjanovic, S.; David, A.; de Falco, A.; Devaux, A.; Ducroux, L.; En'Yo, H.; Fargeix, J.; Ferretti, A.; Floris, M.; Förster, A.; Force, P.; Guettet, N.; Guichard, A.; Gulkanian, H.; Heuser, J. M.; Keil, M.; Kluberg, L.; Lourenço, C.; Lozano, J.; Manso, F.; Martins, P.; Masoni, A.; Neves, A.; Ohnishi, H.; Oppedisano, C.; Parracho, P.; Pillot, P.; Poghosyan, T.; Puddu, G.; Radermacher, E.; Ramalhete, P.; Rosinsky, P.; Scomparin, E.; Seixas, J.; Serci, S.; Shahoyan, R.; Sonderegger, P.; Specht, H. J.; Tieulent, R.; Usai, G.; Veenhof, R.; Wöhri, H. K.

    2009-06-01

    The NA60 experiment at the CERN Super Proton Synchrotron has studied dimuon production in 158AGeV In-In collisions. The strong excess of pairs above the known sources found in the complete mass region 0.2

  1. Method and system for determining depth distribution of radiation-emitting material located in a source medium and radiation detector system for use therein

    DOEpatents

    Benke, Roland R.; Kearfott, Kimberlee J.; McGregor, Douglas S.

    2003-03-04

    A method, system and a radiation detector system for use therein are provided for determining the depth distribution of radiation-emitting material distributed in a source medium, such as a contaminated field, without the need to take samples, such as extensive soil samples, to determine the depth distribution. The system includes a portable detector assembly with an x-ray or gamma-ray detector having a detector axis for detecting the emitted radiation. The radiation may be naturally-emitted by the material, such as gamma-ray-emitting radionuclides, or emitted when the material is struck by other radiation. The assembly also includes a hollow collimator in which the detector is positioned. The collimator limits the emitted radiation reaching the detector to rays substantially parallel to the detector axis. The collimator may be a hollow cylinder positioned so that its central axis is perpendicular to the upper surface of the large area source when positioned thereon. The collimator allows the detector to angularly sample the emitted radiation over many ranges of polar angles. This is done by forming the collimator as a single adjustable collimator or a set of collimator pieces having various possible configurations when connected together. In any one configuration, the collimator allows the detector to detect only the radiation emitted from a selected range of polar angles measured from the detector axis. Adjustment of the collimator or the detector therein enables the detector to detect radiation emitted from a different range of polar angles. The system further includes a signal processor for processing the signals from the detector wherein signals obtained from different ranges of polar angles are processed together to obtain a reconstruction of the radiation-emitting material as a function of depth, assuming, but not limited to, a spatially-uniform depth distribution of the material within each layer. 
The detector system includes detectors having different properties (sensitivity, energy resolution) which are combined so that excellent spectral information may be obtained along with good determinations of the radiation field as a function of position.
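
    Processing the angular ranges together amounts to solving a linear system relating per-layer activities to detected counts. The two-layer toy below, with made-up detection efficiencies, sketches that idea only; it is not the patented reconstruction:

```python
def invert_two_layer(counts, response):
    """Solve counts = response @ activity for two depth layers by direct
    2x2 inversion (Cramer's rule). Rows of 'response' correspond to the
    collimator's polar-angle ranges, columns to depth layers."""
    (a, b), (c, d) = response
    det = a * d - b * c
    if det == 0.0:
        raise ValueError("degenerate collimator configuration")
    return [(counts[0] * d - b * counts[1]) / det,
            (a * counts[1] - c * counts[0]) / det]

# Hypothetical efficiencies: near-vertical rays sample both layers,
# oblique rays are dominated by the shallow layer.
response = [[0.9, 0.5],
            [0.6, 0.1]]
truth = [2.0, 3.0]                                        # activity per layer
counts = [0.9 * 2.0 + 0.5 * 3.0, 0.6 * 2.0 + 0.1 * 3.0]  # forward model
recovered = invert_two_layer(counts, response)            # ~= truth
```

    With more angular ranges than layers, the same idea becomes an overdetermined least-squares fit rather than a direct inversion.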

  2. Development of High-Resolution Dynamic Dust Source Function - A Case Study with a Strong Dust Storm in a Regional Model

    NASA Technical Reports Server (NTRS)

    Kim, Dongchul; Chin, Mian; Kemp, Eric M.; Tao, Zhining; Peters-Lidard, Christa D.; Ginoux, Paul

    2017-01-01

    A high-resolution dynamic dust source has been developed in the NASA Unified-Weather Research and Forecasting (NU-WRF) model to improve the existing coarse static dust source. In the new dust source map, topographic depression is in 1-km resolution and surface bareness is derived using the Normalized Difference Vegetation Index (NDVI) data from Moderate Resolution Imaging Spectroradiometer (MODIS). The new dust source better resolves the complex topographic distribution over the Western United States where its magnitude is higher than the existing, coarser resolution static source. A case study is conducted with an extreme dust storm that occurred in Phoenix, Arizona in 02-03 UTC July 6, 2011. The NU-WRF model with the new high-resolution dynamic dust source is able to successfully capture the dust storm, which was not achieved with the old source identification. However the case study also reveals several challenges in reproducing the time evolution of the short-lived, extreme dust storm events.

  3. Development of High-Resolution Dynamic Dust Source Function -A Case Study with a Strong Dust Storm in a Regional Model

    PubMed Central

    Kim, Dongchul; Chin, Mian; Kemp, Eric M.; Tao, Zhining; Peters-Lidard, Christa D.; Ginoux, Paul

    2018-01-01

    A high-resolution dynamic dust source has been developed in the NASA Unified-Weather Research and Forecasting (NU-WRF) model to improve the existing coarse static dust source. In the new dust source map, topographic depression is in 1-km resolution and surface bareness is derived using the Normalized Difference Vegetation Index (NDVI) data from Moderate Resolution Imaging Spectroradiometer (MODIS). The new dust source better resolves the complex topographic distribution over the Western United States where its magnitude is higher than the existing, coarser resolution static source. A case study is conducted with an extreme dust storm that occurred in Phoenix, Arizona in 02-03 UTC July 6, 2011. The NU-WRF model with the new high-resolution dynamic dust source is able to successfully capture the dust storm, which was not achieved with the old source identification. However the case study also reveals several challenges in reproducing the time evolution of the short-lived, extreme dust storm events. PMID:29632432

  4. Development of High-Resolution Dynamic Dust Source Function -A Case Study with a Strong Dust Storm in a Regional Model.

    PubMed

    Kim, Dongchul; Chin, Mian; Kemp, Eric M; Tao, Zhining; Peters-Lidard, Christa D; Ginoux, Paul

    2017-06-01

    A high-resolution dynamic dust source has been developed in the NASA Unified-Weather Research and Forecasting (NU-WRF) model to improve the existing coarse static dust source. In the new dust source map, topographic depression is in 1-km resolution and surface bareness is derived using the Normalized Difference Vegetation Index (NDVI) data from Moderate Resolution Imaging Spectroradiometer (MODIS). The new dust source better resolves the complex topographic distribution over the Western United States where its magnitude is higher than the existing, coarser resolution static source. A case study is conducted with an extreme dust storm that occurred in Phoenix, Arizona in 02-03 UTC July 6, 2011. The NU-WRF model with the new high-resolution dynamic dust source is able to successfully capture the dust storm, which was not achieved with the old source identification. However the case study also reveals several challenges in reproducing the time evolution of the short-lived, extreme dust storm events.

  5. A hybrid model of biased inductively coupled discharges1

    NASA Astrophysics Data System (ADS)

    Wen, Deqi; Lieberman, Michael A.; Zhang, Quanzhi; Liu, Yongxin; Wang, Younian

    2016-09-01

    A hybrid model, i.e. a global model coupled bidirectionally with a parallel Monte-Carlo collision (MCC) sheath model, is developed to investigate an inductively coupled discharge with a bias source. To validate this model, both the bulk plasma density and the ion energy distribution functions (IEDFs) are compared with experimental measurements in an argon discharge, and a good agreement is obtained. On this basis, the model is extended to weakly electronegative Ar/O2 plasma. The ion energy and angular distribution functions versus bias voltage amplitude are examined. The different ion species (Ar+, O2+, O+) behave differently because of their different masses. At low bias voltage, Ar+ has a single-peaked energy distribution and O+ has a bimodal distribution. At high bias voltage, the energy peak separation of O+ is wider than that of Ar+. 1This work has been supported by the National Nature Science Foundation of China (Grant No. 11335004) and Specific project (Grant No 2011X02403-001) and partially supported by Department of Energy Office of Fusion Energy Science Contract DE-SC000193 and a gift from the Lam Research Corporation.

  6. Isotopic dependence of the fragments' internal temperatures determined from multifragment emission

    NASA Astrophysics Data System (ADS)

    Souza, S. R.; Donangelo, R.

    2018-05-01

    The internal temperatures of fragments produced by an excited nuclear source are investigated by using the microcanonical version of the statistical multifragmentation model, with discrete energy. We focus on the fragments' properties at the breakup stage, before they have time to deexcite by particle emission. Since the adopted model provides the excitation energy distribution of these primordial fragments, it allows one to calculate the temperatures of different isotope families and to make inferences about the sensitivity to their isospin composition. It is found that, due to the functional form of the nuclear density of states and the excitation energy distribution of the fragments, proton-rich isotopes are hotter than neutron-rich isotopes. This property has been taken to be an indication of earlier emission of the former from a source that cools down as it expands and emits fragments. Although this scenario is incompatible with the prompt breakup of a thermally equilibrated source, our results reveal that the latter framework also provides the same qualitative features just mentioned. Therefore they suggest that this property cannot be taken as evidence for nonequilibrium emission. We also found that this sensitivity to the isotopic composition of the fragments depends on the isospin composition of the source, and that it is weakened as the excitation energy of the source increases.

  7. New reversing freeform lens design method for LED uniform illumination with extended source and near field

    NASA Astrophysics Data System (ADS)

    Zhao, Zhili; Zhang, Honghai; Zheng, Huai; Liu, Sheng

    2018-03-01

    In light-emitting diode (LED) array illumination (e.g. LED backlighting), obtaining high uniformity under the harsh conditions of a large distance-height ratio (DHR), an extended source, and a near-field target is a key and challenging issue. In this study, we present a new reversing freeform lens design algorithm based on the illuminance distribution function (IDF) instead of the traditional light intensity distribution, which allows uniform LED illumination under the above-mentioned harsh conditions. The IDF of the freeform lens can be obtained by the proposed mathematical method, considering the effects of large DHR, extended source, and near-field target at the same time. To prove these claims, a slim direct-lit LED backlight with DHR equal to 4 is designed. In comparison with traditional lenses, the illuminance uniformity of LED backlighting with the new lens increases significantly from 0.45 to 0.84, and CV(RMSE) decreases dramatically from 0.24 to 0.03 under the harsh condition. Meanwhile, the luminance uniformity of LED backlighting with the new lens is as high as 0.92 under the conditions of an extended source and near field. This new method provides a practical and effective way to solve the problem of large DHR, extended source, and near field for LED array illumination.

  8. The proton and helium anomalies in the light of the Myriad model

    NASA Astrophysics Data System (ADS)

    Salati, Pierre; Génolini, Yoann; Serpico, Pasquale; Taillet, Richard

    2017-03-01

    A hardening of the proton and helium fluxes is observed above a few hundreds of GeV/nuc. The distribution of local sources of primary cosmic rays has been suggested as a potential solution to this puzzling behavior. Some authors even claim that a single source is responsible for the observed anomalies. But how probable are these explanations? To answer that question, our current description of cosmic ray Galactic propagation needs to be replaced by the Myriad model. In the former approach, sources of protons and helium nuclei are treated as a jelly continuously spread over space and time. A more accurate description is provided by the Myriad model, where sources are considered as point-like events. This leads to a probabilistic derivation of the fluxes of primary species, and opens the possibility that larger-than-average values may be observed at the Earth. For a long time though, a major obstacle has been the infinite variance associated with the probability distribution function that the fluxes follow. Several suggestions have been made to cure this problem, but none is entirely satisfactory. We go a step further here and solve the infinite variance problem of the Myriad model by making use of the generalized central limit theorem. We find that primary fluxes are distributed according to a stable law with a heavy tail, well known to financial analysts. The probability that the proton and helium anomalies are sourced by local SNRs can then be calculated. The p-values associated with the CREAM measurements turn out to be small, unless somewhat unrealistic propagation parameters are assumed.
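    The heavy-tail behavior that motivates the generalized central limit theorem can be illustrated with a toy numerical experiment (this is a sketch with invented parameters, not the authors' propagation model): summing many Pareto-distributed (infinite-variance) point-source contributions yields a total flux whose distribution remains strongly right-skewed, with single sources occasionally dominating the sum.

    ```python
    import numpy as np

    # Toy illustration: the flux at Earth is a sum over many discrete sources.
    # If individual contributions follow a heavy-tailed Pareto law with tail
    # index alpha < 2, the variance is infinite and the summed flux keeps a
    # heavy tail (generalized-CLT / stable-law regime), so larger-than-average
    # values stay probable. Source number and alpha below are assumptions.
    rng = np.random.default_rng(0)

    n_sources = 500      # point-like sources per realization (assumed)
    alpha = 1.5          # tail index < 2 -> infinite variance, finite mean
    n_real = 5000        # Monte Carlo realizations of the total flux

    contrib = rng.pareto(alpha, size=(n_real, n_sources))
    total_flux = contrib.sum(axis=1)

    # Heavy tail: in a sizable fraction of realizations a single source
    # contributes more than 10% of the whole flux.
    frac_dominated = np.mean(contrib.max(axis=1) > 0.1 * total_flux)
    print("realizations with one source giving >10% of the flux:", frac_dominated)

    # Right-skewed limit law: the median sits below the mean.
    print("median, mean of total flux:", np.median(total_flux), total_flux.mean())
    ```

    With a Gaussian (finite-variance) source law the dominance fraction would vanish as the number of sources grows; here it does not, which is the qualitative point of the stable-law description.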

  9. Source modeling and inversion with near real-time GPS: a GITEWS perspective for Indonesia

    NASA Astrophysics Data System (ADS)

    Babeyko, A. Y.; Hoechner, A.; Sobolev, S. V.

    2010-07-01

    We present the GITEWS approach to source modeling for tsunami early warning in Indonesia. A near-field tsunami imposes special requirements on both warning time and the detail of source characterization. To meet these requirements, we employ geophysical and geological information to predefine as many rupture parameters as possible. We discretize the tsunamigenic Sunda plate interface into an ordered grid of patches (150×25) and employ the concept of Green's functions for forward and inverse rupture modeling. Rupture Generator, a forward modeling tool, additionally employs different scaling laws and slip shape functions to construct physically reasonable source models using basic seismic information only (magnitude and epicenter location). GITEWS runs a library of semi- and fully-synthetic scenarios that are extensively employed in system testing as well as in the teaching and training of warning center personnel. Near real-time GPS observations are a very valuable complement to the local tsunami warning system. Their inversion provides a quick (within a few minutes of an event) estimation of the earthquake magnitude, rupture position and, in case of sufficient station coverage, details of the slip distribution.
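    The Green's-function formulation described above makes slip inversion a linear problem: GPS displacements d relate to patch slips s as d = G s, so a quick estimate is regularized least squares. The sketch below is illustrative only; the Green's matrix, station/patch counts, and noise level are all invented stand-ins, not GITEWS values.

    ```python
    import numpy as np

    # Sketch: invert noisy "GPS" offsets d = G @ s for slip s on fault patches.
    # G here is a random stand-in for elastic Green's functions.
    rng = np.random.default_rng(42)

    n_patches, n_stations = 30, 60
    G = rng.normal(size=(n_stations, n_patches))   # hypothetical Green's functions

    s_true = np.zeros(n_patches)
    s_true[10:20] = 2.0                            # a compact slip asperity
    d = G @ s_true + 0.01 * rng.normal(size=n_stations)  # noisy observations

    # Tikhonov-regularized least squares: argmin |G s - d|^2 + lam |s|^2
    lam = 1e-3
    s_hat = np.linalg.solve(G.T @ G + lam * np.eye(n_patches), G.T @ d)

    err = np.linalg.norm(s_hat - s_true) / np.linalg.norm(s_true)
    print("relative slip recovery error:", err)
    ```

    In an operational setting the regularization weight and any positivity or smoothness constraints would be tuned to the station geometry; none of that tuning is represented here.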

  10. Starburst galaxies

    NASA Technical Reports Server (NTRS)

    Weedman, Daniel W.

    1987-01-01

    The infrared properties of star-forming galaxies, primarily as determined by the Infrared Astronomy Satellite (IRAS), are compared to X-ray, optical, and radio properties. Luminosity functions are reviewed and combined with those derived from optically discovered samples using 487 Markarian galaxies with redshifts and published IRAS 60 micron fluxes, and 1074 such galaxies in the Center for Astrophysics redshift survey. It is found that the majority of infrared galaxies which could be detected are low luminosity sources already known from the optical samples, but non-infrared surveys have found only a very small fraction of the highest luminosity sources. Distributions of infrared to optical fluxes and available spectra indicate that the majority of IRAS-selected galaxies are starburst galaxies. Having a census of starburst galaxies and associated dust allows several important global calculations. The source counts are predicted as a function of flux limits for both infrared and radio fluxes. These galaxies are found to be important radio sources at faint flux limits. Taking the integrated flux to z = 3 indicates that such galaxies are a significant component of the diffuse X-ray background, and could be the dominant component depending on the nature of the X-ray spectra and source evolution.

  11. TRIQS: A toolbox for research on interacting quantum systems

    NASA Astrophysics Data System (ADS)

    Parcollet, Olivier; Ferrero, Michel; Ayral, Thomas; Hafermann, Hartmut; Krivenko, Igor; Messio, Laura; Seth, Priyanka

    2015-11-01

    We present the TRIQS library, a Toolbox for Research on Interacting Quantum Systems. It is an open-source, computational physics library providing a framework for the quick development of applications in the field of many-body quantum physics, and in particular, strongly-correlated electronic systems. It supplies components to develop codes in a modern, concise and efficient way: e.g. Green's function containers, a generic Monte Carlo class, and simple interfaces to HDF5. TRIQS is a C++/Python library that can be used from either language. It is distributed under the GNU General Public License (GPLv3). State-of-the-art applications based on the library, such as modern quantum many-body solvers and interfaces between density-functional-theory codes and dynamical mean-field theory (DMFT) codes are distributed along with it.

  12. Nonthermal electrons in the thick-target reverse-current model for hard X-ray bremsstrahlung

    NASA Astrophysics Data System (ADS)

    Litvinenko, Iu. E.; Somov, B. V.

    1991-02-01

    The behavior of the accelerated electrons escaping from a high-temperature source of primary energy in a solar flare is investigated. The direct current of fast electrons is supposed to be balanced by the reverse current of thermal electrons in the ambient colder plasma inside flare loops. The self-consistent kinetic problem is formulated, and the reverse-current electric field and the fast electron distribution function are found from its solution. The X-ray bremsstrahlung polarization is then calculated from the distribution function. Differences from the results for the case of thermal runaway electrons (Diakonov and Somov, 1988) are discussed. Solutions with and without the effect of the reverse-current electric field are also compared.

  13. Research on illumination uniformity of high-power LED array light source

    NASA Astrophysics Data System (ADS)

    Yu, Xiaolong; Wei, Xueye; Zhang, Ou; Zhang, Xinwei

    2018-06-01

    Uniform illumination is one of the most important problems that must be solved in the application of high-power LED arrays. A numerical optimization algorithm is applied to obtain the best LED array layout so that the light intensity on the target surface is evenly distributed. An evaluation function is set up through the standard deviation of the illuminance function, and the particle swarm optimization algorithm is then utilized to optimize different arrays. Furthermore, the light intensity distribution is obtained by the optical ray tracing method. Finally, a hybrid array is designed and the optical ray tracing method is applied to simulate the array. The simulation results, which are consistent with the traditional theoretical calculation, show that the algorithm introduced in this paper is reasonable and effective.
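    The optimization loop described above can be sketched in a few lines: evaluate a layout by the (normalized) standard deviation of the simulated illuminance, then let a particle swarm search the LED positions. This is a minimal one-dimensional toy with an assumed Lambertian emission exponent, not the paper's array or code.

    ```python
    import numpy as np

    # Particle swarm optimization of LED x-positions on a line so that the
    # illuminance over a target segment is as uniform as possible.
    rng = np.random.default_rng(1)

    h, m = 1.0, 10.0                       # mounting height, Lambertian order (assumed)
    target = np.linspace(-1.0, 1.0, 101)   # points on the illuminated plane

    def illuminance(led_x):
        # E = I0 * cos(theta)^(m+1) / d^2 summed over LEDs (I0 = 1)
        d2 = (target[None, :] - led_x[:, None])**2 + h**2
        cos_t = h / np.sqrt(d2)
        return (cos_t**(m + 1) / d2).sum(axis=0)

    def cost(led_x):
        E = illuminance(led_x)
        return E.std() / E.mean()          # coefficient of variation

    n_particles, n_led, iters = 30, 4, 200
    pos = rng.uniform(-1.0, 1.0, (n_particles, n_led))
    vel = np.zeros_like(pos)
    pbest, pbest_f = pos.copy(), np.array([cost(p) for p in pos])
    g = pbest[pbest_f.argmin()].copy()
    g_f = pbest_f.min()
    w, c1, c2 = 0.7, 1.5, 1.5              # standard PSO coefficients

    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = np.clip(pos + vel, -1.5, 1.5)
        f = np.array([cost(p) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        if f.min() < g_f:
            g_f, g = f.min(), pos[f.argmin()].copy()

    print("best coefficient of variation:", g_f)
    ```

    A clustered layout (all LEDs at the origin) gives a strongly peaked illuminance; the swarm spreads the LEDs to reduce the evaluation function, mirroring the role ray tracing plays for the full 2-D arrays in the study.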

  14. Advanced capabilities for materials modelling with Quantum ESPRESSO

    NASA Astrophysics Data System (ADS)

    Giannozzi, P.; Andreussi, O.; Brumme, T.; Bunau, O.; Buongiorno Nardelli, M.; Calandra, M.; Car, R.; Cavazzoni, C.; Ceresoli, D.; Cococcioni, M.; Colonna, N.; Carnimeo, I.; Dal Corso, A.; de Gironcoli, S.; Delugas, P.; DiStasio, R. A., Jr.; Ferretti, A.; Floris, A.; Fratesi, G.; Fugallo, G.; Gebauer, R.; Gerstmann, U.; Giustino, F.; Gorni, T.; Jia, J.; Kawamura, M.; Ko, H.-Y.; Kokalj, A.; Küçükbenli, E.; Lazzeri, M.; Marsili, M.; Marzari, N.; Mauri, F.; Nguyen, N. L.; Nguyen, H.-V.; Otero-de-la-Roza, A.; Paulatto, L.; Poncé, S.; Rocca, D.; Sabatini, R.; Santra, B.; Schlipf, M.; Seitsonen, A. P.; Smogunov, A.; Timrov, I.; Thonhauser, T.; Umari, P.; Vast, N.; Wu, X.; Baroni, S.

    2017-11-01

    Quantum ESPRESSO is an integrated suite of open-source computer codes for quantum simulations of materials using state-of-the-art electronic-structure techniques, based on density-functional theory, density-functional perturbation theory, and many-body perturbation theory, within the plane-wave pseudopotential and projector-augmented-wave approaches. Quantum ESPRESSO owes its popularity to the wide variety of properties and processes it allows one to simulate, to its performance on an increasingly broad array of hardware architectures, and to a community of researchers that rely on its capabilities as a core open-source development platform to implement their ideas. In this paper we describe recent extensions and improvements, covering new methodologies and property calculators, improved parallelization, code modularization, and extended interoperability both within the distribution and with external software.

  15. Advanced capabilities for materials modelling with Quantum ESPRESSO.

    PubMed

    Giannozzi, P; Andreussi, O; Brumme, T; Bunau, O; Buongiorno Nardelli, M; Calandra, M; Car, R; Cavazzoni, C; Ceresoli, D; Cococcioni, M; Colonna, N; Carnimeo, I; Dal Corso, A; de Gironcoli, S; Delugas, P; DiStasio, R A; Ferretti, A; Floris, A; Fratesi, G; Fugallo, G; Gebauer, R; Gerstmann, U; Giustino, F; Gorni, T; Jia, J; Kawamura, M; Ko, H-Y; Kokalj, A; Küçükbenli, E; Lazzeri, M; Marsili, M; Marzari, N; Mauri, F; Nguyen, N L; Nguyen, H-V; Otero-de-la-Roza, A; Paulatto, L; Poncé, S; Rocca, D; Sabatini, R; Santra, B; Schlipf, M; Seitsonen, A P; Smogunov, A; Timrov, I; Thonhauser, T; Umari, P; Vast, N; Wu, X; Baroni, S

    2017-10-24

    Quantum ESPRESSO is an integrated suite of open-source computer codes for quantum simulations of materials using state-of-the-art electronic-structure techniques, based on density-functional theory, density-functional perturbation theory, and many-body perturbation theory, within the plane-wave pseudopotential and projector-augmented-wave approaches. Quantum ESPRESSO owes its popularity to the wide variety of properties and processes it allows one to simulate, to its performance on an increasingly broad array of hardware architectures, and to a community of researchers that rely on its capabilities as a core open-source development platform to implement their ideas. In this paper we describe recent extensions and improvements, covering new methodologies and property calculators, improved parallelization, code modularization, and extended interoperability both within the distribution and with external software.

  16. Advanced capabilities for materials modelling with Quantum ESPRESSO.

    PubMed

    Andreussi, Oliviero; Brumme, Thomas; Bunau, Oana; Buongiorno Nardelli, Marco; Calandra, Matteo; Car, Roberto; Cavazzoni, Carlo; Ceresoli, Davide; Cococcioni, Matteo; Colonna, Nicola; Carnimeo, Ivan; Dal Corso, Andrea; de Gironcoli, Stefano; Delugas, Pietro; DiStasio, Robert; Ferretti, Andrea; Floris, Andrea; Fratesi, Guido; Fugallo, Giorgia; Gebauer, Ralph; Gerstmann, Uwe; Giustino, Feliciano; Gorni, Tommaso; Jia, Junteng; Kawamura, Mitsuaki; Ko, Hsin-Yu; Kokalj, Anton; Küçükbenli, Emine; Lazzeri, Michele; Marsili, Margherita; Marzari, Nicola; Mauri, Francesco; Nguyen, Ngoc Linh; Nguyen, Huy-Viet; Otero-de-la-Roza, Alberto; Paulatto, Lorenzo; Poncé, Samuel; Giannozzi, Paolo; Rocca, Dario; Sabatini, Riccardo; Santra, Biswajit; Schlipf, Martin; Seitsonen, Ari Paavo; Smogunov, Alexander; Timrov, Iurii; Thonhauser, Timo; Umari, Paolo; Vast, Nathalie; Wu, Xifan; Baroni, Stefano

    2017-09-27

    Quantum ESPRESSO is an integrated suite of open-source computer codes for quantum simulations of materials using state-of-the-art electronic-structure techniques, based on density-functional theory, density-functional perturbation theory, and many-body perturbation theory, within the plane-wave pseudopotential and projector-augmented-wave approaches. Quantum ESPRESSO owes its popularity to the wide variety of properties and processes it allows one to simulate, to its performance on an increasingly broad array of hardware architectures, and to a community of researchers that rely on its capabilities as a core open-source development platform to implement their ideas. In this paper we describe recent extensions and improvements, covering new methodologies and property calculators, improved parallelization, code modularization, and extended interoperability both within the distribution and with external software. © 2017 IOP Publishing Ltd.

  17. Acceleration of O+ from the cusp to the plasma sheet

    NASA Astrophysics Data System (ADS)

    Liao, J.; Kistler, L. M.; Mouikis, C. G.; Klecker, B.; Dandouras, I.

    2015-02-01

    Heavy ions from the ionosphere that are accelerated in the cusp/cleft have been identified as a direct source for the hot plasma in the plasma sheet. However, the details of the acceleration and transport that transforms the originally cold ions into the hot plasma sheet population are not fully understood. The polar orbit of the Cluster satellites covers the main transport path of the O+ from the cusp to the plasma sheet, so Cluster is ideal for tracking its velocity changes. However, because the cusp outflow is dispersed according to its velocity as it is transported to the tail, due to the velocity filter effect, the observed changes in beam velocity over the Cluster orbit may simply be the result of the spacecraft accessing different spatial regions and not necessarily evidence of acceleration. Using the Cluster Ion Spectrometry/Composition Distribution Function instrument onboard Cluster, we compare the distribution function of streaming O+ in the tail lobes with the initial distribution function observed over the cusp and reveal that the observations of energetic streaming O+ in the lobes around -20 RE are predominantly due to the velocity filter effect during nonstorm times. During storm times, the cusp distribution is further accelerated. In the plasma sheet boundary layer, however, the average O+ distribution function is above the upper range of the outflow distributions at the same velocity during both storm and nonstorm times, indicating that acceleration has taken place. Some of the velocity increase is in the direction perpendicular to the magnetic field, indicating that the E × B velocity is enhanced. However, there is also an increase in the parallel direction, which could be due to nonadiabatic acceleration at the boundary or wave heating.

  18. Coherence-length-gated distributed optical fiber sensing based on microwave-photonic interferometry.

    PubMed

    Hua, Liwei; Song, Yang; Cheng, Baokai; Zhu, Wenge; Zhang, Qi; Xiao, Hai

    2017-12-11

    This paper presents a new optical fiber distributed sensing concept based on coherent microwave-photonic interferometry (CMPI), which uses a microwave-modulated coherent light source to interrogate cascaded interferometers for distributed measurement. By scanning the microwave frequencies, the complex microwave spectrum is obtained and converted to time-domain signals at known locations by a complex Fourier transform. The amplitudes of these time-domain pulses are a function of the optical path differences (OPDs) of the distributed interferometers. Cascaded fiber Fabry-Perot interferometers (FPIs) fabricated by femtosecond laser micromachining were used to demonstrate the concept. The experimental results indicated that the strain measurement resolution can be better than 0.6 µε using an FPI with a cavity length of 1.5 cm. Further improvement of the strain resolution to the nε level is achievable by increasing the cavity length of the FPI to over 1 m. The tradeoff between sensitivity and dynamic range is also analyzed in detail. To minimize errors induced by optical power instability (either from the light source or from fiber loss), a single reflector was added in front of each individual FPI as an optical power reference for compensation.
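    The spectrum-to-pulse conversion described above can be verified numerically (the sweep parameters and reflector delays below are invented, not from the paper): each interferometer at delay tau contributes a phase term exp(-i·2π·f·tau) to the complex microwave spectrum, and an inverse FFT over the swept frequencies produces time-domain pulses at the corresponding bins.

    ```python
    import numpy as np

    # Sketch of the CMPI signal chain: sweep N microwave frequencies, build the
    # complex spectrum from two reflector pairs, inverse-FFT to locate them.
    N = 256
    df = 10e6                      # frequency step of the sweep (assumed)
    f = np.arange(N) * df
    dt = 1.0 / (N * df)            # time-bin width after the inverse FFT

    # two cascaded interferometers at delays chosen on the FFT grid
    delays = np.array([50, 120]) * dt
    amps = np.array([1.0, 0.6])    # pulse amplitudes encode the OPDs

    spectrum = (amps[:, None]
                * np.exp(-2j * np.pi * f[None, :] * delays[:, None])).sum(axis=0)
    signal = np.fft.ifft(spectrum)

    peaks = np.argsort(np.abs(signal))[-2:]
    print("pulse locations (bins):", sorted(int(p) for p in peaks))
    ```

    Because the delays sit exactly on the FFT grid, the pulses land in single bins; off-grid delays would spread over neighboring bins, which is one practical motivation for the windowing and referencing discussed in the abstract.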

  19. Transmission of electric fields due to distributed cloud charges in the atmosphere-ionosphere system

    NASA Astrophysics Data System (ADS)

    Paul, Suman; De, S. S.; Haldar, D. K.; Guha, G.

    2017-10-01

    The transmission of electric fields in the lower atmosphere by thunderclouds with a suitable charge distribution profile has been modeled. The electromagnetic responses of the atmosphere are described through Maxwell's equations together with a time-varying source charge distribution. The conductivities are taken to be exponentially graded functions of altitude. The radial and vertical electric field components are derived for the isotropic, anisotropic and thundercloud regions. Analytical solutions for the total Maxwell current flowing from the cloud into the ionosphere under DC and quasi-static conditions are obtained for the isotropic region. We find that the effect of the charge distribution in thunderclouds produced by lightning discharges diminishes rapidly with increasing altitude. It is also found that the time for the Maxwell current to reach its maximum is longer at higher altitudes.

  20. Consistent Adjoint Driven Importance Sampling using Space, Energy and Angle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peplow, Douglas E.; Mosher, Scott W; Evans, Thomas M

    2012-08-01

    For challenging radiation transport problems, hybrid methods combine the accuracy of Monte Carlo methods with the global information present in deterministic methods. One of the most successful hybrid methods is CADIS (Consistent Adjoint Driven Importance Sampling). This method uses a deterministic adjoint solution to construct a biased source distribution and consistent weight windows to optimize a specific tally in a Monte Carlo calculation. The method has been implemented into transport codes using just the spatial and energy information from the deterministic adjoint and has been used in many applications to compute tallies with much higher figures-of-merit than analog calculations. CADIS also outperforms user-supplied importance values, which usually take long periods of user time to develop. This work extends CADIS to develop weight windows that are a function of the position, energy, and direction of the Monte Carlo particle. Two types of consistent source biasing are presented: one method that biases the source in space and energy while preserving the original directional distribution, and one method that biases the source in space, energy, and direction. Seven simple example problems are presented which compare the use of the standard space/energy CADIS with the new space/energy/angle treatments.
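    The consistent-biasing idea at the heart of CADIS can be demonstrated with a toy discrete problem (the source pdf, importance map, and detector response below are all invented): sample from a biased source q ∝ p·I, where I stands in for the deterministic adjoint, and carry the weight w = p/q so the tally stays unbiased while its variance drops.

    ```python
    import numpy as np

    # Toy source-biasing demonstration: 10 source cells, only cells 8 and 9
    # contribute to the detector tally.
    rng = np.random.default_rng(7)

    p = np.full(10, 0.1)                       # analog (true) source distribution
    response = np.where(np.arange(10) >= 8, 1.0, 0.0)   # detector response
    importance = np.where(np.arange(10) >= 8, 1.0, 0.01)  # adjoint stand-in

    q = p * importance
    q /= q.sum()                               # consistent biased source pdf
    w = p / q                                  # statistical weights

    n = 100_000
    analog = response[rng.choice(10, n, p=p)]
    idx = rng.choice(10, n, p=q)
    biased = w[idx] * response[idx]

    # Both estimators target sum(p * response) = 0.2; the biased one spends
    # almost every history in the important cells.
    print("analog tally:  ", analog.mean())
    print("biased tally:  ", biased.mean())
    print("variance ratio:", biased.var() / analog.var())
    ```

    In real CADIS the importance comes from a deterministic adjoint flux and the same function also sets the weight-window centers, so source weights are born inside the windows; this sketch shows only the unbiasedness and variance-reduction mechanics.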

  1. A study of the sources and sinks of methane and methyl chloroform using a global three-dimensional Lagrangian tropospheric tracer transport model

    NASA Technical Reports Server (NTRS)

    Taylor, John A.; Brasseur, G. P.; Zimmerman, P. R.; Cicerone, R. J.

    1991-01-01

    Sources and sinks of methane and methyl chloroform are investigated using a global three-dimensional Lagrangian tropospheric tracer transport model with parameterized hydroxyl and temperature fields. Using the hydroxyl radical field calibrated to the methyl chloroform observations, the globally averaged release of methane and its spatial and temporal distribution were investigated. Two source function models of the spatial and temporal distribution of the flux of methane to the atmosphere were developed. The first model was based on the assumption that methane is emitted as a proportion of net primary productivity (NPP). The second model identified source regions for methane from rice paddies, wetlands, enteric fermentation, termites, and biomass burning based on high-resolution land use data. The most significant difference between the two models was in the predicted methane fluxes over China and Southeast Asia, the location of most of the world's rice paddies, indicating that either the assumption that a uniform fraction of NPP is converted to methane is not valid for rice paddies, or that NPP is underestimated for rice paddies, or that present methane emission estimates from rice paddies are too high.

  2. Evaluating the impact of improvements to the FLAMBE smoke source model on forecasts of aerosol distribution from NAAPS

    NASA Astrophysics Data System (ADS)

    Hyer, E. J.; Reid, J. S.

    2006-12-01

    As more forecast models aim to include aerosol and chemical species, there is a need for source functions for biomass burning emissions that are accurate, robust, and operable in real-time. NAAPS is a global aerosol forecast model running every six hours and forecasting distributions of biomass burning, industrial sulfate, dust, and sea salt aerosols. This model is run operationally by the U.S. Navy as an aid to planning. The smoke emissions used as input to the model are calculated from the data collected by the FLAMBE system, driven by near-real-time active fire data from GOES WF_ABBA and MODIS Rapid Response. The smoke source function uses land cover data to predict properties of detected fires based on literature data from experimental burns. This scheme is very sensitive to the choice of land cover data sets. In areas of rapid land cover change, the use of static land cover data can produce artifactual changes in emissions unrelated to real changes in fire patterns. In South America, this change may be as large as 40% over five years. We demonstrate the impact of a modified land cover scheme on FLAMBE emissions and NAAPS forecasts, including a fire size algorithm developed using MODIS burned area data. We also describe the effects of corrections to emissions estimates for cloud and satellite coverage. We outline areas where existing data sources are incomplete and improvements are required to achieve accurate modeling of biomass burning emissions in real time.

  3. LUMINOSITY FUNCTIONS OF LMXBs IN CENTAURUS A: GLOBULAR CLUSTERS VERSUS THE FIELD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Voss, Rasmus; Gilfanov, Marat; Sivakoff, Gregory R.

    2009-08-10

    We study the X-ray luminosity function (XLF) of low-mass X-ray binaries (LMXB) in the nearby early-type galaxy Centaurus A, concentrating primarily on two aspects of binary populations: the XLF behavior at the low-luminosity limit and the comparison between globular cluster and field sources. The 800 ksec exposure of the deep Chandra VLP program allows us to reach a limiting luminosity of ≈8 × 10^35 erg s^-1, about 2-3 times deeper than previous investigations. We confirm the presence of the low-luminosity break of the overall LMXB XLF at log(L_X) ≈ 37.2-37.6, below which the luminosity distribution follows a dN/d(ln L) ≈ const law. Separating globular cluster and field sources, we find a statistically significant difference between the two luminosity distributions, with a relative underabundance of faint sources in the globular cluster population. This demonstrates that the samples are drawn from distinct parent populations and may disprove the hypothesis that the entire LMXB population in early-type galaxies is created dynamically in globular clusters. As a plausible explanation for this difference in the XLFs, we suggest an enhanced fraction of helium-accreting systems in globular clusters, which are created in collisions between red giants and neutron stars. Due to the four times higher ionization temperature of He, such systems are subject to accretion disk instabilities at ≈20 times higher mass accretion rate and, therefore, are not observed as persistent sources at low luminosities.

  4. Effect of high-energy electrons on H− production and destruction in a high-current DC negative ion source for cyclotrons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Onai, M., E-mail: onai@ppl.appi.keio.ac.jp; Fujita, S.; Hatayama, A.

    2016-02-15

    Recently, a filament-driven multi-cusp negative ion source has been developed for proton cyclotrons in medical applications. In this study, numerical modeling of the filament arc-discharge source plasma has been performed, with kinetic modeling of electrons in the ion source plasma by the multi-cusp arc-discharge code and zero-dimensional rate equations for hydrogen molecules and negative ions. The main focus is placed on the effects of the arc-discharge power on the electron energy distribution function and the resultant H− production. The modeling results reasonably explain the dependence of the H− extraction current on the arc-discharge power in the experiments.

  5. Sediment delivery to the Gulf of Alaska: source mechanisms along a glaciated transform margin

    USGS Publications Warehouse

    Dobson, M.R.; O'Leary, D.; Veart, M.

    1998-01-01

    Sediment delivery to the Gulf of Alaska occurs via four areally extensive deep-water fans, sourced from grounded tidewater glaciers. During periods of climatic cooling, glaciers cross a narrow shelf and discharge sediment down the continental slope. Because the coastal terrain is dominated by fjords and a narrow, high-relief Pacific watershed, deposition is dominated by channellized point-source fan accumulations, the volumes of which are primarily a function of climate. The sediment distribution is modified by a long-term tectonic translation of the Pacific plate to the north along the transform margin. As a result, the deep-water fans are gradually moved away from the climatically controlled point sources. Sets of abandoned channels record the effect of translation during the Plio-Pleistocene.

  6. Monte Carlo calculated microdosimetric spread for cell nucleus-sized targets exposed to brachytherapy 125I and 192Ir sources and 60Co cell irradiation.

    PubMed

    Villegas, Fernanda; Tilly, Nina; Ahnesjö, Anders

    2013-09-07

    The stochastic nature of ionizing radiation interactions causes a microdosimetric spread in energy depositions for cell or cell nucleus-sized volumes. The magnitude of the spread may be a confounding factor in dose response analysis. The aim of this work is to give values for the microdosimetric spread for a range of doses imparted by 125I and 192Ir brachytherapy radionuclides, and for a 60Co source. An upgraded version of the Monte Carlo code PENELOPE was used to obtain frequency distributions of specific energy for each of these radiation qualities and for four different cell nucleus-sized volumes. The results demonstrate that the magnitude of the microdosimetric spread increases when the target size decreases or when the energy of the radiation quality is reduced. Frequency distributions calculated according to the formalism of Kellerer and Chmelevsky using full convolution of the Monte Carlo calculated single-track frequency distributions confirm that at doses exceeding 0.08 Gy for 125I, 0.1 Gy for 192Ir, and 0.2 Gy for 60Co, the resulting distribution can be accurately approximated with a normal distribution. A parameterization of the width of the distribution as a function of dose and target volume of interest is presented in a form convenient for use in response modelling or similar contexts.
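    The convolution formalism mentioned above, in which the multi-track specific-energy distribution is the n-fold self-convolution of the single-track distribution, can be illustrated with a made-up skewed single-track spectrum (this is not PENELOPE output): as the number of tracks grows, the convolved distribution loses its skewness and approaches a normal, matching the dose thresholds the abstract reports.

    ```python
    import numpy as np

    # Repeated self-convolution of a (hypothetical) single-track pmf.
    def skewness(pmf):
        z = np.arange(pmf.size)
        p = pmf / pmf.sum()
        mu = (z * p).sum()
        var = ((z - mu)**2 * p).sum()
        return ((z - mu)**3 * p).sum() / var**1.5

    single = np.array([0.5, 0.25, 0.12, 0.07, 0.04, 0.02])  # skewed, sums to 1

    multi = single.copy()
    for _ in range(49):             # 50-track spectrum via repeated convolution
        multi = np.convolve(multi, single)

    print("skewness, 1 track  :", skewness(single))
    print("skewness, 50 tracks:", skewness(multi))
    ```

    The skewness of an n-fold convolution falls off like 1/sqrt(n), which is why a normal approximation becomes adequate above a radiation-quality-dependent dose.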

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murray, S. G.; Trott, C. M.; Jordan, C. H.

    We present a sophisticated statistical point-source foreground model for low-frequency radio Epoch of Reionization (EoR) experiments using the 21 cm neutral hydrogen emission line. Motivated by our understanding of the low-frequency radio sky, we enhance the realism of two model components compared with existing models: the source count distribution as a function of flux density, and the spatial distribution of sources (source clustering), extending current formalisms for the foreground covariance of 2D power-spectral modes in 21 cm EoR experiments. The former we generalize to an arbitrarily broken power law, and the latter to an arbitrary isotropically correlated field. This paper presents expressions for the modified covariance under these extensions, and shows that for a more realistic source spatial distribution, extra covariance arises in the EoR window that was previously unaccounted for. Failure to include this contribution can bias the final power spectrum and underestimate uncertainties, potentially leading to a false detection of signal. The extent of this effect is uncertain, owing to ignorance of physical model parameters, but we show that it is dependent on the relative abundance of faint sources, to the effect that our extension will become more important for future deep surveys. Finally, we show that under some parameter choices, ignoring source clustering can lead to false detections on large scales, due to both the induced bias and an artificial reduction in the estimated measurement uncertainty.

  8. Prediction of Down-Gradient Impacts of DNAPL Source Depletion Using Tracer Techniques

    NASA Astrophysics Data System (ADS)

    Basu, N. B.; Fure, A. D.; Jawitz, J. W.

    2006-12-01

    Four simplified DNAPL source depletion models that have been discussed in the literature recently are evaluated for the prediction of long-term effects of source depletion under natural gradient flow. These models are simple in form (a power function equation is an example) but are shown here to serve as mathematical analogs to complex multiphase flow and transport simulators. One of the source depletion models, the equilibrium streamtube model, is shown to be relatively easily parameterized using non-reactive and reactive tracers. Non-reactive tracers are used to characterize the aquifer heterogeneity while reactive tracers are used to describe the mean DNAPL mass and its distribution. This information is then used in a Lagrangian framework to predict source remediation performance. In a Lagrangian approach the source zone is conceptualized as a collection of non-interacting streamtubes with hydrodynamic and DNAPL heterogeneity represented by the variation of the travel time and DNAPL saturation among the streamtubes. The travel time statistics are estimated from the non-reactive tracer data while the DNAPL distribution statistics are estimated from the reactive tracer data. The combined statistics are used to define an analytical solution for contaminant dissolution under natural gradient flow. The tracer prediction technique compared favorably with results from a multiphase flow and transport simulator UTCHEM in domains with different hydrodynamic heterogeneity (variance of the log conductivity field = 0.2, 1 and 3).
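    One of the simplified forms mentioned above is a power-function relation between flux-averaged concentration and remaining source mass. A minimal sketch of that analog, with invented parameter values (the exponent, flow rate, and initial mass are not from the study), couples C/C0 = (M/M0)^Γ to the source mass balance and integrates it forward:

    ```python
    import numpy as np

    # Power-function source-depletion analog: C/C0 = (M/M0)**gamma coupled to
    # dM/dt = -Q*C, integrated with forward Euler. gamma > 1 gives long tailing;
    # gamma < 1 gives a sharp cleanup endpoint.
    def deplete(M0=100.0, C0=1.0, Q=1.0, gamma=1.5, dt=0.1, t_end=500.0):
        t, M, hist = 0.0, M0, []
        while t < t_end and M > 1e-9:
            C = C0 * (M / M0)**gamma                 # flux-averaged concentration
            M = max(M - Q * C * dt, 0.0)             # source mass balance
            hist.append((t, M, C))
            t += dt
        return np.array(hist)

    h = deplete()
    mass = h[:, 1]
    print("remaining mass fraction at t_end:", mass[-1] / 100.0)
    ```

    In the streamtube interpretation, Γ is tied to the joint variability of travel times (from non-reactive tracers) and DNAPL saturations (from reactive tracers), which is how the tracer tests parameterize the depletion curve.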

  9. Using meta-information of a posteriori Bayesian solutions of the hypocentre location task for improving accuracy of location error estimation

    NASA Astrophysics Data System (ADS)

    Debski, Wojciech

    2015-06-01

    The spatial location of sources of seismic waves is one of the first tasks when transient waves from natural (uncontrolled) sources are analysed in many branches of physics, including seismology and oceanology, to name a few. Source activity and its spatial variability in time, the geometry of the recording network, and the complexity and heterogeneity of the wave velocity distribution are all factors influencing the performance of location algorithms and the accuracy of the achieved results. Although estimating the location of earthquake foci is relatively simple, a quantitative estimation of the location accuracy is a challenging task even if the probabilistic inverse method is used, because it requires knowledge of the statistics of observational, modelling and a priori uncertainties. In this paper, we address this task when statistics of observational and/or modelling errors are unknown. This common situation requires the introduction of a priori constraints on the likelihood (misfit) function which significantly influence the estimated errors. Based on the results of an analysis of 120 seismic events from the Rudna copper mine operating in southwestern Poland, we propose an approach based on an analysis of Shannon's entropy calculated for the a posteriori distribution. We show that this meta-characteristic of the a posteriori distribution carries some information on the uncertainties of the solution found.
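
    For a discretized a posteriori distribution, the Shannon entropy meta-characteristic reduces to H = -Σ p_i log p_i. A minimal sketch (illustrative only; the grids and values are hypothetical):

```python
import math

def shannon_entropy(posterior):
    """Shannon entropy (nats) of a discretized a posteriori distribution."""
    total = sum(posterior)
    probs = [p / total for p in posterior if p > 0.0]
    return -sum(p * math.log(p) for p in probs)

# A broad (uniform) posterior is maximally uncertain, with H = log(N) ...
broad = [1.0] * 100
# ... while a sharply peaked posterior carries low entropy.
peaked = [1e-6] * 99 + [1.0]

assert shannon_entropy(broad) > shannon_entropy(peaked)
```

    High entropy flags a diffuse posterior, i.e. a poorly constrained hypocentre, which is the kind of information the paper proposes to extract.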

  10. Presence and distribution of organic wastewater compounds in wastewater, surface, ground, and drinking waters, Minnesota, 2000-02

    USGS Publications Warehouse

    Lee, Kathy E.; Barber, Larry B.; Furlong, Edward T.; Cahill, Jeffery D.; Kolpin, Dana W.; Meyer, Michael T.; Zaugg, Steven D.

    2004-01-01

    Results of this study indicate ubiquitous distribution of measured OWCs in the environment that originate from numerous sources and pathways. During this reconnaissance of OWCs in Minnesota it was not possible to determine the specific sources of OWCs to surface, ground, or drinking waters. The data indicate WWTP effluent is a major pathway of OWCs to surface waters and that landfill leachate at selected facilities is a potential source of OWCs to WWTPs. Aquatic organism or human exposure to some OWCs is likely based on OWC distribution. Few aquatic or human health standards or criteria exist for the OWCs analyzed, and the risks to humans or aquatic wildlife are not known. Some OWCs detected in this study are endocrine disrupters and have been found to disrupt or influence endocrine function in fish. Thirteen endocrine disrupters, 3-tert-butyl-4-hydroxyanisole (BHA), 4-cumylphenol, 4-normal-octylphenol, 4-tert-octylphenol, acetyl-hexamethyl-tetrahydro-naphthalene (AHTN), benzo[a]pyrene, beta-sitosterol, bisphenol-A, diazinon, nonylphenol diethoxylate (NP2EO), octylphenol diethoxylate (OP2EO), octylphenol monoethoxylate (OP1EO), and total para-nonylphenol (NP) were detected. Results of reconnaissance studies may help regulators who set water-quality standards begin to prioritize which OWCs to focus upon for given categories of water use.

  11. Cultured fungal associates from the deep-sea coral Lophelia pertusa

    NASA Astrophysics Data System (ADS)

    Galkiewicz, Julia P.; Stellick, Sarah H.; Gray, Michael A.; Kellogg, Christina A.

    2012-09-01

    The cold-water coral Lophelia pertusa provides important habitat to many deep-sea fishes and invertebrates. Studies of the microbial taxa associated with L. pertusa thus far have focused on bacteria, neglecting the microeukaryotic members. This is the first study to culture fungi from living L. pertusa and to investigate carbon source utilization by the fungal associates. Twenty-seven fungal isolates from seven families, including both filamentous and yeast morphotypes, were cultured from healthy L. pertusa colonies collected from the northern Gulf of Mexico, the West Florida Slope, and the western Atlantic Ocean off the Florida coast. Isolates from different sites were phylogenetically closely related, indicating these genera are widely distributed in association with L. pertusa. Biolog™ Filamentous Fungi microtiter plates were employed to determine the functional capacity of a subset of isolates to grow on varied carbon sources. While four of the isolates exhibited no growth on any provided carbon source, the rest (n=10) grew on 8.3-66.7% of carbon sources available. Carbohydrates, carboxylic acids, and amino acids were the most commonly metabolized carbon sources, with overlap between the carbon sources used and amino acids found in L. pertusa mucus. This study represents the first attempt to characterize a microeukaryotic group associated with L. pertusa. However, the functional role of fungi within the coral holobiont remains unclear.

  12. The XXL Survey. II. The bright cluster sample: catalogue and luminosity function

    NASA Astrophysics Data System (ADS)

    Pacaud, F.; Clerc, N.; Giles, P. A.; Adami, C.; Sadibekova, T.; Pierre, M.; Maughan, B. J.; Lieu, M.; Le Fèvre, J. P.; Alis, S.; Altieri, B.; Ardila, F.; Baldry, I.; Benoist, C.; Birkinshaw, M.; Chiappetti, L.; Démoclès, J.; Eckert, D.; Evrard, A. E.; Faccioli, L.; Gastaldello, F.; Guennou, L.; Horellou, C.; Iovino, A.; Koulouridis, E.; Le Brun, V.; Lidman, C.; Liske, J.; Maurogordato, S.; Menanteau, F.; Owers, M.; Poggianti, B.; Pomarède, D.; Pompei, E.; Ponman, T. J.; Rapetti, D.; Reiprich, T. H.; Smith, G. P.; Tuffs, R.; Valageas, P.; Valtchanov, I.; Willis, J. P.; Ziparo, F.

    2016-06-01

    Context. The XXL Survey is the largest survey carried out by the XMM-Newton satellite and covers a total area of 50 square degrees distributed over two fields. It primarily aims at investigating the large-scale structures of the Universe using the distribution of galaxy clusters and active galactic nuclei as tracers of the matter distribution. The survey will ultimately uncover several hundreds of galaxy clusters out to a redshift of ~2 at a sensitivity of ~10^-14 erg s^-1 cm^-2 in the [0.5-2] keV band. Aims: This article presents the XXL bright cluster sample, a subsample of 100 galaxy clusters selected from the full XXL catalogue by setting a lower limit of 3 × 10^-14 erg s^-1 cm^-2 on the source flux within a 1' aperture. Methods: The selection function was estimated using a mixture of Monte Carlo simulations and analytical recipes that closely reproduce the source selection process. An extensive spectroscopic follow-up provided redshifts for 97 of the 100 clusters. We derived accurate X-ray parameters for all the sources. Scaling relations were self-consistently derived from the same sample in other publications of the series. On this basis, we study the number density, luminosity function, and spatial distribution of the sample. Results: The bright cluster sample consists of systems with masses between M500 = 7 × 10^13 and 3 × 10^14 M⊙, mostly located between z = 0.1 and 0.5. The observed sky density of clusters is slightly below the predictions from the WMAP9 model, and significantly below the prediction from the Planck 2015 cosmology. In general, within the current uncertainties of the cluster mass calibration, models with higher values of σ8 and/or ΩM appear more difficult to accommodate. We provide tight constraints on the cluster differential luminosity function and find no hint of evolution out to z ~ 1. We also find strong evidence for the presence of large-scale structures in the XXL bright cluster sample and identify five new superclusters.
Based on observations obtained with XMM-Newton, an ESA science mission with instruments and contributions directly funded by ESA Member States and NASA. Based on observations made with ESO Telescopes at the La Silla and Paranal Observatories under programme ID 089.A-0666 and LP191.A-0268. The Master Catalogue is available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/592/A2

  13. Selective structural source identification

    NASA Astrophysics Data System (ADS)

    Totaro, Nicolas

    2018-04-01

    In the field of acoustic source reconstruction, the inverse Patch Transfer Function (iPTF) method has recently been proposed and has shown satisfactory results whatever the shape of the vibrating surface and whatever the acoustic environment. These two interesting features are due to the virtual acoustic volume concept underlying the iPTF methods. The aim of the present article is to show how this concept of virtual subsystem can be used in structures to reconstruct the applied force distribution. Virtual boundary conditions can be applied on a part of the structure, called the virtual testing structure, to identify the force distribution applied in that zone regardless of the presence of other sources outside the zone under consideration. In the present article, the applicability of the method is demonstrated only on planar structures. However, the final example shows how the method can be applied to a complex-shape planar structure with point-welded stiffeners, even in the tested zone. In that case, if the virtual testing structure includes the stiffeners, the identified force distribution only exhibits the positions of the external applied forces. If the virtual testing structure does not include the stiffeners, the identified force distribution makes it possible to localize the forces due to the coupling between the structure and the stiffeners through the welded points as well as those due to the external forces. This is why this approach is considered here as a selective structural source identification method. It is demonstrated that this approach clearly falls in the same framework as the Force Analysis Technique, the Virtual Fields Method, and the 2D spatial Fourier transform. Even though it has much in common with these methods, it has some interesting particularities, such as its low sensitivity to measurement noise.

  14. Redox potential distribution of an organic-rich contaminated site obtained by the inversion of self-potential data

    NASA Astrophysics Data System (ADS)

    Abbas, M.; Jardani, A.; Soueid Ahmed, A.; Revil, A.; Brigaud, L.; Bégassat, Ph.; Dupont, J. P.

    2017-11-01

    Mapping the redox potential of shallow aquifers impacted by hydrocarbon contaminant plumes is important for the characterization and remediation of such contaminated sites. The redox potential of groundwater is indicative of the biodegradation of hydrocarbons and is important in delineating the shapes of contaminant plumes. The self-potential method was used to reconstruct the redox potential of groundwater associated with an organic-rich contaminant plume in northern France. The self-potential technique is a passive technique consisting in recording the electrical potential distribution at the surface of the Earth. A self-potential map is essentially the sum of two contributions, one associated with groundwater flow referred to as the electrokinetic component, and one associated with redox potential anomalies referred to as the electroredox component (thermoelectric and diffusion potentials are generally negligible). A groundwater flow model was first used to remove the electrokinetic component from the observed self-potential data. Then, a residual self-potential map was obtained. The source current density generating the residual self-potential signals is assumed to be associated with the position of the water table, an interface characterized by a change in both the electrical conductivity and the redox potential. The source current density was obtained through an inverse problem by minimizing a cost function including a data misfit contribution and a regularizer. This inversion algorithm allows the determination of the vertical and horizontal components of the source current density taking into account the electrical conductivity distribution of the saturated and non-saturated zones obtained independently by electrical resistivity tomography. The redox potential distribution was finally determined from the inverted residual source current density. 
A redox map was successfully built and the estimated redox potential values correlated well with in-situ measurements.
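
    The inversion step described above minimizes a cost function combining a data misfit and a regularizer. A minimal Tikhonov-style sketch for a toy two-parameter linear problem (the operator G, data d, and damping value are hypothetical, not the authors' actual setup):

```python
# Hedged sketch of cost-function minimization with a data-misfit term plus
# a damping (Tikhonov) regularizer, for a toy 2-parameter linear problem:
# minimize ||G m - d||^2 + lam * ||m||^2 via the 2x2 normal equations
# (G^T G + lam I) m = G^T d.  G, d and lam below are hypothetical.
def tikhonov_2param(G, d, lam):
    a = sum(g[0] * g[0] for g in G) + lam
    b = sum(g[0] * g[1] for g in G)
    c = sum(g[1] * g[1] for g in G) + lam
    r0 = sum(g[0] * di for g, di in zip(G, d))
    r1 = sum(g[1] * di for g, di in zip(G, d))
    det = a * c - b * b
    return ((c * r0 - b * r1) / det, (a * r1 - b * r0) / det)

G = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]   # toy forward operator
d = [2.0, -1.0, 1.0]                       # noise-free data for m = (2, -1)
m = tikhonov_2param(G, d, lam=1e-6)        # recovers approximately (2, -1)
```

    The actual inversion solves for a spatially distributed source current density with many more unknowns, but the trade-off between fitting the residual self-potential data and keeping the model smooth follows this same pattern.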

  15. A Spectroscopic Study of Young Stellar Objects in the Serpens Cloud Core and NGC 1333

    NASA Astrophysics Data System (ADS)

    Winston, E.; Megeath, S. T.; Wolk, S. J.; Hernandez, J.; Gutermuth, R.; Muzerolle, J.; Hora, J. L.; Covey, K.; Allen, L. E.; Spitzbart, B.; Peterson, D.; Myers, P.; Fazio, G. G.

    2009-06-01

    We present spectral observations of 130 young stellar objects (YSOs) in the Serpens Cloud Core and NGC 1333 embedded clusters. The observations consist of near-IR spectra in the H and K bands from SpeX on the IRTF and far-red spectra (6000-9000 Å) from Hectospec on the Multi-Mirror Telescope. These YSOs were identified in previous Spitzer and Chandra observations, and the evolutionary classes of the YSOs were determined from the Spitzer mid-IR photometry. With these spectra we search for corroborating evidence for the pre-main-sequence nature of the objects, study the properties of the detected emission lines as a function of evolutionary class, and obtain spectral types for the observed YSOs. The temperatures implied by the spectral types are combined with luminosities determined from the near-IR photometry to construct Hertzsprung-Russell (H-R) diagrams for the clusters. By comparing the positions of the YSOs in the H-R diagrams with the pre-main-sequence tracks of Baraffe (1998), we determine the ages of the embedded sources and study the relative ages of the YSOs with and without optically thick circumstellar disks. The apparent isochronal ages of the YSOs in both clusters range from less than 1 Myr to 10 Myr, with most objects below 3 Myr. The observed distributions of ages for the Class II and Class III objects are statistically indistinguishable. We examine the spatial distribution and extinction of the YSOs as a function of their isochronal ages. We find the sources <3 Myr to be concentrated in the molecular cloud gas, while the older sources are spatially dispersed and are not deeply embedded. Nonetheless, the sources with isochronal ages >3 Myr show all the characteristics of YSOs in their spectra, their IR spectral energy distributions, and their X-ray emission; we find no evidence that they are contaminating background giants or foreground dwarfs. 
However, we find no corresponding decrease in the fraction of sources with infrared excess with isochronal age; this suggests that the older isochronal ages may not measure the true age of the >3 Myr YSOs. Thus, the nature of the apparently older sources and their implications for cluster formation remain unresolved.

  16. An Improved Statistical Point-source Foreground Model for the Epoch of Reionization

    NASA Astrophysics Data System (ADS)

    Murray, S. G.; Trott, C. M.; Jordan, C. H.

    2017-08-01

    We present a sophisticated statistical point-source foreground model for low-frequency radio Epoch of Reionization (EoR) experiments using the 21 cm neutral hydrogen emission line. Motivated by our understanding of the low-frequency radio sky, we enhance the realism of two model components compared with existing models: the source count distributions as a function of flux density and spatial position (source clustering), extending current formalisms for the foreground covariance of 2D power-spectral modes in 21 cm EoR experiments. The former we generalize to an arbitrarily broken power law, and the latter to an arbitrary isotropically correlated field. This paper presents expressions for the modified covariance under these extensions, and shows that for a more realistic source spatial distribution, extra covariance arises in the EoR window that was previously unaccounted for. Failure to include this contribution can yield bias in the final power-spectrum and under-estimate uncertainties, potentially leading to a false detection of signal. The extent of this effect is uncertain, owing to ignorance of physical model parameters, but we show that it is dependent on the relative abundance of faint sources, to the effect that our extension will become more important for future deep surveys. Finally, we show that under some parameter choices, ignoring source clustering can lead to false detections on large scales, due to both the induced bias and an artificial reduction in the estimated measurement uncertainty.
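
    The arbitrarily broken power-law generalization of the source counts can be illustrated with a small helper that keeps dN/dS continuous across an arbitrary number of breaks (the flux bins and slopes below are hypothetical, chosen only for illustration):

```python
def dnds(s, breaks, alphas, k0=1.0):
    """Differential source counts dN/dS for an arbitrarily broken power
    law: slope alphas[i] applies on [breaks[i], breaks[i+1]), with the
    normalisation propagated so dN/dS stays continuous at each break."""
    k = k0
    for i, alpha in enumerate(alphas):
        lo, hi = breaks[i], breaks[i + 1]
        if lo <= s < hi:
            return k * s ** (-alpha)
        if i + 1 < len(alphas):
            k *= hi ** (alphas[i + 1] - alpha)   # continuity at the break
    raise ValueError("flux density outside tabulated range")

breaks = [1e-3, 1e-1, 1e2]   # hypothetical flux-density bin edges (Jy)
alphas = [1.54, 2.5]         # illustrative faint- and bright-end slopes
left = dnds(0.0999999, breaks, alphas)
right = dnds(0.1000001, breaks, alphas)   # nearly equal: continuity holds
```

    Making the faint-end slope steeper raises the relative abundance of faint sources, the regime in which the paper argues its extra covariance term matters most.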

  17. Distributed watershed modeling of design storms to identify nonpoint source loading areas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Endreny, T.A.; Wood, E.F.

    1999-03-01

    Watershed areas that generate nonpoint source (NPS) polluted runoff need to be identified prior to the design of basin-wide water quality projects. Current watershed-scale NPS models lack a variable source area (VSA) hydrology routine, and are therefore unable to identify spatially dynamic runoff zones. The TOPLATS model used a watertable-driven VSA hydrology routine to identify runoff zones in a 17.5 km^2 agricultural watershed in central Oklahoma. Runoff areas were identified in a static modeling framework as a function of prestorm watertable depth and also in a dynamic modeling framework by simulating basin response to 2, 10, and 25 yr return period 6 h design storms. Variable source area expansion occurred throughout the duration of each 6 h storm and total runoff area increased with design storm intensity. Basin-average runoff rates of 1 mm h^-1 provided little insight into runoff extremes while the spatially distributed analysis identified saturation excess zones with runoff rates equaling effective precipitation. The intersection of agricultural landcover areas with these saturation excess runoff zones targeted the priority potential NPS runoff zones that should be validated with field visits. These intersected areas, labeled as potential NPS runoff zones, were mapped within the watershed to demonstrate spatial analysis options available in TOPLATS for managing complex distributions of watershed runoff. TOPLATS concepts in spatial saturation excess runoff modelling should be incorporated into NPS management models.
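
    The saturation-excess logic behind VSA expansion can be sketched simply: a cell starts producing runoff once cumulative storm precipitation exhausts its soil-moisture deficit, so the runoff-producing area grows through the storm. This is a hedged toy, not TOPLATS itself; all values below are hypothetical:

```python
def runoff_area_fraction(deficits, precip_rate, dt, n_steps):
    """Fraction of cells producing saturation-excess runoff at each step:
    a cell saturates once cumulative precipitation exceeds its deficit."""
    fractions, cumulative = [], 0.0
    for _ in range(n_steps):
        cumulative += precip_rate * dt
        saturated = sum(1 for d in deficits if d <= cumulative)
        fractions.append(saturated / len(deficits))
    return fractions

# Hypothetical soil-moisture deficits (mm) from a heterogeneous watertable
deficits = [0.5, 2.0, 5.0, 10.0, 25.0, 60.0]
# 5 mm/h rain over a 6 h design storm: the saturated area expands in time
frac = runoff_area_fraction(deficits, precip_rate=5.0, dt=1.0, n_steps=6)
```

    A more intense design storm saturates more cells within the same 6 h window, which is the behavior reported for the 2, 10, and 25 yr storms.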

  18. Comment on “Characterizing the population of pulsars in the Galactic bulge with the Fermi large area telescope” [arXiv:1705.00009v1]

    DOE PAGES

    Bartels, Richard

    2018-04-24

    Here, the Fermi-LAT Collaboration recently presented a new catalog of gamma-ray sources located within the 40° × 40° region around the Galactic Center (Ajello et al. 2017) -- the Second Fermi Inner Galaxy (2FIG) catalog. Utilizing this catalog, they analyzed models for the spatial distribution and luminosity function of sources with a pulsar-like gamma-ray spectrum. Ajello et al. 2017 v1 also claimed to detect, in addition to a disk-like population of pulsar-like sources, an approximately 7σ preference for an additional centrally concentrated population of pulsar-like sources, which they referred to as a "Galactic Bulge" population. Such a population would be of great interest, as it would support a pulsar interpretation of the gamma-ray excess that has long been observed in this region. In an effort to further explore the implications of this new source catalog, we attempted to reproduce the results presented by the Fermi-LAT Collaboration, but failed to do so. Mimicking as closely as possible the analysis techniques undertaken in Ajello et al. 2017, we instead find that our likelihood analysis favors a very different spatial distribution and luminosity function for these sources. Most notably, our results do not exhibit a strong preference for a "Galactic Bulge" population of pulsars. Furthermore, we find that masking the regions immediately surrounding each of the 2FIG pulsar candidates does not significantly impact the spectrum or intensity of the Galactic Center gamma-ray excess. Although these results refute the claim of strong evidence for a centrally concentrated pulsar population presented in Ajello et al. 2017, they neither rule out nor provide support for the possibility that the Galactic Center excess is generated by a population of low-luminosity and currently largely unobserved pulsars.

  19. Comment on “Characterizing the population of pulsars in the Galactic bulge with the Fermi large area telescope” [arXiv:1705.00009v1]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartels, Richard

    Here, the Fermi-LAT Collaboration recently presented a new catalog of gamma-ray sources located within the 40° × 40° region around the Galactic Center (Ajello et al. 2017) -- the Second Fermi Inner Galaxy (2FIG) catalog. Utilizing this catalog, they analyzed models for the spatial distribution and luminosity function of sources with a pulsar-like gamma-ray spectrum. Ajello et al. 2017 v1 also claimed to detect, in addition to a disk-like population of pulsar-like sources, an approximately 7σ preference for an additional centrally concentrated population of pulsar-like sources, which they referred to as a "Galactic Bulge" population. Such a population would be of great interest, as it would support a pulsar interpretation of the gamma-ray excess that has long been observed in this region. In an effort to further explore the implications of this new source catalog, we attempted to reproduce the results presented by the Fermi-LAT Collaboration, but failed to do so. Mimicking as closely as possible the analysis techniques undertaken in Ajello et al. 2017, we instead find that our likelihood analysis favors a very different spatial distribution and luminosity function for these sources. Most notably, our results do not exhibit a strong preference for a "Galactic Bulge" population of pulsars. Furthermore, we find that masking the regions immediately surrounding each of the 2FIG pulsar candidates does not significantly impact the spectrum or intensity of the Galactic Center gamma-ray excess. Although these results refute the claim of strong evidence for a centrally concentrated pulsar population presented in Ajello et al. 2017, they neither rule out nor provide support for the possibility that the Galactic Center excess is generated by a population of low-luminosity and currently largely unobserved pulsars.

  20. Comment on "Characterizing the population of pulsars in the Galactic bulge with the Fermi large area telescope" [arXiv:1705.00009v1]

    NASA Astrophysics Data System (ADS)

    Bartels, Richard; Hooper, Dan; Linden, Tim; Mishra-Sharma, Siddharth; Rodd, Nicholas L.; Safdi, Benjamin R.; Slatyer, Tracy R.

    2018-06-01

    The Fermi-LAT Collaboration recently presented a new catalog of gamma-ray sources located within the 40° × 40° region around the Galactic Center (Ajello et al. 2017) - the Second Fermi Inner Galaxy (2FIG) catalog. Utilizing this catalog, they analyzed models for the spatial distribution and luminosity function of sources with a pulsar-like gamma-ray spectrum. Ajello et al. (2017) v1 also claimed to detect, in addition to a disk-like population of pulsar-like sources, an approximately 7σ preference for an additional centrally concentrated population of pulsar-like sources, which they referred to as a "Galactic Bulge" population. Such a population would be of great interest, as it would support a pulsar interpretation of the gamma-ray excess that has long been observed in this region. In an effort to further explore the implications of this new source catalog, we attempted to reproduce the results presented by the Fermi-LAT Collaboration, but failed to do so. Mimicking as closely as possible the analysis techniques undertaken in Ajello et al. (2017), we instead find that our likelihood analysis favors a very different spatial distribution and luminosity function for these sources. Most notably, our results do not exhibit a strong preference for a "Galactic Bulge" population of pulsars. Furthermore, we find that masking the regions immediately surrounding each of the 2FIG pulsar candidates does not significantly impact the spectrum or intensity of the Galactic Center gamma-ray excess. Although these results refute the claim of strong evidence for a centrally concentrated pulsar population presented in Ajello et al. (2017), they neither rule out nor provide support for the possibility that the Galactic Center excess is generated by a population of low-luminosity and currently largely unobserved pulsars. In a spirit of maximal openness and transparency, we have made our analysis code available at https://github.com/bsafdi/GCE-2FIG.

  1. Investigation of the Photon Strength Function in 130 Te

    NASA Astrophysics Data System (ADS)

    Isaak, J.; Beller, J.; Fiori, E.; Glorius, J.; Krtička, M.; Löher, B.; Pietralla, N.; Romig, C.; Rusev, G.; Savran, D.; Scheck, M.; Silva, J.; Sonnabend, K.; Tonchev, A. P.; Tornow, W.; Weller, H. R.; Zweidinger, M.

    2016-01-01

    The dipole strength distribution of 130Te was investigated with the method of Nuclear Resonance Fluorescence using continuous-energy bremsstrahlung at the Darmstadt High Intensity Photon Setup and quasi-monoenergetic photons at the High Intensity γ-Ray Source. The average decay properties were determined between 5.50 and 8.15 MeV and compared to simulations within the statistical model.

  2. Noisy scale-free networks

    NASA Astrophysics Data System (ADS)

    Scholz, Jan; Dejori, Mathäus; Stetter, Martin; Greiner, Martin

    2005-05-01

    The impact of observational noise on the analysis of scale-free networks is studied. Various noise sources are modeled as random link removal, random link exchange and random link addition. Emphasis is on the resulting modifications for the node-degree distribution and for a functional ranking based on betweenness centrality. The implications for gene-expression networks estimated for childhood acute lymphoblastic leukemia are discussed.
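
    The random-link-removal noise model is straightforward to sketch against a preferential-attachment (scale-free) graph; this is an illustrative stdlib-only toy, not the analysis pipeline used in the paper:

```python
import random

random.seed(1)  # deterministic toy example

def preferential_attachment(n, m=2):
    """Grow a scale-free graph: each new node links to m distinct targets
    chosen with probability proportional to current node degree."""
    edges = {(0, 1)}
    stubs = [0, 1]                     # degree-weighted endpoint list
    for new in range(2, n):
        targets = set()
        while len(targets) < min(m, new):
            targets.add(random.choice(stubs))
        for t in targets:
            edges.add((t, new))
            stubs += [t, new]
    return edges

def remove_links(edges, p):
    """Observational noise modeled as independent random link removal."""
    return {e for e in edges if random.random() > p}

g = preferential_attachment(200)
noisy = remove_links(g, p=0.1)         # roughly 10% of links lost to noise
```

    Comparing the degree distribution (or a betweenness ranking) of `g` and `noisy` shows how robust such summaries are to missing links; link exchange and link addition can be modeled analogously.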

  3. Flexible configuration-interaction shell-model many-body solver

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Calvin W.; Ormand, W. Erich; McElvain, Kenneth S.

    BIGSTICK is a flexible configuration-interaction open-source shell-model code for the many-fermion problem in a shell-model (occupation representation) framework. BIGSTICK can generate energy spectra, static and transition one-body densities, and expectation values of scalar operators. Using the built-in Lanczos algorithm, one can compute transition probability distributions and decompose wave functions into components defined by group theory.

  4. Infrared spectra and interstellar reddening of anonymous type II OH/IR stars

    NASA Technical Reports Server (NTRS)

    Gehrz, R. D.; Hackwell, J. A.; Grasdalen, G. L.; Kleinmann, S. G.; Mason, S.

    1985-01-01

    Infrared positions and multicolor infrared photometry for a sample of type II OH/IR stars are reported. The infrared colors and 11.4-micron silicate optical depths of the confirmed sources in this group increase as a function of distance, suggesting that interstellar reddening must be taken into account in assessing their infrared energy distributions and physical characteristics.

  5. Distributed stimulation increases force elicited with functional electrical stimulation

    NASA Astrophysics Data System (ADS)

    Buckmire, Alie J.; Lockwood, Danielle R.; Doane, Cynthia J.; Fuglevand, Andrew J.

    2018-04-01

    Objective. The maximum muscle forces that can be evoked using functional electrical stimulation (FES) are relatively modest. The reason for this weakness is not fully understood but could be partly related to the widespread distribution of motor nerve branches within muscle. As such, a single stimulating electrode (as is conventionally used) may be incapable of activating the entire array of motor axons supplying a muscle. Therefore, the objective of this study was to determine whether stimulating a muscle with more than one source of current could boost force above that achievable with a single source. Approach. We compared the maximum isometric forces that could be evoked in the anterior deltoid of anesthetized monkeys using one or two intramuscular electrodes. We also evaluated whether temporally interleaved stimulation between two electrodes might reduce fatigue during prolonged activity compared to synchronized stimulation through two electrodes. Main results. We found that dual electrode stimulation consistently produced greater force (~50% greater on average) than maximal stimulation with single electrodes. No differences, however, were found in the fatigue responses using interleaved versus synchronized stimulation. Significance. It seems reasonable to consider using multi-electrode stimulation to augment the force-generating capacity of muscles and thereby increase the utility of FES systems.

  6. The binaural performance of a cross-talk cancellation system with matched or mismatched setup and playback acoustics.

    PubMed

    Akeroyd, Michael A; Chambers, John; Bullock, David; Palmer, Alan R; Summerfield, A Quentin; Nelson, Philip A; Gatehouse, Stuart

    2007-02-01

    Cross-talk cancellation is a method for synthesizing virtual auditory space using loudspeakers. One implementation is the "Optimal Source Distribution" technique [T. Takeuchi and P. Nelson, J. Acoust. Soc. Am. 112, 2786-2797 (2002)], in which the audio bandwidth is split across three pairs of loudspeakers, placed at azimuths of +/-90 degrees, +/-15 degrees, and +/-3 degrees, conveying low, mid, and high frequencies, respectively. A computational simulation of this system was developed and verified against measurements made on an acoustic system using a manikin. Both the acoustic system and the simulation gave a wideband average cancellation of almost 25 dB. The simulation showed that when there was a mismatch between the head-related transfer functions used to set up the system and those of the final listener, the cancellation was reduced to an average of 13 dB. Moreover, in this case the binaural interaural time differences and interaural level differences delivered by the simulation of the optimal source distribution (OSD) system often differed from the target values. It is concluded that only when the OSD system is set up with "matched" head-related transfer functions can it deliver accurate binaural cues.
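
    At a single frequency, the core of a cross-talk canceller is inversion of the 2x2 loudspeaker-to-ear transfer matrix: cancellation is essentially perfect when the setup and playback matrices match, and degrades under mismatch, as reported above. A hedged single-frequency sketch (the transfer-matrix entries are hypothetical, not measured HRTFs):

```python
import math

# Hedged sketch: the canceller is the inverse of the 2x2 loudspeaker-to-ear
# transfer matrix measured at setup time; playing it back through a
# different (mismatched) matrix leaves residual crosstalk.
def inv2(c):
    (a, b), (d, e) = c
    det = a * e - b * d
    return [[e / det, -b / det], [-d / det, a / det]]

def matmul2(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def cancellation_db(plant, design):
    """Crosstalk suppression when filters designed for `design` are
    played back through `plant` (the listener's true acoustics)."""
    t = matmul2(plant, inv2(design))
    direct, cross = abs(t[0][0]), abs(t[0][1])
    return 20 * math.log10(direct / cross) if cross else math.inf

design = [[1.0 + 0j, 0.4 + 0.1j], [0.4 - 0.1j, 1.0 + 0j]]
matched = cancellation_db(design, design)        # essentially perfect
mismatch = [[1.0 + 0j, 0.45 + 0.1j], [0.35 - 0.1j, 1.0 + 0j]]
degraded = cancellation_db(mismatch, design)     # finite, much lower
```

    In the OSD system this inversion is carried out per frequency band across the three loudspeaker pairs; the matched/mismatched contrast above is the single-frequency analogue of the 25 dB versus 13 dB average cancellation reported.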

  7. Magnetotail Structure and its Internal Particle Dynamics During Northward IMF

    NASA Technical Reports Server (NTRS)

    Ashour-Abdalla, M.; Raeder, J.; El-Alaoui, M.; Peroomian, V.

    1998-01-01

    This study uses Global magnetohydrodynamic (MHD) simulations driven by solar wind data along with Geotail observations of the magnetotail to investigate the magnetotail's response to changes in the interplanetary magnetic field (IMF); observed events used in the study occurred on March 29, 1993 and February 9, 1995. For events from February 9, 1995, we also use the time-dependent MHD magnetic and electric fields and the large-scale kinetic (LSK) technique to examine changes in the Geotail ion velocity distributions. Our MHD simulation shows that on March 29, 1993, during a long period of steady northward IMF, the tail was strongly squeezed and twisted around the Sun-Earth axis in response to variations in the IMF B(sub y) component. The mixed (magnetotail and magnetosheath) plasma observed by Geotail results from the spacecraft's close proximity to the magnetopause and its frequent crossings of this boundary. In our second example (February 9, 1995) the IMF was also steady and northward, and in addition had a significant B(sub y) component. Again the magnetotail was twisted, but not as strongly as on March 29, 1993. The Geotail spacecraft, located approximately 30 R(sub E) downtail, observed highly structured ion distribution functions. Using the time-dependent LSK technique, we investigate the ion sources and acceleration mechanisms affecting the Geotail distribution functions during this interval. At 1325 UT most ions are found to enter the magnetosphere on the dusk side earthward of Geotail with a secondary source on the dawn side in the low latitude boundary layer (LLBL). A small percentage come from the ionosphere. By 1347 UT the majority of the ions come from the dawn side LLBL. The distribution functions measured during the later time interval are much warmer, mainly because particles reaching the spacecraft from the dawn side are affected by nonadiabatic scattering and acceleration in the neutral sheet.

  8. Magnetotail Structure and its Internal Particle Dynamics During Northward IMF

    NASA Technical Reports Server (NTRS)

    Ashour-Abdalla, M.; El-Alaoui, M.; Peroomian, V.

    1998-01-01

    This study uses Global magnetohydrodynamic (MHD) simulations driven by solar wind data along with Geotail observations of the magnetotail to investigate the magnetotail's response to changes in the interplanetary magnetic field (IMF); observed events used in the study occurred on March 29, 1993 and February 9, 1995. For events from February 9, 1995, we also use the time-dependent MHD magnetic and electric fields and the large-scale kinetic (LSK) technique to examine changes in the Geotail ion velocity distributions. Our MHD simulation shows that on March 29, 1993, during a long period of steady northward IMF, the tail was strongly squeezed and twisted around the Sun-Earth axis in response to variations in the IMF B(sub y) component. The mixed (magnetotail and magnetosheath) plasma observed by Geotail results from the spacecraft's close proximity to the magnetopause and its frequent crossings of this boundary. In our second example (February 9, 1995) the IMF was also steady and northward, and in addition had a significant B(sub y) component. Again the magnetotail was twisted, but not as strongly as on March 29, 1993. The Geotail spacecraft, located approximately 30 R(sub E) downtail, observed highly structured ion distribution functions. Using the time-dependent LSK technique, we investigate the ion sources and acceleration mechanisms affecting the Geotail distribution functions during this interval. At 1325 UT most ions are found to enter the magnetosphere on the dusk side earthward of Geotail with a secondary source on the dawn side in the low latitude boundary layer (LLBL). A small percentage come from the ionosphere. By 1347 UT the majority of the ions come from the dawn side LLBL. The distribution functions measured during the later time interval are much warmer, mainly because particles reaching the spacecraft from the dawnside are affected by nonadiabatic scattering and acceleration in the neutral sheet.

  9. A High-Speed, Real-Time Visualization and State Estimation Platform for Monitoring and Control of Electric Distribution Systems: Implementation and Field Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lundstrom, Blake; Gotseff, Peter; Giraldez, Julieta

    Continued deployment of renewable and distributed energy resources is fundamentally changing the way that electric distribution systems are controlled and operated; more sophisticated active system control and greater situational awareness are needed. Real-time measurements and distribution system state estimation (DSSE) techniques enable more sophisticated system control and, when combined with visualization applications, greater situational awareness. This paper presents a novel demonstration of a high-speed, real-time DSSE platform and related control and visualization functionalities, implemented using existing open-source software and distribution system monitoring hardware. Live scrolling strip charts of meter data and intuitive annotated map visualizations of the entire state (obtained via DSSE) of a real-world distribution circuit are shown. The DSSE implementation is validated to demonstrate provision of accurate voltage data. This platform allows for enhanced control and situational awareness using only a minimum quantity of distribution system measurement units and modest data and software infrastructure.
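
    The paper does not spell out its estimator, but DSSE is commonly posed as weighted least squares over redundant measurements. A minimal linear sketch (the matrices and meter values below are illustrative assumptions, not the platform's actual model):

```python
import numpy as np

def wls_estimate(H, z, sigma):
    """Weighted least squares: minimize (z - Hx)' W (z - Hx),
    with W the inverse measurement covariance (diagonal here)."""
    W = np.diag(1.0 / np.asarray(sigma) ** 2)
    G = H.T @ W @ H                      # gain matrix
    return np.linalg.solve(G, H.T @ W @ z)

# Toy two-state example with one redundant difference measurement.
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, -1.0]])
z = np.array([1.02, 0.98, 0.05])         # measured p.u. voltages / difference
sigma = np.array([0.01, 0.01, 0.02])     # meter standard deviations
x_hat = wls_estimate(H, z, sigma)
```

    The redundant third measurement pulls the two state estimates slightly toward consistency with the measured difference.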

  10. Turning Noise into Signal: Utilizing Impressed Pipeline Currents for EM Exploration

    NASA Astrophysics Data System (ADS)

    Lindau, Tobias; Becken, Michael

    2017-04-01

    Impressed Current Cathodic Protection (ICCP) systems are extensively used to protect central Europe's dense network of oil, gas and water pipelines against destruction by electrochemical corrosion. While ICCP systems usually provide protection by injecting a DC current into the pipeline, mandatory pipeline integrity surveys demand periodic switching of the current. Consequently, the resulting time-varying pipe currents induce secondary electric and magnetic fields in the surrounding earth. While these fields are usually considered unwanted cultural noise in electromagnetic exploration, this work aims at utilizing the fields generated by the ICCP system to determine the electrical resistivity of the subsurface. The fundamental period of the switching cycles typically amounts to 15 seconds in Germany and thereby roughly corresponds to periods used in controlled source EM (CSEM) applications. For detailed studies we chose an approximately 30 km long pipeline segment near Herford, Germany as a test site. The segment is located close to the southern margin of the Lower Saxony Basin (LSB) and is part of a larger gas pipeline composed of multiple segments. The current injected into the pipeline segment originates in a rectified 50 Hz AC signal which is periodically switched on and off. In contrast to the dipole sources usually used in CSEM surveys, the current distribution along the pipeline is unknown and expected to be non-uniform due to coating defects that cause current to leak into the surrounding soil. However, an accurate current distribution is needed to model the fields generated by the pipeline source. We measured the magnetic fields at several locations above the pipeline and used the Biot-Savart law to estimate the current's decay function. The resulting frequency-dependent current distribution shows a current decay away from the injection point as well as a frequency-dependent phase shift which increases with distance from the injection point. Electric field data were recorded at 45 stations located in an area of about 60 square kilometers in the vicinity of the pipeline. Additionally, the injected source current was recorded directly at the injection point. Transfer functions between the local electric fields and the injected source current are estimated for frequencies ranging from 0.03 Hz to 15 Hz using robust time series processing techniques. The resulting transfer functions are inverted for a 3D conductivity model of the subsurface using an elaborate pipeline model. We interpret the model with regard to the local geologic setting, demonstrating the method's capability to image the subsurface.
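
    As a sanity check on the inversion of magnetic readings for pipe current, the azimuthal field of a long straight current at perpendicular distance r is B = μ0·I/(2πr), so a magnetometer reading above the pipe yields a local current estimate. This is an infinite-wire simplification of the full Biot-Savart integration the study performs:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def current_from_field(B, r):
    """Infinite straight-wire approximation: I = 2*pi*r*B / mu0."""
    return 2.0 * np.pi * r * B / MU0

# Synthetic round trip: a 2 A pipe current observed 10 m away.
I_true, r = 2.0, 10.0
B = MU0 * I_true / (2.0 * np.pi * r)   # field the wire would produce
I_est = current_from_field(B, r)
```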

  11. scarlet: Source separation in multi-band images by Constrained Matrix Factorization

    NASA Astrophysics Data System (ADS)

    Melchior, Peter; Moolekamp, Fred; Jerdee, Maximilian; Armstrong, Robert; Sun, Ai-Lei; Bosch, James; Lupton, Robert

    2018-03-01

    SCARLET performs source separation (aka "deblending") on multi-band images. It is geared towards optical astronomy, where scenes are composed of stars and galaxies, but it is straightforward to apply it to other imaging data. Separation is achieved through a constrained matrix factorization, which models each source with a Spectral Energy Distribution (SED) and a non-parametric morphology, or multiple such components per source. The code performs forced photometry (with PSF matching if needed) using an optimal weight function given by the signal-to-noise weighted morphology across bands. The approach works well if the sources in the scene have different colors and can be further strengthened by imposing various additional constraints/priors on each source. Because of its generic utility, this package provides a stand-alone implementation that contains the core components of the source separation algorithm. However, the development of this package is part of the LSST Science Pipeline; the meas_deblender package contains a wrapper to implement the algorithms here for the LSST stack.
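
    scarlet itself uses proximal-gradient optimization with nonnegativity, monotonicity and symmetry constraints; the core factorization idea, a multi-band scene expressed as per-source SEDs times morphologies, can be sketched with plain multiplicative-update NMF (a simplified stand-in, not scarlet's algorithm):

```python
import numpy as np

def nmf(Y, k, n_iter=500, seed=0):
    """Factor a nonnegative scene Y (bands x pixels) as A @ S, where each
    column of A is a source SED and each row of S a source morphology."""
    rng = np.random.default_rng(seed)
    b, p = Y.shape
    A = rng.random((b, k)) + 0.1
    S = rng.random((k, p)) + 0.1
    eps = 1e-12
    for _ in range(n_iter):  # Lee-Seung multiplicative updates
        S *= (A.T @ Y) / (A.T @ A @ S + eps)
        A *= (Y @ S.T) / (A @ S @ S.T + eps)
    return A, S

# Toy scene: two blended "sources" with distinct colors in three bands.
A_true = np.array([[1.0, 0.1], [0.5, 0.5], [0.1, 1.0]])
S_true = np.array([[1.0, 2.0, 0.0, 0.0], [0.0, 0.0, 3.0, 1.0]])
Y = A_true @ S_true
A, S = nmf(Y, k=2)
resid = np.linalg.norm(Y - A @ S) / np.linalg.norm(Y)
```

    The constraints scarlet adds are what make the factorization identifiable for real scenes; unconstrained NMF as above only recovers the product, not necessarily the individual sources.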

  12. A probabilistic approach for the estimation of earthquake source parameters from spectral inversion

    NASA Astrophysics Data System (ADS)

    Supino, M.; Festa, G.; Zollo, A.

    2017-12-01

    The amplitude spectrum of a seismic signal related to an earthquake source carries information about the size of the rupture, and the moment, stress and energy release. Furthermore, it can be used to characterize the Green's function of the medium crossed by the seismic waves. We describe the earthquake amplitude spectrum assuming a generalized Brune (1970) source model, and direct P- and S-waves propagating in a layered velocity model characterized by a frequency-independent Q attenuation factor. The observed displacement spectrum depends on three source parameters: the seismic moment (through the low-frequency spectral level), the corner frequency (a proxy for the fault length) and the high-frequency decay parameter. These parameters are strongly correlated with each other and with the quality factor Q; a rigorous estimation of the associated uncertainties and parameter resolution is thus needed to obtain reliable estimates. In this work, the uncertainties are characterized by adopting a probabilistic approach to parameter estimation. Assuming an L2-norm based misfit function, we perform a global exploration of the parameter space to find the absolute minimum of the cost function, and then explore the joint a posteriori probability density function around this minimum to extract the correlation matrix of the parameters. The global exploration relies on building a Markov chain in the parameter space and on combining a deterministic minimization with a random exploration of the space (basin-hopping technique). The joint pdf is built from the misfit function using the maximum likelihood principle and assuming a Gaussian-like distribution of the parameters. It is then computed on a grid centered at the global minimum of the cost function. The numerical integration of the pdf finally provides the mean, variance and correlation matrix associated with the set of best-fit parameters describing the model. Synthetic tests are performed to investigate the robustness of the method and uncertainty propagation from the data space to the parameter space. Finally, the method is applied to characterize the source parameters of the earthquakes occurring during the 2016-2017 Central Italy sequence, with the goal of investigating source parameter scaling with magnitude.
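
    A minimal sketch of the global-exploration step, assuming a Brune-type spectrum u(f) = Ω0 / (1 + (f/fc)^γ) with the Q propagation term omitted and an L2 misfit in log amplitude; the frequency band and parameter values are illustrative, not the paper's data:

```python
import numpy as np
from scipy.optimize import basinhopping

def brune(f, omega0, fc, gamma):
    """Generalized Brune spectrum: flat level omega0, corner fc, HF decay gamma."""
    return omega0 / (1.0 + (f / fc) ** gamma)

f = np.logspace(-1, 1.5, 80)                   # 0.1-31.6 Hz band
obs = brune(f, 3e-4, 2.0, 2.0)                 # noise-free synthetic spectrum

def misfit(p):                                 # L2 misfit in log amplitude;
    omega0, fc, gamma = np.exp(p)              # work in log parameters
    return float(np.sum((np.log(obs) - np.log(brune(f, omega0, fc, gamma))) ** 2))

res = basinhopping(misfit, x0=np.log([1e-3, 1.0, 2.5]), niter=25, seed=1,
                   minimizer_kwargs={"method": "Nelder-Mead"})
omega0_est, fc_est, gamma_est = np.exp(res.x)
```

    In the paper's workflow the a posteriori pdf is then evaluated on a grid around `res.x` to obtain variances and the correlation matrix.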

  13. Evaluation of interpolation methods for TG-43 dosimetric parameters based on comparison with Monte Carlo data for high-energy brachytherapy sources.

    PubMed

    Pujades-Claumarchirant, Ma Carmen; Granero, Domingo; Perez-Calatayud, Jose; Ballester, Facundo; Melhus, Christopher; Rivard, Mark

    2010-03-01

    The aim of this work was to determine dose distributions for high-energy brachytherapy sources at spatial locations not included in the radial dose function g_L(r) and 2D anisotropy function F(r,θ) table entries for radial distance r and polar angle θ. The objectives of this study are as follows: 1) to evaluate interpolation methods in order to accurately derive g_L(r) and F(r,θ) from the reported data; 2) to determine the minimum number of entries in g_L(r) and F(r,θ) that allow reproduction of dose distributions with sufficient accuracy. Four high-energy photon-emitting brachytherapy sources were studied: 60Co model Co0.A86, 137Cs model CSM-3, 192Ir model Ir2.A85-2, and a hypothetical 169Yb model. The mesh used for r was: 0.25, 0.5, 0.75, 1, 1.5, 2-8 (integer steps) and 10 cm. Four different angular steps were evaluated for F(r,θ): 1°, 2°, 5° and 10°. Linear-linear and logarithmic-linear interpolation was evaluated for g_L(r). Linear-linear interpolation was used to obtain F(r,θ) with a resolution of 0.05 cm and 1°. Results were compared with values obtained from Monte Carlo (MC) calculations for the four sources with the same grid. Linear interpolation of g_L(r) produced differences ≤ 0.5% compared to MC for all four sources. Bilinear interpolation of F(r,θ) using 1° and 2° angular steps resulted in agreement ≤ 0.5% with MC for 60Co, 192Ir, and 169Yb, while 137Cs agreement was ≤ 1.5% for θ < 15°. The radial mesh studied was adequate for interpolating g_L(r) for high-energy brachytherapy sources, and was similar to commonly found examples in the published literature. For F(r,θ) close to the source longitudinal axis, polar angle step sizes of 1°-2° were sufficient to provide 2% accuracy for all sources.
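
    The two interpolation schemes compared here are straightforward to reproduce. The mesh below matches the study's radial grid, while the g_L(r) values are a hypothetical smooth curve for illustration, not data for any real source:

```python
import numpy as np

# Radial mesh from the study (cm).
r_mesh = np.array([0.25, 0.5, 0.75, 1, 1.5, 2, 3, 4, 5, 6, 7, 8, 10])
# Hypothetical smooth radial dose function values on the mesh.
gL = np.exp(-0.012 * (r_mesh - 1.0)) * (1.0 - 0.002 * r_mesh ** 2)

def interp_linear(r):
    """Linear-linear interpolation of g_L(r)."""
    return np.interp(r, r_mesh, gL)

def interp_loglinear(r):
    """Logarithmic-linear: linear in r, log in g_L(r)."""
    return np.exp(np.interp(r, r_mesh, np.log(gL)))

g_lin = interp_linear(2.5)
g_log = interp_loglinear(2.5)
```

    For a slowly varying g_L(r) the two schemes agree closely, which is consistent with the sub-0.5% differences the study reports on this mesh.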

  14. The research of distributed interactive simulation based on HLA in coal mine industry inherent safety

    NASA Astrophysics Data System (ADS)

    Dou, Zhi-Wu

    2010-08-01

    To address the inherent safety problem facing the coal mining industry, this paper analyses the characteristics and applications of distributed interactive simulation based on the High Level Architecture (DIS/HLA) and proposes a new method for developing coal-mining inherent-safety distributed interactive simulations using HLA technology. After examining the function and structure of the system, a simple coal-mining inherent-safety model is built with HLA, the FOM and SOM are developed, and the mathematical models are presented. The results of the case study show that HLA plays an important role in developing distributed interactive simulations of complicated distributed systems and that the method is effective in solving the problem facing the coal mining industry. For the coal mining industry, the conclusions show that an HLA-based simulation system helps to identify hazard sources, to prepare countermeasures for accidents, and to improve the level of management.

  15. Evolution of Scientific and Technical Information Distribution

    NASA Technical Reports Server (NTRS)

    Esler, Sandra; Nelson, Michael L.

    1998-01-01

    World Wide Web (WWW) and related information technologies are transforming the distribution of scientific and technical information (STI). We examine 11 recent, functioning digital libraries focusing on the distribution of STI publications, including journal articles, conference papers, and technical reports. We introduce four main categories of digital library projects, classified by architecture (distributed vs. centralized) and contributor (traditional publisher vs. authoring individual/organization). Many digital library prototypes merely automate existing publishing practices or focus solely on digitizing the output of the publishing cycle, without sampling and capturing elements of the input. Still others do not consider the large body of "gray literature" for distribution. We address these deficiencies in the current model of STI exchange by suggesting methods for expanding the scope and target of digital libraries: focusing on a greater source of technical publications and using "buckets," an object-oriented construct for grouping logically related information objects, to include holdings other than technical publications.

  16. Gravitational potential wells and the cosmic bulk flow

    NASA Astrophysics Data System (ADS)

    Wang, Yuyu; Kumar, Abhinav; Feldman, Hume; Watkins, Richard

    2016-03-01

    The bulk flow is a volume average of the peculiar velocities and a useful probe of the mass distribution on large scales. The gravitational instability model views the bulk flow as a potential flow that obeys a Maxwellian distribution. We use two N-body simulations, the LasDamas Carmen and the Horizon Run, to calculate the bulk flows of various sized volumes in the simulation boxes. Once we have the bulk flow velocities as a function of scale, we investigate the mass and gravitational potential distribution around the volume. We find that matter densities can be asymmetrical and difficult to detect in real surveys; however, the gravitational potential and its gradient may provide better tools to investigate the underlying matter distribution. This study shows that bulk flows are indeed potential flows and thus provides information on the flow sources. We also show that bulk flow magnitudes follow a Maxwellian distribution on scales > 10 h⁻¹ Mpc.
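
    The Maxwellian claim is easy to verify numerically: if each Cartesian component of the bulk flow is an independent zero-mean Gaussian with spread σ, the magnitude is Maxwell-Boltzmann distributed with mean speed σ√(8/π). The velocity scale below is illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
sigma = 100.0                              # km/s, illustrative scale
v = rng.normal(0.0, sigma, size=(200_000, 3))   # Gaussian Cartesian components
speed = np.linalg.norm(v, axis=1)          # magnitudes -> Maxwellian

mean_speed = speed.mean()
expected = sigma * np.sqrt(8.0 / np.pi)    # Maxwell-Boltzmann mean speed
```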

  17. BATSE analysis techniques for probing the GRB spatial and luminosity distributions

    NASA Technical Reports Server (NTRS)

    Hakkila, Jon; Meegan, Charles A.

    1992-01-01

    The Burst And Transient Source Experiment (BATSE) has measured homogeneity and isotropy parameters from an increasingly large sample of observed gamma-ray bursts (GRBs), while also maintaining a summary of the way in which the sky has been sampled. Measurement of both is necessary for any statistical study of the BATSE data, as together they account for the most serious observational selection effects known in the study of GRBs: beam-smearing and inhomogeneous, anisotropic sky sampling. Knowledge of these effects is important to the analysis of GRB angular and intensity distributions. In addition to determining whether the bursts are local, it is hoped that analysis of such distributions will allow boundaries to be placed on the true GRB spatial distribution and luminosity function. The technique for studying GRB spatial and luminosity distributions is direct: results of BATSE analyses are compared to Monte Carlo models parameterized by a variety of spatial and luminosity characteristics.
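
    A classic homogeneity statistic compared against such Monte Carlo models is ⟨V/Vmax⟩, which equals 1/2 for standard candles distributed homogeneously in Euclidean space. A minimal Monte Carlo check of that baseline (not BATSE's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
# Homogeneous sources inside the unit sphere: P(r) proportional to r^2,
# so draw r = u^(1/3) from uniform u.
r = rng.random(100_000) ** (1.0 / 3.0)
# For standard candles the detection-limit radius is r_max = 1 here,
# and V/Vmax = (r / r_max)^3.
v_over_vmax = r ** 3
mean_ratio = v_over_vmax.mean()   # -> 1/2 for a homogeneous population
```

    Deviations of the measured ⟨V/Vmax⟩ below 1/2, as BATSE found, indicate a deficit of faint bursts relative to a homogeneous distribution.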

  18. On the power output of some idealized source configurations with one or more characteristic dimensions

    NASA Technical Reports Server (NTRS)

    Levine, H.

    1982-01-01

    The calculation of power output from a (finite) linear array of equidistant point sources is investigated, with allowance for a relative phase shift and particular focus on the circumstances of small/large individual source separation. A key role is played by the estimates found for a two-parameter definite integral that involves the Fejér kernel of order N, where N denotes a (positive) integer; these results also permit a quantitative accounting of the energy partition between the principal and secondary lobes of the array pattern. Continuously distributed sources along a finite line segment or an open-ended circular cylindrical shell are considered, and estimates for the relatively lower output in the latter configuration are made explicit when the shell radius is small compared to the wavelength. A systematic reduction of diverse integrals which characterize the energy output from specific line and strip sources is investigated.
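
    For N equidistant in-phase point sources with inter-element phase ψ, the power pattern takes the Fejér-kernel form sin²(Nψ/2)/sin²(ψ/2): its peak is N² on the principal lobe while its mean over ψ is N, which is the coherent-versus-incoherent contrast the energy-partition estimates quantify. A numerical sketch:

```python
import numpy as np

def array_factor(psi, N):
    """|sum_{n=0}^{N-1} exp(i*n*psi)|^2 = sin^2(N*psi/2) / sin^2(psi/2)."""
    psi = np.asarray(psi, dtype=float)
    den = np.sin(psi / 2.0)
    out = np.full(psi.shape, float(N * N))   # limiting value at psi = 0 (mod 2*pi)
    nz = np.abs(den) > 1e-12
    out[nz] = np.sin(N * psi[nz] / 2.0) ** 2 / den[nz] ** 2
    return out

N = 8
psi = np.linspace(-np.pi, np.pi, 4001)
af = array_factor(psi, N)
peak = af.max()       # N^2 = 64 on the principal lobe
mean = af.mean()      # ~ N, since the Fejer kernel integrates to 2*pi*N
```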

  19. EMITTING ELECTRONS AND SOURCE ACTIVITY IN MARKARIAN 501

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mankuzhiyil, Nijil; Ansoldi, Stefano; Persic, Massimo

    2012-07-10

    We study the variation of the broadband spectral energy distribution (SED) of the BL Lac object Mrk 501 as a function of source activity, from quiescent to flaring. Through χ²-minimization we model eight simultaneous SED data sets with a one-zone synchrotron self-Compton (SSC) model, and examine how model parameters vary with source activity. The emerging variability pattern of Mrk 501 is complex, with the Compton component arising from γ-e scatterings that sometimes are (mostly) Thomson and sometimes (mostly) extreme Klein-Nishina. This can be seen from the variation of the Compton to synchrotron peak distance according to source state. The underlying electron spectra are faint/soft in quiescent states and bright/hard in flaring states. A comparison with Mrk 421 suggests that the typical values of the SSC parameters are different in the two sources; however, in both jets the energy density is particle-dominated in all states.

  20. LOFAR-Boötes: properties of high- and low-excitation radio galaxies at 0.5 < z < 2.0

    NASA Astrophysics Data System (ADS)

    Williams, W. L.; Calistro Rivera, G.; Best, P. N.; Hardcastle, M. J.; Röttgering, H. J. A.; Duncan, K. J.; de Gasperin, F.; Jarvis, M. J.; Miley, G. K.; Mahony, E. K.; Morabito, L. K.; Nisbet, D. M.; Prandoni, I.; Smith, D. J. B.; Tasse, C.; White, G. J.

    2018-04-01

    This paper presents a study of the redshift evolution of radio-loud active galactic nuclei (AGN) as a function of the properties of their galaxy hosts in the Boötes field. To achieve this we match low-frequency radio sources from deep 150-MHz LOFAR (LOw Frequency ARray) observations to an I-band-selected catalogue of galaxies, for which we have derived photometric redshifts, stellar masses, and rest-frame colours. We present spectral energy distribution (SED) fitting to determine the mid-infrared AGN contribution for the radio sources and use this information to classify them as high- versus low-excitation radio galaxies (HERGs and LERGs) or star-forming galaxies. Based on these classifications, we construct luminosity functions for the separate redshift ranges going out to z = 2. From the matched radio-optical catalogues, we select a sub-sample of 624 high power (P150 MHz > 1025 W Hz-1) radio sources between 0.5 ≤ z < 2. For this sample, we study the fraction of galaxies hosting HERGs and LERGs as a function of stellar mass and host galaxy colour. The fraction of HERGs increases with redshift, as does the fraction of sources in galaxies with lower stellar masses. We find that the fraction of galaxies that host LERGs is a strong function of stellar mass as it is in the local Universe. This, combined with the strong negative evolution of the LERG luminosity functions over this redshift range, is consistent with LERGs being fuelled by hot gas in quiescent galaxies.

  1. Real-time strategy game training: emergence of a cognitive flexibility trait.

    PubMed

    Glass, Brian D; Maddox, W Todd; Love, Bradley C

    2013-01-01

    Training in action video games can increase the speed of perceptual processing. However, it is unknown whether video-game training can lead to broad-based changes in higher-level competencies such as cognitive flexibility, a core and neurally distributed component of cognition. To determine whether video gaming can enhance cognitive flexibility and, if so, why these changes occur, the current study compares two versions of a real-time strategy (RTS) game. Using a meta-analytic Bayes factor approach, we found that the gaming condition that emphasized maintenance and rapid switching between multiple information and action sources led to a large increase in cognitive flexibility as measured by a wide array of non-video gaming tasks. Theoretically, the results suggest that the distributed brain networks supporting cognitive flexibility can be tuned by engrossing video game experience that stresses maintenance and rapid manipulation of multiple information sources. Practically, these results suggest avenues for increasing cognitive function.

  2. Obtaining the phase in the star test using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Salazar Romero, Marcos A.; Vazquez-Montiel, Sergio; Cornejo-Rodriguez, Alejandro

    2004-10-01

    The star test is conceptually perhaps the most basic and simplest of all methods of testing image-forming optical systems: the irradiance distribution at the image of a point source (such as a star) is given by the Point Spread Function (PSF). The PSF is very sensitive to aberrations. One way to quantify the PSF is to measure the irradiance distribution in the image of the point source. On the other hand, if we know the aberrations introduced by the optical system and apply diffraction theory, we can calculate the PSF. In this work we propose a method to find the wavefront aberrations starting from the PSF, transforming the problem of fitting an aberration polynomial into an optimization problem solved with a genetic algorithm. We also show that this method is robust to noise introduced in recording the image. Results of the method are shown.

  3. Ultrabroadband direct detection of nonclassical photon statistics at telecom wavelength

    PubMed Central

    Wakui, Kentaro; Eto, Yujiro; Benichi, Hugo; Izumi, Shuro; Yanagida, Tetsufumi; Ema, Kazuhiro; Numata, Takayuki; Fukuda, Daiji; Takeoka, Masahiro; Sasaki, Masahide

    2014-01-01

    Broadband light sources play essential roles in diverse fields, such as high-capacity optical communications, optical coherence tomography, optical spectroscopy, and spectrograph calibration. Although a nonclassical state from spontaneous parametric down-conversion may serve as a quantum counterpart, its detection and characterization have been a challenging task. Here we demonstrate the direct detection of photon numbers of an ultrabroadband (110 nm FWHM) squeezed state in the telecom band centred at 1535 nm wavelength, using a superconducting transition-edge sensor. The observed photon-number distributions violate Klyshko's criterion for the nonclassicality. From the observed photon-number distribution, we evaluate the second- and third-order correlation functions, and characterize a multimode structure, which implies that several tens of orthonormal modes of squeezing exist in the single optical pulse. Our results and techniques open up a new possibility to generate and characterize frequency-multiplexed nonclassical light sources for quantum info-communications technology. PMID:24694515
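
    The correlation functions mentioned here follow directly from the measured photon-number distribution: g²(0) = ⟨n(n-1)⟩/⟨n⟩², and analogously g³(0) = ⟨n(n-1)(n-2)⟩/⟨n⟩³. A minimal sketch, checked against the textbook coherent (Poisson, g² = 1) and thermal (geometric, g² = 2) cases rather than the paper's measured distributions:

```python
import numpy as np
from math import exp, factorial

def g2_zero(pn):
    """Second-order correlation g2(0) = <n(n-1)> / <n>^2
    from a photon-number distribution pn[n] = P(n)."""
    n = np.arange(len(pn))
    mean_n = np.sum(n * pn)
    mean_nn1 = np.sum(n * (n - 1) * pn)
    return mean_nn1 / mean_n ** 2

mu = 1.5   # mean photon number, illustrative
poisson = np.array([exp(-mu) * mu ** n / factorial(n) for n in range(60)])
g2_coh = g2_zero(poisson)      # -> 1 for coherent light

nbar = 1.5
thermal = np.array([nbar ** n / (1 + nbar) ** (n + 1) for n in range(200)])
g2_th = g2_zero(thermal)       # -> 2 for single-mode thermal light
```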

  4. Ultrabroadband direct detection of nonclassical photon statistics at telecom wavelength.

    PubMed

    Wakui, Kentaro; Eto, Yujiro; Benichi, Hugo; Izumi, Shuro; Yanagida, Tetsufumi; Ema, Kazuhiro; Numata, Takayuki; Fukuda, Daiji; Takeoka, Masahiro; Sasaki, Masahide

    2014-04-03

    Broadband light sources play essential roles in diverse fields, such as high-capacity optical communications, optical coherence tomography, optical spectroscopy, and spectrograph calibration. Although a nonclassical state from spontaneous parametric down-conversion may serve as a quantum counterpart, its detection and characterization have been a challenging task. Here we demonstrate the direct detection of photon numbers of an ultrabroadband (110 nm FWHM) squeezed state in the telecom band centred at 1535 nm wavelength, using a superconducting transition-edge sensor. The observed photon-number distributions violate Klyshko's criterion for the nonclassicality. From the observed photon-number distribution, we evaluate the second- and third-order correlation functions, and characterize a multimode structure, which implies that several tens of orthonormal modes of squeezing exist in the single optical pulse. Our results and techniques open up a new possibility to generate and characterize frequency-multiplexed nonclassical light sources for quantum info-communications technology.

  5. Real-Time Strategy Game Training: Emergence of a Cognitive Flexibility Trait

    PubMed Central

    Glass, Brian D.; Maddox, W. Todd; Love, Bradley C.

    2013-01-01

    Training in action video games can increase the speed of perceptual processing. However, it is unknown whether video-game training can lead to broad-based changes in higher-level competencies such as cognitive flexibility, a core and neurally distributed component of cognition. To determine whether video gaming can enhance cognitive flexibility and, if so, why these changes occur, the current study compares two versions of a real-time strategy (RTS) game. Using a meta-analytic Bayes factor approach, we found that the gaming condition that emphasized maintenance and rapid switching between multiple information and action sources led to a large increase in cognitive flexibility as measured by a wide array of non-video gaming tasks. Theoretically, the results suggest that the distributed brain networks supporting cognitive flexibility can be tuned by engrossing video game experience that stresses maintenance and rapid manipulation of multiple information sources. Practically, these results suggest avenues for increasing cognitive function. PMID:23950921

  6. Optical arc sensor using energy harvesting power source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Kyoo Nam, E-mail: knchoi@inu.ac.kr; Rho, Hee Hyuk, E-mail: rdoubleh0902@inu.ac.kr

    Wireless sensors without an external power supply have gained considerable attention due to convenience in both installation and operation. An optical arc detecting sensor equipped with a self-sustaining power supply using an energy harvesting method was investigated. Continuous energy harvesting was attempted using a thermoelectric generator to supply standby power on the microampere scale and operating power on the mA scale. A Peltier module with heat-sink was used as a high-efficiency electricity generator. The optical arc detecting sensor with a hybrid filter showed insensitivity to fluorescent and incandescent lamps under simulated distribution panel conditions. Signal processing using an integrating function showed selective arc discharge detection capability for different arc energy levels, with a resolution below a 17 J energy difference, unaffected by bursting arc waveform. The sensor showed promise for application as an arc discharge detecting sensor in power distribution panels. An experiment with the proposed continuous energy harvesting method using thermoelectric power also showed its potential as a self-sustainable power source for a remote sensor.

  7. Insights into a spatially embedded social network from a large-scale snowball sample

    NASA Astrophysics Data System (ADS)

    Illenberger, J.; Kowald, M.; Axhausen, K. W.; Nagel, K.

    2011-12-01

    Much research has been conducted to obtain insights into the basic laws governing human travel behaviour. While the traditional travel survey has been for a long time the main source of travel data, recent approaches to use GPS data, mobile phone data, or the circulation of bank notes as a proxy for human travel behaviour are promising. The present study proposes a further source of such proxy-data: the social network. We collect data using an innovative snowball sampling technique to obtain details on the structure of a leisure-contacts network. We analyse the network with respect to its topology, the individuals' characteristics, and its spatial structure. We further show that a multiplication of the functions describing the spatial distribution of leisure contacts and the frequency of physical contacts results in a trip distribution that is consistent with data from the Swiss travel survey.
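
    The final step described here is a pointwise product of two fitted functions of distance. With assumed (hypothetical) functional forms for both factors, the construction looks like:

```python
import numpy as np

d = np.linspace(1.0, 100.0, 200)      # contact distance (km), illustrative
p_contact = d ** -1.5                 # assumed spatial distribution of leisure contacts
f_visit = np.exp(-d / 50.0)           # assumed frequency of physical contact vs distance

trips = p_contact * f_visit           # un-normalized trip-distance distribution
trips /= trips.sum() * (d[1] - d[0])  # normalize to unit area (Riemann sum)
```

    The resulting curve is then compared against the trip-distance distribution observed in the travel survey.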

  8. Optical arc sensor using energy harvesting power source

    NASA Astrophysics Data System (ADS)

    Choi, Kyoo Nam; Rho, Hee Hyuk

    2016-06-01

    Wireless sensors without an external power supply have gained considerable attention due to convenience in both installation and operation. An optical arc detecting sensor equipped with a self-sustaining power supply using an energy harvesting method was investigated. Continuous energy harvesting was attempted using a thermoelectric generator to supply standby power on the microampere scale and operating power on the mA scale. A Peltier module with heat-sink was used as a high-efficiency electricity generator. The optical arc detecting sensor with a hybrid filter showed insensitivity to fluorescent and incandescent lamps under simulated distribution panel conditions. Signal processing using an integrating function showed selective arc discharge detection capability for different arc energy levels, with a resolution below a 17 J energy difference, unaffected by bursting arc waveform. The sensor showed promise for application as an arc discharge detecting sensor in power distribution panels. An experiment with the proposed continuous energy harvesting method using thermoelectric power also showed its potential as a self-sustainable power source for a remote sensor.

  9. Software-based measurement of thin filament lengths: an open-source GUI for Distributed Deconvolution analysis of fluorescence images

    PubMed Central

    Gokhin, David S.; Fowler, Velia M.

    2016-01-01

    The periodically arranged thin filaments within the striated myofibrils of skeletal and cardiac muscle have precisely regulated lengths, which can change in response to developmental adaptations, pathophysiological states, and genetic perturbations. We have developed a user-friendly, open-source ImageJ plugin that provides a graphical user interface (GUI) for super-resolution measurement of thin filament lengths by applying Distributed Deconvolution (DDecon) analysis to periodic line scans collected from fluorescence images. In the workflow presented here, we demonstrate thin filament length measurement using a phalloidin-stained cryosection of mouse skeletal muscle. The DDecon plugin is also capable of measuring distances of any periodically localized fluorescent signal from the Z- or M-line, as well as distances between successive Z- or M-lines, providing a broadly applicable tool for quantitative analysis of muscle cytoarchitecture. These functionalities can also be used to analyze periodic fluorescence signals in nonmuscle cells. PMID:27644080

  10. Study on beam geometry and image reconstruction algorithm in fast neutron computerized tomography at NECTAR facility

    NASA Astrophysics Data System (ADS)

    Guo, J.; Bücherl, T.; Zou, Y.; Guo, Z.

    2011-09-01

    Investigations on the fast neutron beam geometry for the NECTAR facility are presented. The results of MCNP simulations and experimental measurements of the beam distributions at NECTAR are compared. Boltzmann functions are used to describe the beam profile in the detection plane, assuming the area source to be made up of a large number of single neutron point sources. An iterative algebraic reconstruction algorithm is developed, realized, and verified using both simulated and measured projection data. The feasibility of improved reconstruction in fast neutron computerized tomography at the NECTAR facility is demonstrated.
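
    The Boltzmann profile used to describe the beam edge can be sketched as follows; this is a minimal illustration with made-up parameter values (a1, a2, x0, dx are hypothetical), not the NECTAR fit itself.

```python
import numpy as np

def boltzmann(x, a1, a2, x0, dx):
    """Boltzmann (sigmoid) edge profile: a1 is the plateau inside the beam,
    a2 the level outside, x0 the half-intensity position, dx the edge width."""
    return a2 + (a1 - a2) / (1.0 + np.exp((x - x0) / dx))

# Superposing step profiles from many single point sources across an area
# source yields an edge of this shape; here we just evaluate the model curve.
x = np.linspace(-20.0, 20.0, 401)
profile = boltzmann(x, a1=1.0, a2=0.0, x0=0.0, dx=2.5)
```

    In a real analysis the four parameters would be fitted (e.g. by least squares) to the measured detection-plane distribution.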

  11. Dynamic surface acoustic response to a thermal expansion source on an anisotropic half space.

    PubMed

    Zhao, Peng; Zhao, Ji-Cheng; Weaver, Richard

    2013-05-01

    The surface displacement response to a distributed thermal expansion source is solved using the reciprocity principle. By convolving the strain Green's function with the thermal stress field created by an ultrafast laser illumination, the complete surface displacement on an anisotropic half space induced by laser absorption is calculated in the time domain. This solution applies to the near field surface displacement due to pulse laser absorption. The solution is validated by performing ultrafast laser pump-probe measurements and showing very good agreement between the measured time-dependent probe beam deflection and the computed surface displacement.

  12. Rockfall hazard and risk assessments along roads at a regional scale: example in Swiss Alps

    NASA Astrophysics Data System (ADS)

    Michoud, C.; Derron, M.-H.; Horton, P.; Jaboyedoff, M.; Baillifard, F.-J.; Loye, A.; Nicolet, P.; Pedrazzini, A.; Queyrel, A.

    2012-03-01

    Unlike fragmental rockfall runout assessments, there are only a few robust methods for quantifying rock-mass-failure susceptibility at regional scale. A detailed slope angle analysis of recent Digital Elevation Models (DEM) can be used to detect potential rockfall source areas, thanks to the Slope Angle Distribution procedure. However, this method does not provide any information on block-release frequencies inside the identified areas. The present paper augments the Slope Angle Distribution of the cliff units with its normalized cumulative distribution function. This improvement amounts to a quantitative weighting of slope angles, introducing rock-mass-failure susceptibilities inside the rockfall source areas previously detected. Rockfall runout assessment is then performed using the GIS- and process-based software Flow-R, providing relative frequencies for runout. Taking both susceptibility results into consideration, this approach can be used to establish, after calibration, hazard and risk maps at regional scale. As an example, a risk analysis of vehicle traffic exposed to rockfalls is performed along the main roads of the Swiss alpine valley of Bagnes.
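
    One way to read the normalized cumulative distribution weighting is as an empirical CDF over the slope angles within a detected source area; the sketch below is a schematic interpretation with hypothetical slope values, not the Flow-R implementation.

```python
import numpy as np

def susceptibility_weights(slopes_deg):
    """Empirical normalized cumulative distribution of slope angles: each
    cell's weight is the fraction of cells with slope <= its own, so the
    steepest cells in a detected source area get susceptibility close to 1.
    (Assumes distinct slope values; ties would need an averaged rank.)"""
    slopes = np.asarray(slopes_deg, dtype=float)
    ranks = slopes.argsort().argsort()        # 0-based rank of each cell
    return (ranks + 1) / slopes.size          # CDF value in (0, 1]

w = susceptibility_weights([48.0, 62.0, 55.0, 70.0])
```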

  13. Frequency-selective fading statistics of shallow-water acoustic communication channel with a few multipaths

    NASA Astrophysics Data System (ADS)

    Bae, Minja; Park, Jihyun; Kim, Jongju; Xue, Dandan; Park, Kyu-Chil; Yoon, Jong Rak

    2016-07-01

    The bit error rate of an underwater acoustic communication system is related to the multipath fading statistics, which determine the signal-to-noise ratio. The amplitude and delay of each path depend on sea surface roughness, propagation medium properties, and source-to-receiver range as a function of frequency; received signals therefore show frequency-dependent fading. A shallow-water acoustic communication channel generally shows a few strong multipaths that interfere with each other, and the resulting interference affects the fading statistics model. In this study, frequency-selective fading statistics are modeled on the basis of the phasor representation of the complex path amplitude. The fading statistics distribution is parameterized by the frequency-dependent constructive or destructive interference of the multipaths. At a 16 m depth with a muddy bottom, a wave height of 0.2 m, and source-to-receiver ranges of 100 and 400 m, the fading statistics tend to follow a Rayleigh distribution at a destructive interference frequency but a Rice distribution at a constructive interference frequency. The theoretical fading statistics matched the experimental ones well.
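
    The distinction between Rayleigh and Rice statistics at destructive versus constructive interference frequencies can be illustrated with a phasor Monte Carlo; the amplitudes below are illustrative placeholders, not the measured channel values.

```python
import numpy as np

rng = np.random.default_rng(0)

def envelope(dominant, sigma=0.1, n=200_000):
    """Envelope of a fixed (specular) phasor plus complex Gaussian scatter.
    dominant ~ 0 mimics a destructive-interference frequency (Rayleigh);
    dominant >> sigma mimics a constructive one (Rice)."""
    scatter = sigma * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    return np.abs(dominant + scatter)

r_destructive = envelope(dominant=0.0)   # strong paths cancel -> Rayleigh
r_constructive = envelope(dominant=1.0)  # strong paths add -> Rice
```

    The Rayleigh envelope has mean sigma*sqrt(pi/2), while for a large K-factor the Rice envelope concentrates near the dominant amplitude.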

  14. The Oracle of DEM

    NASA Astrophysics Data System (ADS)

    Gayley, Kenneth

    2013-06-01

    The predictions of the famous Greek oracle of Delphi were just ambiguous enough to seem to convey information, yet the user was only seeing their own thoughts. Are there ways in which X-ray spectral analysis is like that oracle? It is shown, using heuristic, generic response functions to mimic actual spectral inversion, that the widely known ill conditioning, which makes formal inversion impossible in the presence of random noise, also makes a wide variety of different source distributions (DEMs) produce quite similar X-ray continua and resonance-line fluxes. Indeed, the sole robustly inferable attribute for a thermal, optically thin resonance-line spectrum with normal abundances in CIE is its average temperature. The shape of the DEM distribution, on the other hand, is not well constrained, and may actually depend more on the analysis method, no matter how sophisticated, than on the source plasma. The case is made that X-ray spectra can tell us average temperature, metallicity, and absorbing column, but the main thing they cannot tell us is the main thing they are most often used to infer: the differential emission measure distribution.

  15. Cosmological constraints from X-ray all sky surveys, from CODEX to eROSITA

    NASA Astrophysics Data System (ADS)

    Finoguenov, A.

    2017-10-01

    Large-area cluster cosmology has long been a multiwavelength discipline. Understanding the effect of various selections is currently the main path to improving the validity of cluster cosmological results. Many of these results are based on the large-area sample derived from RASS data. We perform wavelet detection of X-ray sources and make extensive simulations of the detection of clusters in the RASS data. We assign an optical richness to each of the 25,000 detected X-ray sources in the 10,000 square degrees of the SDSS BOSS area. We show that there is no obvious separation of the sources into galaxy clusters and AGN based on the distribution of systems in richness. We conclude that previous catalogs, such as MACS and REFLEX, are all subject to a complex optical selection function in addition to an X-ray selection. We provide a complete model of the identification of cluster counts as galaxy clusters, which includes chance identification, the effect of the AGN halo occupation distribution, and the thermal emission of the ICM. Finally, we present the cosmological results obtained using this sample.

  16. Multifunctional voltage source inverter for renewable energy integration and power quality conditioning.

    PubMed

    Dai, NingYi; Lam, Chi-Seng; Zhang, WenChen

    2014-01-01

    In order to utilize energy from renewable sources, a power conversion system is necessary, in which the voltage source inverter (VSI) is usually the last stage injecting power into the grid. It is an economical solution to add the function of power quality conditioning to the grid-connected VSI in the low-voltage distribution system. Two multifunctional VSIs are studied in this paper, namely the inductive-coupling VSI and the capacitive-coupling VSI, which are named after the fundamental-frequency impedance of their coupling branch. The operating voltages of the two VSIs are compared when they are used for renewable energy integration and power quality conditioning simultaneously. The operating voltage of the capacitive-coupling VSI can be set much lower than that of the inductive-coupling VSI when reactive power is supplied to compensate inductive loads. Since a large portion of the loads in the distribution system are inductive, the capacitive-coupling VSI is studied further. The design and control method of the multifunctional capacitive-coupling VSI are proposed in this paper. Simulation and experimental results are provided to show its validity.

  17. Spatial distribution of pollutants in the area of the former CHP plant

    NASA Astrophysics Data System (ADS)

    Cichowicz, Robert

    2018-01-01

    The quality of atmospheric air and the level of its pollution are now among the most important issues connected with life on Earth. The frequent nuisances and exceedances of pollution standards often described in the media are generated by both low-emission sources and mobile sources. Local organized energy emission sources, such as local boiler houses or CHP plants, also have an impact on air pollution. At the same time, the role of local power stations in shaping air pollution immission fields depends on the height of the emitters and on the functioning of waste gas treatment installations. Analysis of the air pollution distribution was carried out in two series, i.e., 2 and 10 weeks after closure of the CHP plant. As a reference point, the largest street intersection in the immediate vicinity of the plant was selected, from which virtual circles were drawn every 50 meters, on which 31 measuring points were located. As a result, the levels of carbon dioxide, hydrogen sulfide, and ammonia could be observed and analyzed as a function of the distance from the street intersection.

  18. Multifunctional Voltage Source Inverter for Renewable Energy Integration and Power Quality Conditioning

    PubMed Central

    Dai, NingYi; Lam, Chi-Seng; Zhang, WenChen

    2014-01-01

    In order to utilize energy from renewable sources, a power conversion system is necessary, in which the voltage source inverter (VSI) is usually the last stage injecting power into the grid. It is an economical solution to add the function of power quality conditioning to the grid-connected VSI in the low-voltage distribution system. Two multifunctional VSIs are studied in this paper, namely the inductive-coupling VSI and the capacitive-coupling VSI, which are named after the fundamental-frequency impedance of their coupling branch. The operating voltages of the two VSIs are compared when they are used for renewable energy integration and power quality conditioning simultaneously. The operating voltage of the capacitive-coupling VSI can be set much lower than that of the inductive-coupling VSI when reactive power is supplied to compensate inductive loads. Since a large portion of the loads in the distribution system are inductive, the capacitive-coupling VSI is studied further. The design and control method of the multifunctional capacitive-coupling VSI are proposed in this paper. Simulation and experimental results are provided to show its validity. PMID:25177725

  19. Radiometric analysis of photographic data by the effective exposure method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Constantine, B J

    1972-04-01

    The effective exposure method provides for radiometric analysis of photographic data. A three-dimensional model, where density is a function of energy and wavelength, is postulated to represent the film response function. Calibration exposures serve to eliminate the other factors which affect image density. The effective exposure causing an image can be determined by comparing the image density with that of a calibration exposure. If the relative spectral distribution of the source is known, irradiance and/or radiance can be unfolded from the effective exposure expression.
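
    The comparison step can be sketched as inverting a calibration (characteristic) curve; the curve below is a hypothetical sigmoid stand-in, not real sensitometric data.

```python
import numpy as np

# Hypothetical step-wedge calibration: log10 effective exposures and the
# densities they produced on the film.
log_exposure_cal = np.linspace(-2.0, 1.0, 13)
density_cal = 0.2 + 2.8 / (1.0 + np.exp(-3.0 * (log_exposure_cal + 0.5)))

def effective_exposure(density):
    """Compare an image density against the calibration densities and
    interpolate back to the effective exposure that caused it."""
    return 10.0 ** np.interp(density, density_cal, log_exposure_cal)
```

    Given the relative spectral distribution of the source, irradiance or radiance would then be unfolded from this effective exposure.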

  20. Organic Functional Group Composition of Submicron Aerosol Particles at Alert, Nunavut, during 2012-2014

    NASA Astrophysics Data System (ADS)

    Russell, L. M.; Leaitch, W. R.; Liu, J.; Desiree, T. S.; Huang, L.; Sharma, S.; Chivulescu, A.; Veber, D.; Zhang, W.

    2016-12-01

    Long-term measurements of submicron aerosol particle chemical composition and size distributions are essential for evaluating whether global climate models correctly transport particles from lower latitudes to polar regions, especially in the winter months when satellite retrieval of aerosol properties is limited. In collaboration with ongoing measurements by the Dr. Neil Trivett Global Atmospheric Watch observatory at Alert, Nunavut (82.5°N; elevation 185 m-ASL), we measured the organic functional group composition of submicron aerosol particles sampled from the 10-m inlet from April 2012 to October 2014. The sampling site is approximately 10 km from the Alert station, and vehicle traffic is restricted except when filter sampling is stopped, making the impact of local emissions on submicron particle mass concentrations small. The organic functional group (OFG) composition is measured by Fourier Transform Infrared spectroscopy of samples collected on pre-loaded Teflon filters and stored and shipped frozen to La Jolla, California, for analysis. Samples were collected weekly to complement the twice hourly online measurements of non-refractory organic and inorganic composition by an Aerodyne ACSM. Organic components are shown to contribute a substantial fraction of the measured aerosol submicron mass year round. These measurements illustrate the seasonal contributions to the aerosol size distribution from OFG and illustrate the potential sources of the OFG at this remote site. The three largest OFG sources are transported fossil fuel combustion emissions from lower latitudes, sea spray and other marine particles, and episodic contributions from wildfires, volcanoes, and other high-latitude events. These sources are similar to those identified from earlier OFG measurements at Barrow, Alaska, and during the ICEALOT cruise in the Arctic Ocean.

  1. α7 nicotinic ACh receptors as a ligand-gated source of Ca(2+) ions: the search for a Ca(2+) optimum.

    PubMed

    Uteshev, Victor V

    2012-01-01

    The spatiotemporal distribution of cytosolic Ca(2+) ions is a key determinant of neuronal behavior and survival. Distinct sources of Ca(2+) ions including ligand- and voltage-gated Ca(2+) channels contribute to intracellular Ca(2+) homeostasis. Many normal physiological and therapeutic neuronal functions are Ca(2+)-dependent; however, an excess of cytosolic Ca(2+) or a lack of the appropriate balance between Ca(2+) entry and clearance may destroy cellular integrity and cause cellular death. Therefore, the existence of optimal spatiotemporal patterns of cytosolic Ca(2+) elevations and thus, optimal activation of ligand- and voltage-gated Ca(2+) ion channels are postulated to benefit neuronal function and survival. Alpha7 nicotinic acetylcholine receptors (nAChRs) are highly permeable to Ca(2+) ions and play an important role in modulation of neurotransmitter release, gene expression and neuroprotection in a variety of neuronal and non-neuronal cells. In this review, the focus is placed on α7 nAChR-mediated currents and Ca(2+) influx and how this source of Ca(2+) entry compares to NMDA receptors in supporting cytosolic Ca(2+) homeostasis, neuronal function and survival.

  2. THE HIGHEST-ENERGY COSMIC RAYS CANNOT BE DOMINANTLY PROTONS FROM STEADY SOURCES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, Ke; Kotera, Kumiko

    The bulk of observed ultrahigh-energy cosmic rays could be light or heavier elements and originate from either a steady or a transient population of sources. This leaves us with four general categories of sources. Energetic requirements set a lower limit on single-source luminosities, while the distribution of particle arrival directions in the sky sets a lower limit on the source number density. The latter constraint depends on the angular smearing in the skymap due to the magnetic deflections of the charged particles during their propagation from the source to the Earth. We contrast these limits with the luminosity functions from surveys of existing luminous steady objects in the nearby universe and strongly constrain one of the four categories of source models, namely, steady proton sources. The possibility that cosmic rays with energy >8 × 10^19 eV are dominantly pure protons coming from steady sources is excluded at the 95% confidence level, under the safe assumption that protons experience less than 30° of magnetic deflection in flight.

  3. Target mass effects in parton quasi-distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Radyushkin, A. V.

    We study the impact of the non-zero (and apparently large) value of the nucleon mass M on the shape of parton quasi-distributions Q(y, p3), in particular on its change with the nucleon momentum p3. We observe that the usual target-mass corrections induced by the M-dependence of the twist-2 operators are rather small. Moreover, we show that within the framework based on parametrizations by transverse-momentum-dependent distribution functions (TMDs), these corrections are canceled by higher-twist contributions. Lastly, we identify a novel source of kinematic target-mass dependence of TMDs and build models corrected for such dependence. We find that the resulting changes may be safely neglected for p3 ≳ 2M.

  4. Target mass effects in parton quasi-distributions

    DOE PAGES

    Radyushkin, A. V.

    2017-05-11

    We study the impact of the non-zero (and apparently large) value of the nucleon mass M on the shape of parton quasi-distributions Q(y, p3), in particular on its change with the nucleon momentum p3. We observe that the usual target-mass corrections induced by the M-dependence of the twist-2 operators are rather small. Moreover, we show that within the framework based on parametrizations by transverse-momentum-dependent distribution functions (TMDs), these corrections are canceled by higher-twist contributions. Lastly, we identify a novel source of kinematic target-mass dependence of TMDs and build models corrected for such dependence. We find that the resulting changes may be safely neglected for p3 ≳ 2M.

  5. First Results on Angular Distributions of Thermal Dileptons in Nuclear Collisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnaldi, R.; Colla, A.; Cortese, P.

    The NA60 experiment at the CERN Super Proton Synchrotron has studied dimuon production in 158A GeV In-In collisions. The strong excess of pairs above the known sources found in the complete mass region 0.2

  6. Method and apparatus for reducing the harmonic currents in alternating-current distribution networks

    DOEpatents

    Beverly, Leon H.; Hance, Richard D.; Kristalinski, Alexandr L.; Visser, Age T.

    1996-01-01

    An improved apparatus and method reduce the harmonic content of AC line and neutral line currents in polyphase AC source distribution networks. The apparatus and method employ a polyphase Zig-Zag transformer connected between the AC source distribution network and a load. They also employ a mechanism for increasing the source neutral impedance of the AC source distribution network; this mechanism can consist of a choke installed in the neutral line between the AC source and the Zig-Zag transformer.

  7. Method and apparatus for reducing the harmonic currents in alternating-current distribution networks

    DOEpatents

    Beverly, L.H.; Hance, R.D.; Kristalinski, A.L.; Visser, A.T.

    1996-11-19

    An improved apparatus and method reduce the harmonic content of AC line and neutral line currents in polyphase AC source distribution networks. The apparatus and method employ a polyphase Zig-Zag transformer connected between the AC source distribution network and a load. They also employ a mechanism for increasing the source neutral impedance of the AC source distribution network; this mechanism can consist of a choke installed in the neutral line between the AC source and the Zig-Zag transformer. 23 figs.

  8. Spot size measurement of a flash-radiography source using the pinhole imaging method

    NASA Astrophysics Data System (ADS)

    Wang, Yi; Li, Qin; Chen, Nan; Cheng, Jin-Ming; Xie, Yu-Tong; Liu, Yun-Long; Long, Quan-Hong

    2016-07-01

    The spot size of the X-ray source is a key parameter of a flash-radiography facility and is usually quoted as an evaluation of its resolving power. The pinhole imaging technique is applied to measure the spot size of the Dragon-I linear induction accelerator, yielding a two-dimensional spatial distribution of the source spot. Experimental measurements of the spot image are performed while the transport and focusing of the electron beam are tuned by adjusting the currents of the solenoids in the downstream section. The full-width-at-half-maximum spot size and the spot size defined from the spatial frequency at the half-peak value of the modulation transfer function are calculated and discussed.
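
    The two spot-size definitions can be illustrated on a synthetic one-dimensional Gaussian spot profile; the width and grid below are arbitrary illustrative choices, not Dragon-I data.

```python
import numpy as np

sigma = 1.2                                  # hypothetical spot width, mm
x = np.linspace(-10.0, 10.0, 4001)
lsf = np.exp(-x**2 / (2.0 * sigma**2))       # line spread function

# Definition 1: full width at half maximum, read off the profile directly.
above = x[lsf >= 0.5 * lsf.max()]
fwhm = above[-1] - above[0]                  # ~2.355 * sigma for a Gaussian

# Definition 2: spatial frequency where the modulation transfer function
# (magnitude of the Fourier transform of the normalized profile) falls to
# half its peak; for a Gaussian, MTF(f) = exp(-2 pi^2 sigma^2 f^2).
f = np.fft.rfftfreq(x.size, d=x[1] - x[0])
mtf = np.abs(np.fft.rfft(lsf / lsf.sum()))
f_half = f[np.argmin(np.abs(mtf - 0.5))]
```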

  9. Passage relevance models for genomics search.

    PubMed

    Urbain, Jay; Frieder, Ophir; Goharian, Nazli

    2009-03-19

    We present a passage relevance model for integrating syntactic and semantic evidence of biomedical concepts and topics using a probabilistic graphical model. Component models of topics, concepts, terms, and document are represented as potential functions within a Markov Random Field. The probability of a passage being relevant to a biologist's information need is represented as the joint distribution across all potential functions. Relevance model feedback of top ranked passages is used to improve distributional estimates of query concepts and topics in context, and a dimensional indexing strategy is used for efficient aggregation of concept and term statistics. By integrating multiple sources of evidence including dependencies between topics, concepts, and terms, we seek to improve genomics literature passage retrieval precision. Using this model, we are able to demonstrate statistically significant improvements in retrieval precision using a large genomics literature corpus.

  10. Inference of relativistic electron spectra from measurements of inverse Compton radiation

    NASA Astrophysics Data System (ADS)

    Craig, I. J. D.; Brown, J. C.

    1980-07-01

    The inference of relativistic electron spectra from spectral measurement of inverse Compton radiation is discussed for the case where the background photon spectrum is a Planck function. The problem is formulated in terms of an integral transform that relates the measured spectrum to the unknown electron distribution. A general inversion formula is used to provide a quantitative assessment of the information content of the spectral data. It is shown that the observations must generally be augmented by additional information if anything other than a rudimentary two or three parameter model of the source function is to be derived. It is also pointed out that since a similar equation governs the continuum spectra emitted by a distribution of black-body radiators, the analysis is relevant to the problem of stellar population synthesis from galactic spectra.

  11. Evaluation of the Thermodynamic Consistency of Closure Approximations in Several Models Proposed for the Description of Liquid Crystalline Dynamics

    NASA Astrophysics Data System (ADS)

    Edwards, Brian J.

    2002-05-01

    Given the premise that a set of dynamical equations must possess a definite, underlying mathematical structure to ensure local and global thermodynamic stability, as has been well documented, several different models for describing liquid crystalline dynamics are examined with respect to said structure. These models, each derived during the past several years using a specific closure approximation for the fourth moment of the distribution function in Doi's rigid rod theory, are all shown to be inconsistent with this basic mathematical structure. The source of this inconsistency lies in Doi's expressions for the extra stress tensor and temporal evolution of the order parameter, which are rederived herein using a transformation that allows for internal compatibility with the underlying mathematical structure that is present on the distribution function level of description.

  12. Understanding EROS2 observations toward the spiral arms within a classical Galactic model framework

    NASA Astrophysics Data System (ADS)

    Moniez, M.; Sajadian, S.; Karami, M.; Rahvar, S.; Ansari, R.

    2017-08-01

    Aims: EROS (Expérience de Recherche d'Objets Sombres) has searched for microlensing toward four directions in the Galactic plane away from the Galactic center. The interpretation of the catalog optical depth is complicated by the spread of the source distance distribution. We compare the EROS microlensing observations with Galactic models (including the Besançon model), tuned to fit the EROS source catalogs, and take into account all observational data such as the microlensing optical depth, the Einstein crossing durations, and the color and magnitude distributions of the catalogued stars. Methods: We simulated EROS-like source catalogs using the HIgh-Precision PARallax COllecting Satellite (Hipparcos) database, the Galactic mass distribution, and an interstellar extinction table. Taking into account the EROS star detection efficiency, we were able to produce simulated color-magnitude diagrams that fit the observed diagrams. This allows us to estimate average microlensing optical depths and event durations that are directly comparable with the measured values. Results: Both the Besançon model and our Galactic model allow us to fully understand the EROS color-magnitude data. The average optical depths and mean event durations calculated from these models are in reasonable agreement with the observations. Varying the Galactic structure parameters through simulation, we were also able to deduce constraints on the kinematics of the disk, the disk stellar mass function (at a few kpc distance from the Sun), and the maximum contribution of a thick disk of compact objects in the Galactic plane (Mthick < 5-7 × 10^10 M⊙ at 95% confidence, depending on the model). We also show that the microlensing data toward one of our monitored directions are significantly sensitive to the Galactic bar parameters, although much larger statistics are needed to provide competitive constraints.
Conclusions: Our simulation gives a better understanding of the lens and source spatial distributions in the microlensing events. The goodness of a global fit taking into account all the observables (from the color-magnitude diagrams and microlensing observations) shows the validity of the Galactic models. Our tests with the parameters excursions show the unique sensitivity of the microlensing data to the kinematical parameters and stellar initial mass function. http://www.lal.in2p3.fr/recherche/eros

  13. Carnivore-specific SINEs (Can-SINEs): distribution, evolution, and genomic impact.

    PubMed

    Walters-Conte, Kathryn B; Johnson, Diana L E; Allard, Marc W; Pecon-Slattery, Jill

    2011-01-01

    Short interspersed nuclear elements (SINEs) are a type of class 1 transposable element (retrotransposon) with features that allow investigators to resolve evolutionary relationships between populations and species while providing insight into genome composition and function. Characterization of a Carnivora-specific SINE family, the Can-SINEs, has aided comparative genomic studies by providing the rare genomic changes and neutral sequence variants often needed to resolve difficult evolutionary questions. In addition, Can-SINEs constitute a significant source of functional diversity within Carnivora. Publication of the whole-genome sequences of the domestic dog, domestic cat, and giant panda serves as a valuable resource for comparative genomic inferences gleaned from Can-SINEs. In anticipation of forthcoming studies bolstered by new genomic data, this review describes the discovery and characterization of Can-SINE motifs as well as their composition, distribution, and effects on genome function. As the contribution of noncoding sequences to genomic diversity becomes more apparent, SINEs and other transposable elements will play an increasingly large role in mammalian comparative genomics.

  14. Carnivore-Specific SINEs (Can-SINEs): Distribution, Evolution, and Genomic Impact

    PubMed Central

    Johnson, Diana L.E.; Allard, Marc W.; Pecon-Slattery, Jill

    2011-01-01

    Short interspersed nuclear elements (SINEs) are a type of class 1 transposable element (retrotransposon) with features that allow investigators to resolve evolutionary relationships between populations and species while providing insight into genome composition and function. Characterization of a Carnivora-specific SINE family, the Can-SINEs, has aided comparative genomic studies by providing the rare genomic changes and neutral sequence variants often needed to resolve difficult evolutionary questions. In addition, Can-SINEs constitute a significant source of functional diversity within Carnivora. Publication of the whole-genome sequences of the domestic dog, domestic cat, and giant panda serves as a valuable resource for comparative genomic inferences gleaned from Can-SINEs. In anticipation of forthcoming studies bolstered by new genomic data, this review describes the discovery and characterization of Can-SINE motifs as well as their composition, distribution, and effects on genome function. As the contribution of noncoding sequences to genomic diversity becomes more apparent, SINEs and other transposable elements will play an increasingly large role in mammalian comparative genomics. PMID:21846743

  15. Evaluation of the image quality of telescopes using the star test

    NASA Astrophysics Data System (ADS)

    Vazquez y Monteil, Sergio; Salazar Romero, Marcos A.; Gale, David M.

    2004-10-01

    The Point Spread Function (PSF), or star test, is one of the main criteria for assessing the quality of the image formed by a telescope. In a real system, the distribution of irradiance in the image of a point source is given by the PSF, a function that is highly sensitive to aberrations. The PSF of a telescope may be determined by measuring the intensity distribution in the image of a star. Alternatively, if the aberrations present in the optical system are already known, diffraction theory may be used to calculate the function. In this paper we propose a method for determining the wavefront aberrations from the PSF, using Genetic Algorithms to perform an optimization process starting from the PSF instead of the more traditional method of adjusting an aberration polynomial. We show that this method of phase recovery is immune to noise-induced errors arising during image acquisition and registration. Some practical results are shown.

  16. A radially resolved kinetic model for nonlocal electron ripple diffusion losses in tokamaks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, Scott

    A relatively simple radially resolved kinetic model is applied to the ripple diffusion problem for electrons in tokamaks. The distribution function f(r,v) is defined on a two-dimensional grid, where r is the radial coordinate and v is the velocity coordinate. Particle transport in the radial direction is from ripple and banana diffusion, and transport in the velocity direction is described by the Fokker-Planck equation. Particles and energy are replenished by source functions that are adjusted to maintain a constant central density and temperature. The relaxed profiles of f(r,v) show that the electron distribution function at the wall contains suprathermal electrons that have diffused from the interior and enhance ripple transport. The transport at the periphery is therefore nonlocal. The energy replacement times from the computational model are close to the experimental replacement times for the tokamak discharges in the compilation by Pfeiffer and Waltz [Nucl. Fusion 19, 51 (1979)].

  17. Unfolding the neutron spectrum of a NE213 scintillator using artificial neural networks.

    PubMed

    Sharghi Ido, A; Bonyadi, M R; Etaati, G R; Shahriari, M

    2009-10-01

    Artificial neural network technology has been applied to unfold neutron spectra from the pulse-height distribution measured with an NE213 liquid scintillator. Both single- and multi-layer perceptron models were implemented to unfold the spectrum of an Am-Be neutron source. The activation function and the connectivity of the neurons were investigated and the results analyzed in terms of network performance. The simulations show that the network using the satlins transfer function performs best; in addition, omitting the bias connection of the neurons improves the performance of the network. The SCINFUL code was used to generate the response functions for the training phase of the process. Finally, the results of the neural network simulation were compared with those of the FORIST unfolding code for both (241)Am-Be and (252)Cf neutron sources, and the two are in good agreement.
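The unfolding idea can be illustrated with a minimal numpy stand-in: a linear single-layer perceptron trained by gradient descent to invert a toy detector response. The response matrix, synthetic spectra, and training settings below are hypothetical; they are not SCINFUL responses or the paper's network architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

n_e, n_ch = 8, 16   # energy bins, pulse-height channels

# Toy detector response matrix: each energy bin yields a smeared
# pulse-height distribution (stand-in for SCINFUL-computed responses).
centers = 2*np.arange(n_e) + 1
R = np.exp(-0.5*((np.arange(n_ch)[:, None] - centers[None, :])/1.5)**2)
R /= R.sum(axis=0)

# Training set: random synthetic spectra folded through the response
S = rng.random((200, n_e))
M = S @ R.T

# Single-layer (linear) perceptron trained by gradient descent on MSE
W = np.zeros((n_e, n_ch))
lr = 0.5
for epoch in range(5000):
    grad = (M @ W.T - S).T @ M / len(S)
    W -= lr * grad

# Unfold a held-out spectrum from its folded pulse-height distribution
s_true = rng.random(n_e)
s_est = W @ (R @ s_true)
```

The trained weights approximate a regularized inverse of the response matrix, which is the essence of unfolding; real detector responses are far less well conditioned than this toy example.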

  18. Two-Flux Green's Function Analysis for Transient Spectral Radiation in a Composite

    NASA Technical Reports Server (NTRS)

    Siegel, Robert

    1996-01-01

    An analysis is developed for obtaining transient temperatures in a two-layer semitransparent composite with spectrally dependent properties. Each external boundary of the composite is subjected to radiation and convection. The two-flux radiative transfer equations are solved by deriving a Green's function. This yields the local radiative heat source needed to numerically solve the transient energy equation. An advantage of the two-flux method is that isotropic scattering is included without added complexity. The layer refractive indices are larger than one. This produces internal reflections at the boundaries and the internal interface; the reflections are assumed diffuse. Spectral results using the Green's function method are verified by comparing with numerical solutions using the exact radiative transfer equations. Transient temperature distributions are given to illustrate the effect of radiative heating on one side of a composite with external convective cooling. The protection of a material from incident radiation is illustrated by adding scattering to the layer adjacent to the radiative source.

  19. Attenuation - The Ugly Stepsister of Velocity in the Noise Correlation Family

    NASA Astrophysics Data System (ADS)

    Lawrence, J. F.; Prieto, G.; Denolle, M.; Seats, K. J.

    2012-12-01

    Noise correlation functions and noise transfer functions have been shown in practice to preserve relative amplitude information, although amplitude is more challenging to resolve reliably than phase. Yet amplitude contains important information about wavefield interactions with subsurface structure, including focusing/defocusing and seismic attenuation. To isolate the anelastic effects, i.e. attenuation, we measure amplitude decay with increasing station separation. We present numerical results showing that noise correlation functions (NCFs) preserve relative amplitude information and properly retrieve seismic attenuation given a sufficient noise source distribution and appropriate processing. Attenuation is only preserved through the relative decay of distinct waves from multiple simultaneous source locations. With appropriate whitening (and no time-domain normalization), the coherency preserves correlation amplitudes proportional to the relative decay expected for the inter-station spacing. We present new attenuation results for the United States, and particularly the Yellowstone region, that illustrate lateral variations strongly correlated with known geological features such as sedimentary basins, crustal blocks and active volcanism.
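The core measurement, fitting an attenuation coefficient to amplitude decay with distance after correcting for geometrical spreading, can be sketched on synthetic data. The decay model, noise level, and parameter values below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic coherency amplitudes: exponential anelastic decay on top of
# sqrt(r) geometrical spreading for surface waves, with 2% multiplicative noise.
r = np.linspace(20e3, 200e3, 30)        # inter-station distances, m
alpha_true = 1.2e-5                     # attenuation coefficient, 1/m
amp = np.exp(-alpha_true * r) / np.sqrt(r) * (1 + 0.02 * rng.normal(size=r.size))

# Correct for geometrical spreading, then fit alpha by log-linear regression
y = np.log(amp * np.sqrt(r))
slope, _ = np.polyfit(r, y, 1)
alpha_est = -slope
# alpha relates to the quality factor via alpha = pi * f / (Q * v)
```

In practice the decay must be measured jointly across many simultaneous sources, as the abstract stresses, rather than from a single pairwise fit.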

  20. Luminosity function and cosmological evolution of X-ray selected quasars

    NASA Technical Reports Server (NTRS)

    Maccacaro, T.; Gioia, I. M.

    1983-01-01

    The preliminary analysis of a complete sample of 55 X-ray sources is presented as part of the Medium Sensitivity Survey of the Einstein Observatory. A pure luminosity evolution law is derived by requiring a uniform distribution of the sources, and the rates of evolution for Active Galactic Nuclei (AGNs) observed by X-ray and optical techniques are compared. A nonparametric representation of the luminosity function is fitted to the observational data. On the basis of the reduced data, it is determined that: (1) AGNs evolve cosmologically; (2) less evolution is required to explain the X-ray data than the optical data; (3) the high-luminosity portion of the X-ray luminosity function can be described by a power law with a slope of gamma = 3.6; and (4) the X-ray luminosity function flattens at low luminosities. Some implications of these results for conventional theoretical models of the evolution of quasars and Seyfert galaxies are discussed.

  1. A numerical study on dual-phase-lag model of bio-heat transfer during hyperthermia treatment.

    PubMed

    Kumar, P; Kumar, Dinesh; Rai, K N

    2015-01-01

    The success of hyperthermia in the treatment of cancer depends on the precise prediction and control of temperature, so understanding the temperature distribution within living biological tissue is essential for hyperthermia treatment planning. In this paper, the dual-phase-lag model of bio-heat transfer is studied using a Gaussian-distribution source term under the most generalized boundary condition during hyperthermia treatment. An approximate analytical solution is obtained by the finite element wavelet Galerkin method, which uses the Legendre wavelet as a basis function; the multi-resolution analysis of the Legendre wavelet localizes small-scale variations of the solution and allows fast switching of functional bases. The whole analysis is presented in dimensionless form. The dual-phase-lag model is compared with the Pennes and thermal wave models of bio-heat transfer, and large differences are found in the temperature at the hyperthermia position and in the time needed to reach the hyperthermia temperature as the value of τT increases. Particular cases in which the surface is subjected to boundary conditions of the first, second and third kind are discussed in detail. The dual-phase-lag model and the finite element wavelet Galerkin solution method help in precise prediction of temperature, and the Gaussian-distribution source term helps in controlling temperature during hyperthermia treatment, making this study useful for clinical applications. Copyright © 2015 Elsevier Ltd. All rights reserved.
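A feel for the role of the Gaussian source term can be had from the classical Pennes bio-heat equation (the limiting case of the dual-phase-lag model), solved here by an explicit finite-difference scheme. All tissue, perfusion, and source values are illustrative assumptions, and the paper's phase-lag terms and wavelet-Galerkin method are not reproduced.

```python
import numpy as np

# Pennes bio-heat equation in 1D with a Gaussian heat source
rho, c, k = 1000.0, 4000.0, 0.5        # tissue density, heat capacity, conductivity
wb, rhob, cb = 5e-4, 1000.0, 4000.0    # blood perfusion rate and blood properties
Ta = 37.0                              # arterial (core) temperature, deg C

L, nx = 0.05, 101                      # 5 cm of tissue
x = np.linspace(0.0, L, nx)
dx = x[1] - x[0]
alpha = k / (rho * c)
dt = 0.2 * dx**2 / alpha               # within the explicit stability limit

# Gaussian source centred on the treatment position (hypothetical values)
x0, sigma, Q0 = 0.025, 0.003, 3e4      # m, m, W/m^3
Q = Q0 * np.exp(-((x - x0)**2) / (2 * sigma**2))

T = np.full(nx, 37.0)
t = 0.0
while t < 600.0:                       # 10 minutes of heating
    lap = np.zeros_like(T)
    lap[1:-1] = (T[2:] - 2*T[1:-1] + T[:-2]) / dx**2
    T = T + dt * (alpha * lap + (wb*rhob*cb*(Ta - T) + Q) / (rho * c))
    T[0] = T[-1] = 37.0                # boundaries held at core temperature
    t += dt
```

The localized Gaussian source confines the temperature rise to the treatment position, which is the control property the abstract highlights.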

  2. Near Earth Inner-Source and Interstellar Pickup Ions Observed with the Hot Plasma Composition Analyzer of the Magnetospheric Multiscale Mission Mms-Hpca

    NASA Astrophysics Data System (ADS)

    Gomez, R. G.; Fuselier, S. A.; Mukherjee, J.; Gonzalez, C. A.

    2017-12-01

    Pickup ions found near the Earth are generally picked up in the rest frame of the solar wind and propagate radially outward from their point of origin while gyrating about the magnetic field. Pickup ions come in two general populations: interstellar and inner-source ions. Interstellar ions originate in the interstellar medium, enter the solar system in a neutral charge state, are gravitationally focused on the side of the Sun opposite their arrival direction, and are ionized when they travel near the Sun. Inner-source ions originate at a location within the solar system, between the Sun and the observation point. The two pickup-ion populations share similarities in composition and charge state, so measuring their dynamics through their velocity distribution functions, f(v), is essential both for distinguishing them and for determining their spatial and temporal origins. We present results of studies conducted with the four Hot Plasma Composition Analyzers of the Magnetospheric Multiscale Mission (MMS-HPCA). These instruments measure the full-sky (4π steradian) distribution functions of near-Earth plasmas at a 10 second cadence over an energy-per-charge range of 0.001-40 keV/e. The instruments can also parse this combined energy-solid angle phase space with 22.5° resolution in polar angle and 11.25° in azimuthal angle, allowing clear measurement of the pitch-angle scattering of the ions.

  3. Spatio-temporal reconstruction of brain dynamics from EEG with a Markov prior.

    PubMed

    Hansen, Sofie Therese; Hansen, Lars Kai

    2017-03-01

    Electroencephalography (EEG) can capture brain dynamics with high temporal resolution. By projecting the scalp EEG signal back to its origin in the brain, high spatial resolution can also be achieved, so source-localized EEG has the potential to be a very powerful tool for understanding the functional dynamics of the brain. Solving the inverse problem of EEG is, however, highly ill-posed, as there are many more potential locations of the EEG generators than EEG measurement points. Several well-known properties of brain dynamics can be exploited to alleviate this problem. More short-range than long-range connections exist in the brain, arguing for spatially focal sources; additionally, recent work (Delorme et al., 2012) argues that EEG can be decomposed into components having sparse source distributions. On the temporal side, both short- and long-term stationarity of brain activation are seen. We summarize these insights in an inverse solver, the so-called "Variational Garrote" (Kappen and Gómez, 2013). Using a Markov prior we can incorporate flexible degrees of temporal stationarity, and through spatial basis functions spatially smooth distributions are obtained; sparsity of these is inherent in the Variational Garrote solver. We name our method MarkoVG and demonstrate its ability to adapt to the temporal smoothness and spatial sparsity in simulated EEG data. Finally, a benchmark EEG dataset is used to demonstrate MarkoVG's ability to recover non-stationary brain dynamics. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Ambient noise tomography with non-uniform noise sources and low aperture networks: case study of deep geothermal reservoirs in northern Alsace, France

    NASA Astrophysics Data System (ADS)

    Lehujeur, Maximilien; Vergne, Jérôme; Maggi, Alessia; Schmittbuhl, Jean

    2017-01-01

    We developed and applied a method for ambient noise surface wave tomography that can deal with noise cross-correlation functions governed to first order by a non-uniform distribution of the ambient seismic noise sources. The method inverts the azimuthal distribution of noise sources that are assumed to be far from the network, together with the spatial variations of the phase and group velocities on an optimized irregular grid. Direct modelling of the two-sided noise correlation functions avoids dispersion curve picking on every station pair and minimizes analyst intervention. The method involves station pairs spaced by distances down to a fraction of a wavelength, thereby bringing additional information for tomography. After validating the method on synthetic data, we applied it to a set of long-term continuous waveforms acquired around the geothermal sites at Soultz-sous-Forêts and Rittershoffen (Northern Alsace, France). For networks with limited aperture, we show that taking the azimuthal variations of the noise energy into account has significant impact on the surface wave dispersion maps. We obtained regional phase and group velocity models in the 1-7 s period range, which is sensitive to the structures encompassing the geothermal reservoirs. The ambient noise in our dataset originates from two main directions, the northern Atlantic Ocean and the Mediterranean Sea, and is dominated by the first Rayleigh wave overtone in the 2-5 s period range.

  5. Photoelectron Energy Loss in Al(002) Revisited: Retrieval of the Single Plasmon Loss Energy Distribution by a Fourier Transform Method

    NASA Astrophysics Data System (ADS)

    Santana, Victor Mancir da Silva; David, Denis; de Almeida, Jailton Souza; Godet, Christian

    2018-06-01

    A Fourier transform (FT) algorithm is proposed to retrieve the energy loss function (ELF) of solid surfaces from experimental X-ray photoelectron spectra. The intensity measured over a broad energy range towards lower kinetic energies results from the convolution of four spectral distributions: the photoemission line shape, the multiple plasmon loss probability, the X-ray source line structure and the Gaussian broadening of the photoelectron analyzer. Since the FT of the measured XPS spectrum, including the zero-loss peak and all inelastic scattering mechanisms, is a mathematical function of the respective FTs of the X-ray source, photoemission line shape, multiple plasmon loss function, and Gaussian broadening of the photoelectron analyzer, the proposed algorithm gives straightforward access to the bulk ELF and effective dielectric function of the solid, assuming identical ELFs for intrinsic and extrinsic plasmon excitations. This method is applied to the aluminum single crystal Al(002), where the photoemission line shape has been computed accurately beyond the Doniach-Sunjic approximation using the Mahan-Wertheim-Citrin approach, which takes into account the density of states near the Fermi level; the only adjustable parameters are the singularity index and the broadening energy D (inverse hole lifetime). After correction for surface plasmon excitations, the q-averaged bulk loss function of Al(002) differs from the optical value Im[- 1 / ɛ(E, q = 0)] and is well described by the Lindhard-Mermin dispersion relation. A quality criterion for the inversion algorithm is the capability of observing weak interband transitions close to the zero-loss peak, namely at 0.65 and 1.65 eV in ɛ(E, q), as found in optical spectra and ab initio calculations of aluminum.
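The FT idea can be sketched in its simplest (noise-free) form, sometimes called Fourier-log deconvolution: the measured spectrum is the zero-loss peak convolved with a Poisson series of multiple plasmon losses, so the single-loss function follows from a logarithm in Fourier space. The line shapes and parameters below are hypothetical, and the paper's treatment of the X-ray source structure and analyzer broadening is not reproduced.

```python
import numpy as np

n = 1024
e = np.arange(n) * 0.1                  # energy-loss axis, eV

# Hypothetical single plasmon-loss profile (Lorentzian at 15 eV,
# mean number of losses 0.6)
elf = 1.0 / (1.0 + ((e - 15.0) / 1.5)**2)
elf *= 0.6 / elf.sum()

# Zero-loss peak: Gaussian instrumental broadening
zlp = np.exp(-0.5 * ((e - 5.0) / 0.2)**2)

# Forward model: J = Z * (delta + E + E*E/2! + ...), i.e. FT(J) = FT(Z) exp(FT(E))
FZ, FE = np.fft.fft(zlp), np.fft.fft(elf)
J = np.fft.ifft(FZ * np.exp(FE)).real   # synthetic measured spectrum

# Retrieval: FT(E) = log(FT(J) / FT(Z)), then invert
elf_rec = np.fft.ifft(np.log(np.fft.fft(J) / FZ)).real
```

With measurement noise the division by FT(Z) amplifies high frequencies, which is why practical schemes include a smoothing/broadening step.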

  7. LUMINOSITY FUNCTIONS OF SPITZER-IDENTIFIED PROTOSTARS IN NINE NEARBY MOLECULAR CLOUDS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kryukova, E.; Megeath, S. T.; Allen, T. S.

    2012-08-15

    We identify protostars in Spitzer surveys of nine star-forming (SF) molecular clouds within 1 kpc: Serpens, Perseus, Ophiuchus, Chamaeleon, Lupus, Taurus, Orion, Cep OB3, and Mon R2, which combined host over 700 protostar candidates. These clouds encompass a variety of SF environments, including both low-mass and high-mass SF regions, as well as dense clusters and regions of sparsely distributed star formation. Our diverse cloud sample allows us to compare protostar luminosity functions in these varied environments. We combine near- and mid-infrared photometry from the Two Micron All Sky Survey and Spitzer to create 1-24 μm spectral energy distributions (SEDs). Using protostars from the c2d survey with well-determined bolometric luminosities, we derive a relationship between bolometric luminosity, mid-IR luminosity (integrated from 1-24 μm), and SED slope. Estimates of the bolometric luminosities of the protostar candidates are combined to create luminosity functions for each cloud. Contamination due to edge-on disks, reddened Class II sources, and galaxies is estimated and removed from the luminosity functions. We find that luminosity functions for the high-mass SF clouds (Orion, Mon R2, and Cep OB3) peak near 1 L⊙ and show a tail extending toward luminosities above 100 L⊙. The luminosity functions of the low-mass SF clouds (Serpens, Perseus, Ophiuchus, Taurus, Lupus, and Chamaeleon) do not exhibit a common peak; however, the combined luminosity function of these regions peaks below 1 L⊙. Finally, we examine the luminosity functions as a function of the local surface density of young stellar objects. In the Orion molecular clouds, we find a significant difference between the luminosity functions of protostars in regions of high and low stellar density, the former of which is biased toward more luminous sources. This may be the result of primordial mass segregation, although this interpretation is not unique. We compare our luminosity functions to those predicted by models and find that our observed luminosity functions are best matched by models that invoke competitive accretion, although we do not find strong agreement between the high-mass SF clouds and any of the models.

  8. AS SPECIATION AND EFFECTS OF PH AND PHOSPHATE ON THE MOBILIZATION OF AS IN SOILS FROM A LEAD SMELTING SITE. PUBLISHED IN ADVANCED PHOTON SOURCE ACTIVITY REPORT 2003.

    EPA Science Inventory

    Arsenic in soils from the Asarco lead smelter in East Helena, Montana was characterized by X-ray absorption spectroscopy (XAS). Arsenic oxidation state and mineralogy were analyzed as a function of depth and surface distribution using bulk and microprobe XAS. These results were c...

  9. Positional Distribution of Fatty Acids in Triacylglycerols and Phospholipids from Fillets of Atlantic Salmon (Salmo Salar) Fed Vegetable and Fish Oil Blends.

    PubMed

    Ruiz-Lopez, Noemi; Stubhaug, Ingunn; Ipharraguerre, Ignacio; Rimbach, Gerald; Menoyo, David

    2015-07-10

    The nutritional and functional characteristics of dietary fat are related to the fatty acid (FA) composition and its positional distribution in the triacylglycerol (TAG) fraction. Atlantic salmon is an important source of healthy long-chain omega-3 FA (particularly eicosapentaenoic (EPA) and docosahexaenoic (DHA) acids). However, the impact of lipid sources in salmon feeds on the regiospecificity of FA in the fish TAG remains to be explored. The present study determines the effect of feeding salmon with blends of palm, rapeseed, and fish oil, providing two different EPA + DHA concentrations (high: H-ED 10.3% and low: L-ED 4.6%), on the fillet lipid class composition and the positional distribution of FA in TAG and phospholipids. The regiospecific analysis of fillet TAG showed that around 50% of the EPA and around 80% of the DHA was located in the sn-2 position. The positional distribution of FA in phosphatidylcholine (PC) showed that around 80% of the EPA and around 90% of the DHA were located in the sn-2 position. Fish fed the vegetable-rich diets showed higher EPA in the sn-2 position in PC (77% vs. 83% in the H-ED and L-ED diets, respectively) but similar DHA concentrations. It is concluded that feeding salmon with different EPA + DHA concentrations does not affect their positional distribution in the fillet TAG.

  11. Optimization of Compton Source Performance through Electron Beam Shaping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malyzhenkov, Alexander; Yampolsky, Nikolai

    2016-09-26

    We investigate a novel scheme for significantly increasing the brightness of x-ray light sources based on inverse Compton scattering (ICS) - scattering laser pulses off relativistic electron beams. The brightness of ICS sources is limited by the electron beam quality, since electrons traveling at different angles and/or having different energies produce photons with different energies. The spectral brightness of the source is therefore defined by the shape and size of the 6d electron phase space, as well as by the laser beam parameters. The peak brightness of the ICS source can then be maximized if the electron phase space is transformed in such a way that all electrons scatter x-ray photons of the same frequency in the same direction, arriving at the observer at the same time. We describe the x-ray photon beam quality through the Wigner function (the 6d photon phase space distribution) and derive it for the ICS source when the electron and laser rms matrices are arbitrary.

  12. Towards Noise Tomography and Passive Monitoring Using Distributed Acoustic Sensing

    NASA Astrophysics Data System (ADS)

    Paitz, P.; Fichtner, A.

    2017-12-01

    Distributed Acoustic Sensing (DAS) has the potential to revolutionize the field of seismic data acquisition. Thanks to their cost-effectiveness, fiber-optic cables may complement conventional geophones and seismometers in applications that use large amounts of data, and DAS may therefore serve as an additional tool to investigate the internal structure of the Earth and its changes over time, on scales ranging from hydrocarbon or geothermal reservoirs to the entire globe. Additional potential lies in the large fiber networks already deployed for telecommunication purposes, which could serve as distributed seismic antennas. We investigate theoretically how ambient noise tomography may be used with DAS data. For this we extend the theory of seismic interferometry to the measurement of strain. With numerical 2D finite-difference examples we investigate the impact of source and receiver effects: we study the effect of heterogeneous source distributions and of cable orientation by assessing similarities and differences to the Green's function, and we compare the interferometric waveforms obtained from strain interferometry with the displacement interferometric wave fields obtained with existing methods. Intermediate results show that the interferometric waveforms can be connected to the Green's functions and provide consistent information about the propagation medium. These simulations will be extended to reservoir-scale subsurface structures, and future work will include the application of the theory to real-data examples. The presented research depicts the early stage of a combination of theoretical investigations, numerical simulations and real-world data applications. We will therefore evaluate the potential and shortcomings of DAS in reservoir monitoring and seismology at the current state, with a long-term vision of global seismic tomography utilizing DAS data from existing fiber-optic cable networks.

  13. Ignition probability of polymer-bonded explosives accounting for multiple sources of material stochasticity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, S.; Barua, A.; Zhou, M., E-mail: min.zhou@me.gatech.edu

    2014-05-07

    Accounting for the combined effect of multiple sources of stochasticity in material attributes, we develop an approach that computationally predicts the probability of ignition of polymer-bonded explosives (PBXs) under impact loading. The probabilistic nature of the ignition process is assumed to arise from two sources of stochasticity: random variations in material microstructural morphology, and random fluctuations in grain-binder interfacial bonding strength. The effect of the first source is analyzed with multiple sets of statistically similar microstructures and constant interfacial bonding strength. Subsequently, each of the microstructures in the multiple sets is assigned multiple instantiations of randomly varying grain-binder interfacial strengths to analyze the effect of the second source. Critical hotspot size-temperature states reaching the threshold for ignition are calculated through finite element simulations that explicitly account for microstructure and for bulk and interfacial dissipation, quantifying the time to criticality (t_c) of individual samples and allowing the probability distribution of the time to criticality arising from each source of stochastic variation to be analyzed. Two probability superposition models are considered to combine the effects of the multiple sources of stochasticity: the first is a parallel and series combination model, and the second is a nested probability function model. Results show that the nested Weibull distribution provides an accurate description of the combined ignition probability. The approach developed here represents a general framework for analyzing stochasticity in material behavior that arises out of multiple types of uncertainty associated with the structure, design, synthesis and processing of materials.
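The nested-sampling structure (outer draw per microstructure, inner draw per interfacial-strength instantiation) can be sketched with a Monte Carlo toy model. The distributions and parameter values are illustrative assumptions, not fits to the paper's simulations.

```python
import numpy as np

rng = np.random.default_rng(2)

# Outer source: microstructure-to-microstructure variation, modelled here
# (illustratively) as a lognormal Weibull scale parameter.
n_micro, n_bond = 50, 40
scales = rng.lognormal(mean=np.log(10.0), sigma=0.2, size=n_micro)

# Inner source: interfacial-strength variation, one Weibull draw of the
# time to criticality t_c per instantiation of each microstructure.
shape = 3.0
tc = np.concatenate([s * rng.weibull(shape, size=n_bond) for s in scales])

# Empirical ignition probability at load duration t: P(t_c <= t)
ts = np.linspace(0, 30, 301)
P = np.searchsorted(np.sort(tc), ts, side="right") / tc.size
```

The combined sample mixes Weibull populations with randomly varying scales, which is exactly the situation a nested probability function is meant to describe.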

  14. Development of a circadian light source

    NASA Astrophysics Data System (ADS)

    Nicol, David B.; Ferguson, Ian T.

    2002-11-01

    Solid state lighting presents a new paradigm for lighting - controllability. Because multiple LEDs of different emission wavelengths can be used as the illumination source, certain characteristics of the lighting environment can be manipulated, and the ability to vary the spectral power distribution will provide a new, versatile, general illumination source. New effects beyond the visual may be achieved that are not possible with conventional light sources. Illumination has long been the primary function of lighting, but as the lighting industry has matured, designers have also considered its psychological aspects; for example, choosing a particular lighting distribution or color variation in retail applications. The next step in the evolution of lighting is to consider physiological effects that cause biological changes in a person within the environment. This work presents the development of a source that bears on this area of lighting: a circadian light source that modulates its correlated color temperature to mimic the changes in natural daylight through the day and can thereby cause or control physiological effects in a person it illuminates. The importance of this is seen in the human circadian rhythm's peak response to blue light at ~460 nm, which is also where the primary spectral difference with increasing color temperature appears. The device works by adding blue light to a broadband source or by mixing polychromatic light to mimic the variation of color temperature observed along the Planckian locus on the CIE diagram. The device has several potential applications, including: a tool for researchers in this area, a general illumination lighting technology, and a light therapy device.
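The link between correlated color temperature and blue content can be checked directly from Planck's law: the fraction of visible power in the ~460 nm band grows with blackbody temperature. The band limits and temperatures below are illustrative choices.

```python
import numpy as np

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23

def planck(wl, T):
    """Blackbody spectral radiance at wavelength wl (m) and temperature T (K)."""
    return (2*h*c**2 / wl**5) / np.expm1(h*c / (wl * kB * T))

wl = np.linspace(380e-9, 780e-9, 401)        # visible band

def blue_fraction(T):
    """Share of visible-band power falling in the 440-490 nm 'blue' band."""
    s = planck(wl, T)
    blue = (wl > 440e-9) & (wl < 490e-9)
    return s[blue].sum() / s.sum()

# Warm, neutral, and daylight-like color temperatures
fracs = [blue_fraction(T) for T in (2700, 4000, 6500)]
```

Higher color temperature means a larger blue share, which is why modulating CCT along the Planckian locus modulates the circadian-active ~460 nm content.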

  15. Visualization tool for three-dimensional plasma velocity distributions (ISEE_3D) as a plug-in for SPEDAS

    NASA Astrophysics Data System (ADS)

    Keika, Kunihiro; Miyoshi, Yoshizumi; Machida, Shinobu; Ieda, Akimasa; Seki, Kanako; Hori, Tomoaki; Miyashita, Yukinaga; Shoji, Masafumi; Shinohara, Iku; Angelopoulos, Vassilis; Lewis, Jim W.; Flores, Aaron

    2017-12-01

    This paper introduces ISEE_3D, an interactive visualization tool for three-dimensional plasma velocity distribution functions developed by the Institute for Space-Earth Environmental Research, Nagoya University, Japan. The tool provides a variety of methods to visualize the distribution function of space plasma: scatter, volume, and isosurface modes. It also has a wide range of functions, such as displaying magnetic field vectors and two-dimensional slices of distributions, to facilitate extensive analysis, and it implements the coordinate transformation to magnetic field coordinates. The source code of the tool is written in Interactive Data Language (IDL), a data analysis language widely used in space physics and solar physics. The current version of the tool can be used with data files of the plasma distribution function from the Geotail satellite mission, which are publicly accessible through the Data Archives and Transmission System of the Institute of Space and Astronautical Science (ISAS)/Japan Aerospace Exploration Agency (JAXA). The tool is also available in the Space Physics Environment Data Analysis Software (SPEDAS) to visualize plasma data from the Magnetospheric Multiscale and the Time History of Events and Macroscale Interactions during Substorms missions, and it is planned to be applied to data from other missions, such as Arase (ERG) and the Van Allen Probes, after replacing or adding data-loading plug-ins. This visualization tool helps scientists better understand the dynamics of space plasma, particularly in regions where the magnetohydrodynamic approximation is not valid, for example the Earth's inner magnetosphere, magnetopause, bow shock, and plasma sheet.

  16. Synoptic, Global MHD Model for the Solar Corona

    NASA Astrophysics Data System (ADS)

    Cohen, Ofer; Sokolov, I. V.; Roussev, I. I.; Gombosi, T. I.

    2007-05-01

    The common techniques for mimicking the solar corona heating and the solar wind acceleration in global MHD models are as follows: 1) additional terms in the momentum and energy equations derived from the WKB approximation for the Alfvén wave turbulence; 2) an empirical heat source in the energy equation; 3) a non-uniform distribution of the polytropic index, γ, used in the energy equation. In our model, we choose the latter approach. However, in order to obtain a more realistic distribution of γ, we use the empirical Wang-Sheeley-Arge (WSA) model to constrain the MHD solution. The WSA model provides the distribution of the asymptotic solar wind speed from the potential field approximation; therefore it also provides the distribution of the kinetic energy. Assuming that far from the Sun the total energy is dominated by the energy of the bulk motion, and assuming conservation of the Bernoulli integral, we can trace the total energy along a magnetic field line to the solar surface. On the surface the gravity is known and the kinetic energy is negligible. Therefore, we can obtain the surface distribution of γ as a function of the final speed of the wind originating from each point. By interpolating γ to a spherically uniform value on the source surface, we use this spatial distribution of γ in the energy equation to obtain a self-consistent, steady-state MHD solution for the solar corona. We present model results for different Carrington rotations.
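    The Bernoulli-integral step above can be sketched numerically. Assuming, far from the Sun, total energy ≈ (1/2)v∞² and, at the surface, negligible kinetic energy, conservation gives γ/(γ-1)·p₀/ρ₀ - GM/R = (1/2)v∞², which can be inverted for γ. The surface pressure and density values below are placeholders, not the paper's inputs:

    ```python
    # Sketch: recover a surface polytropic index gamma from the WSA
    # asymptotic wind speed via Bernoulli conservation (illustrative only).
    G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
    M_SUN = 1.989e30     # solar mass [kg]
    R_SUN = 6.957e8      # solar radius [m]

    def surface_gamma(v_inf, p0, rho0):
        """Invert the Bernoulli integral for gamma at the solar surface.

        Far from the Sun the total energy is ~ (1/2) v_inf^2; at the
        surface the kinetic term is negligible, so
            gamma/(gamma-1) * p0/rho0 - G*M/R = (1/2) v_inf^2.
        """
        enthalpy_factor = (0.5 * v_inf**2 + G * M_SUN / R_SUN) * rho0 / p0
        return enthalpy_factor / (enthalpy_factor - 1.0)

    # Example: a 700 km/s fast-wind field line with assumed surface values
    # p0 [Pa] and rho0 [kg/m^3]; gamma comes out close to 1 (near-isothermal).
    gamma = surface_gamma(v_inf=7.0e5, p0=1.0e-2, rho0=2.0e-13)
    ```

    Larger asymptotic speeds map to smaller γ, which is the sense in which the WSA speed map constrains the γ distribution on the surface.
    
    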

  17. Using special functions to model the propagation of airborne diseases

    NASA Astrophysics Data System (ADS)

    Bolaños, Daniela

    2014-06-01

    Some special functions of mathematical physics are used to obtain a mathematical model of the propagation of airborne diseases. In particular, we study the propagation of tuberculosis in closed rooms and model the propagation using the error function and the Bessel function. In the model, infected individuals emit pathogens into the environment, and these infect other individuals who absorb them. The evolution in time of the concentration of pathogens in the environment is computed in terms of error functions. The evolution in time of the number of susceptible individuals is expressed by a differential equation that contains the error function and is solved numerically for different parametric simulations. The evolution in time of the number of infected individuals is plotted for each numerical simulation. On the other hand, the spatial distribution of the pathogen around the source of infection is represented by the Bessel function K0. The spatial and temporal distribution of the number of infected individuals is computed and plotted for several numerical simulations. All computations were made using computer algebra software, specifically Maple. It is expected that the analytical results we obtained will allow the design of treatment rooms and ventilation systems that reduce the risk of the spread of tuberculosis.
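    The two special functions named in the abstract can be evaluated directly with SciPy. The parameter names and values below (diffusivity, removal rate, saturation time) are assumptions for demonstration, not the paper's calibrated values, and the functional forms are generic examples of erf-shaped build-up and K0 spatial decay rather than the paper's exact model:

    ```python
    # Illustrative evaluation of the two special functions in the model:
    # an error-function build-up of pathogen concentration in time, and a
    # K0 Bessel-function decay of concentration with distance from the source.
    import numpy as np
    from scipy.special import erf, k0

    D = 1.0e-3    # assumed effective pathogen diffusivity [m^2/s]
    LAM = 1.0e-4  # assumed pathogen inactivation/removal rate [1/s]
    Q = 1.0       # assumed source emission rate (arbitrary units)

    def concentration_vs_time(t, c_max=1.0, tau=600.0):
        """Saturating, erf-shaped build-up of pathogen concentration."""
        return c_max * erf(np.sqrt(np.asarray(t) / tau))

    def spatial_profile(r):
        """Steady-state 2-D diffusion with decay around a point source:
        C(r) proportional to K0(r * sqrt(LAM / D))."""
        return Q / (2.0 * np.pi * D) * k0(np.asarray(r) * np.sqrt(LAM / D))
    ```

    `concentration_vs_time` rises monotonically toward `c_max`, and `spatial_profile` decays monotonically away from the source, matching the qualitative behavior the abstract describes.
    
    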

  18. Characterization of water-soluble organic aerosol in coastal New England: Implications of variations in size distribution

    NASA Astrophysics Data System (ADS)

    Ziemba, L. D.; Griffin, R. J.; Whitlow, S.; Talbot, R. W.

    2011-12-01

    Size distributions up to 10-micron aerosol diameter (DP) of organic carbon (OC) and water-soluble organic carbon (WSOC) were measured at two sites in coastal New England, slightly inland at Thompson Farm (TF) and offshore at Isles of Shoals (IOS). Significant OC concentrations were measured across the full size distribution at both TF and IOS. The WSOC fraction (WSOC/OC) was largest in the accumulation mode, with values of 0.86 and 0.93, and smallest in the coarse mode, with values of 0.61 and 0.79, at TF and IOS, respectively. Dicarboxylic acids containing up to five carbon atoms (C5) were concentrated in droplet- and accumulation-mode aerosol, with only minor contributions in the coarse mode. C1-C3 monocarboxylic acids were generally near or below detection limits. Results from proton nuclear magnetic resonance (H+-NMR) spectroscopy analyses showed that the organic functional group characterized by protons in the alpha position to an unsaturated carbon atom ([H-C-C=]) was the dominant WSOC functionality at both TF and IOS, constituting 34 and 43% of the carbon-weighted H+-NMR signal, respectively. Size distributions of each H+-NMR-resolved organic functionality are presented. Source apportionment using H+-NMR fingerprints is also presented, and results indicate that nearly all of the WSOC at TF and IOS spectroscopically resembled secondary organic aerosol, regardless of DP.

  19. Integrity of central nervous function in diabetes mellitus assessed by resting state EEG frequency analysis and source localization.

    PubMed

    Frøkjær, Jens B; Graversen, Carina; Brock, Christina; Khodayari-Rostamabad, Ahmad; Olesen, Søren S; Hansen, Tine M; Søfteland, Eirik; Simrén, Magnus; Drewes, Asbjørn M

    2017-02-01

    Diabetes mellitus (DM) is associated with structural and functional changes of the central nervous system. We used electroencephalography (EEG) to assess resting state cortical activity and explored associations to relevant clinical features. Multichannel resting state EEG was recorded in 27 healthy controls and 24 patients with longstanding DM and signs of autonomic dysfunction. The power distribution based on wavelet analysis was summarized into frequency bands with corresponding topographic mapping. Source localization analysis was applied to explore the electrical cortical sources underlying the EEG. Compared to controls, DM patients had an overall decreased EEG power in the delta (1-4 Hz) and gamma (30-45 Hz) bands. Topographic analysis revealed that these changes were confined to the frontal region for the delta band and to central cortical areas for the gamma band. Source localization analysis identified sources with reduced activity in the left postcentral gyrus for the gamma band and in the right superior parietal lobule for the alpha1 (8-10 Hz) band. DM patients with clinical signs of autonomic dysfunction and gastrointestinal symptoms had evidence of altered resting state cortical processing. This may reflect metabolic, vascular or neuronal changes associated with diabetes. Copyright © 2017 Elsevier Inc. All rights reserved.
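    The band-power summary described above can be sketched in a few lines. This uses a Welch periodogram instead of the paper's wavelet analysis (a simplification for brevity), and the sampling rate and synthetic signal are assumptions; only the band limits (delta 1-4 Hz, gamma 30-45 Hz) come from the abstract:

    ```python
    # Sketch: summarize an EEG channel's power spectrum into frequency bands.
    # Welch PSD stands in for the paper's wavelet analysis; data are synthetic.
    import numpy as np
    from scipy.signal import welch

    FS = 256.0  # assumed sampling rate [Hz]
    rng = np.random.default_rng(0)
    eeg = rng.standard_normal(int(60 * FS))  # one minute of synthetic "EEG"

    freqs, psd = welch(eeg, fs=FS, nperseg=1024)

    def band_power(freqs, psd, lo, hi):
        """Integrate the PSD over the band [lo, hi] Hz (rectangle rule)."""
        band = (freqs >= lo) & (freqs <= hi)
        return psd[band].sum() * (freqs[1] - freqs[0])

    delta = band_power(freqs, psd, 1.0, 4.0)    # delta band, 1-4 Hz
    gamma = band_power(freqs, psd, 30.0, 45.0)  # gamma band, 30-45 Hz
    ```

    Group comparisons such as the ones in the study would then be run on these per-band, per-channel power values.
    
    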

  20. Tsunami Simulation Method Assimilating Ocean Bottom Pressure Data Near a Tsunami Source Region

    NASA Astrophysics Data System (ADS)

    Tanioka, Yuichiro

    2018-02-01

    A new method was developed to reproduce the tsunami height distribution in and around the source area, at a given time, from a large number of ocean bottom pressure sensors, without information on the earthquake source. A dense cabled observation network called S-NET, which consists of 150 ocean bottom pressure sensors, was recently installed along a wide portion of the seafloor off Kanto, Tohoku, and Hokkaido in Japan. In the source area, however, ocean bottom pressure sensors cannot directly observe the initial ocean surface displacement; therefore, we developed the new method. The method was tested and functioned well for a synthetic tsunami from a simple rectangular fault, with an ocean bottom pressure sensor network at 10 arc-min (about 20 km) intervals. For a more realistic test case, sensors at 15 arc-min intervals along the north-south direction and 30 arc-min intervals along the east-west direction were used. In this test case, the method also functioned well enough to reproduce the tsunami height field in general. These results indicate that the method could be used for tsunami early warning by estimating the tsunami height field just after a great earthquake, without the need for earthquake source information.
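    The core idea, turning sparse point readings into a gridded tsunami height field with no source model, can be illustrated with a toy sparse-to-grid step. The real method assimilates the pressure data into a tsunami propagation model; the inverse-distance weighting, sensor layout, and height values below are assumptions for illustration only:

    ```python
    # Toy sketch: estimate a tsunami height field on a regular grid from
    # sparse sensor readings via inverse-distance weighting (IDW). This is
    # an illustrative stand-in, not the paper's assimilation scheme.
    import numpy as np

    def idw_field(sensor_xy, sensor_h, grid_x, grid_y, power=2.0):
        """Inverse-distance-weighted height field on a regular grid."""
        gx, gy = np.meshgrid(grid_x, grid_y, indexing="ij")
        field = np.zeros_like(gx)
        weights = np.zeros_like(gx)
        for (sx, sy), h in zip(sensor_xy, sensor_h):
            d = np.hypot(gx - sx, gy - sy)
            w = 1.0 / np.maximum(d, 1e-6) ** power  # clamp to avoid /0 at a sensor
            field += w * h
            weights += w
        return field / weights

    # Four sensors on a coarse lattice (cf. the 10-30 arc-min spacings in
    # the test cases) with assumed observed heights in meters.
    sensors = [(0.0, 0.0), (20.0, 0.0), (0.0, 20.0), (20.0, 20.0)]
    heights = [1.0, 0.2, 0.4, 0.1]
    grid = np.linspace(0.0, 20.0, 21)
    eta = idw_field(sensors, heights, grid, grid)
    ```

    The interpolated field honors each sensor reading exactly at the sensor location and blends smoothly between sensors, which is the minimal property any such reconstruction needs before dynamics are added.
    
    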
