Sample records for distribution functions (CDFs)

  1. Property Values Associated with the Failure of Individual Links in a System with Multiple Weak and Strong Links.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Helton, Jon C.; Brooks, Dusty Marie; Sallaberry, Cedric Jean-Marie.

    Representations are developed and illustrated for the distribution of link property values at the time of link failure in the presence of aleatory uncertainty in link properties. The following topics are considered: (i) defining properties for weak links and strong links, (ii) cumulative distribution functions (CDFs) for link failure time, (iii) integral-based derivation of CDFs for link property at time of link failure, (iv) sampling-based approximation of CDFs for link property at time of link failure, (v) verification of integral-based and sampling-based determinations of CDFs for link property at time of link failure, (vi) distributions of link properties conditional on time of link failure, and (vii) equivalence of two different integral-based derivations of CDFs for link property at time of link failure.
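    The sampling-based approximation in item (iv) reduces to a generic recipe: draw aleatory samples, sort them, and read the CDF off as a step function. A minimal sketch follows; the Weibull failure-time model is a hypothetical placeholder, not the paper's link model.

```python
import numpy as np

def empirical_cdf(samples):
    """Step-function CDF estimate from i.i.d. samples."""
    x = np.sort(np.asarray(samples, dtype=float))
    n = x.size
    def F(t):
        # fraction of samples <= t
        return np.searchsorted(x, t, side="right") / n
    return F

# Hypothetical aleatory link failure times (Weibull is a placeholder model)
rng = np.random.default_rng(0)
fail_times = rng.weibull(2.0, size=10_000)
F = empirical_cdf(fail_times)
```

    By the Glivenko-Cantelli theorem this step function converges uniformly to the true CDF, which is what makes the sampling-based and integral-based determinations comparable in the verification step (v).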

  2. Combined distribution functions: A powerful tool to identify cation coordination geometries in liquid systems

    NASA Astrophysics Data System (ADS)

    Sessa, Francesco; D'Angelo, Paola; Migliorati, Valentina

    2018-01-01

    In this work we have developed an analytical procedure to identify metal ion coordination geometries in liquid media based on the calculation of Combined Distribution Functions (CDFs) starting from Molecular Dynamics (MD) simulations. CDFs provide a fingerprint which can be easily and unambiguously assigned to a reference polyhedron. The CDF analysis has been tested on five systems and has proven to reliably identify the correct geometries of several ion coordination complexes. This tool is simple and general and can be efficiently applied to different MD simulations of liquid systems.
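    The core object here, a joint distribution over a pair of coordination descriptors whose peak is matched against a reference polyhedron, can be sketched as below. The distance/angle model is a synthetic stand-in for real MD trajectory data, not the authors' analysis code.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic stand-ins for MD-derived descriptors of an octahedral-like site:
# metal-ligand distances (Angstrom) and ligand-metal-ligand angles (degrees)
dist = rng.normal(2.10, 0.05, size=20_000)
angle = rng.normal(90.0, 5.0, size=20_000)
# A combined distribution function: joint histogram of the two descriptors
H, d_edges, a_edges = np.histogram2d(dist, angle, bins=(50, 60), density=True)
# The location of the maximum is the fingerprint compared to reference polyhedra
i, j = np.unravel_index(np.argmax(H), H.shape)
peak = (d_edges[i], a_edges[j])
```

    A distance-angle peak near (2.1 Å, 90°), for example, is the signature one would match to an octahedral reference geometry.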

  3. Local structure in anisotropic systems determined by molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Komolkin, Andrei V.; Maliniak, Arnold

    In the present communication we describe the investigation of local structure using a new visualization technique. The approach is based on two-dimensional pair correlation functions derived from a molecular dynamics computer simulation. We have used this method to analyse a trajectory produced in a simulation of a nematic liquid crystal of 4-n-pentyl-4'-cyanobiphenyl (5CB) (Komolkin et al., 1994, J. chem. Phys., 101, 4103). The molecule is assumed to have cylindrical symmetry, and the liquid crystalline phase is treated as uniaxial. The pair correlation functions, or cylindrical distribution functions (CDFs), are calculated in the molecular (m) and laboratory (l) frames, g^m(z_12, d_12) and g^l(Z_12, D_12). Anisotropic molecular organization in the liquid crystal is reflected in the laboratory-frame CDFs. The molecular excluded volume is determined, and the effect of fast motion in the alkyl chain is observed. The intramolecular distributions are included in the CDFs and indicate the size of the motional amplitude in the chain. The absence of long-range order, a feature typical of a nematic liquid crystal, was confirmed.

  4. Margins Associated with Loss of Assured Safety for Systems with Multiple Time-Dependent Failure Modes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Helton, Jon C.; Brooks, Dusty Marie; Sallaberry, Cedric Jean-Marie.

    Representations for margins associated with loss of assured safety (LOAS) for weak link (WL)/strong link (SL) systems involving multiple time-dependent failure modes are developed. The following topics are described: (i) defining properties for WLs and SLs, (ii) background on cumulative distribution functions (CDFs) for link failure time, link property value at link failure, and time at which LOAS occurs, (iii) CDFs for failure time margins defined by (time at which SL system fails) – (time at which WL system fails), (iv) CDFs for SL system property values at LOAS, (v) CDFs for WL/SL property value margins defined by (property value at which SL system fails) – (property value at which WL system fails), and (vi) CDFs for SL property value margins defined by (property value of failing SL at time of SL system failure) – (property value of this SL at time of WL system failure). Included in this presentation is a demonstration of a verification strategy based on defining and approximating the indicated margin results with (i) procedures based on formal integral representations and associated quadrature approximations and (ii) procedures based on algorithms for sampling-based approximations.
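    The failure-time margin CDF in item (iii) can be illustrated with a minimal sampling-based sketch; the Weibull distributions and scale factors below are illustrative assumptions, not the paper's link models.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
# Hypothetical aleatory failure times; scales are placeholders chosen so that
# weak links tend to fail before strong links, as intended in WL/SL designs
t_wl = 1.0 * rng.weibull(1.5, n)   # weak-link system failure times
t_sl = 2.0 * rng.weibull(1.5, n)   # strong-link system failure times
margin = t_sl - t_wl               # failure-time margin from item (iii)
grid = np.linspace(-2.0, 6.0, 81)
cdf = np.array([np.mean(margin <= g) for g in grid])
```

    Negative margin values correspond to the undesired ordering (the SL system failing before the WL system), so the value of the margin CDF at zero is directly interpretable as a probability of loss of assured safety.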

  5. CDF-XL: computing cumulative distribution functions of reaction time data in Excel.

    PubMed

    Houghton, George; Grange, James A

    2011-12-01

    In experimental psychology, central tendencies of reaction time (RT) distributions are used to compare different experimental conditions. This emphasis on the central tendency ignores additional information that may be derived from the RT distribution itself. One method for analysing RT distributions is to construct cumulative distribution frequency plots (CDFs; Ratcliff, Psychological Bulletin 86:446-461, 1979). However, this method is difficult to implement in widely available software, severely restricting its use. In this report, we present an Excel-based program, CDF-XL, for constructing and analysing CDFs, with the aim of making such techniques more readily accessible to researchers, including students (CDF-XL can be downloaded free of charge from the Psychonomic Society's online archive). CDF-XL functions as an Excel workbook and starts from the raw experimental data, organised into three columns (Subject, Condition, and RT) on an Input Data worksheet (a point-and-click utility is provided for achieving this format from a broader data set). No further preprocessing or sorting of the data is required. With one click of a button, CDF-XL will generate two forms of cumulative analysis: (1) "standard" CDFs, based on percentiles of participant RT distributions (by condition), and (2) a related analysis employing the participant means of rank-ordered RT bins. Both analyses involve partitioning the data in similar ways, but the first uses a "median"-type measure at the participant level, while the latter uses the mean. The results are presented in three formats: (i) by participants, suitable for entry into further statistical analysis; (ii) grand means by condition; and (iii) completed CDF plots in Excel charts.

  6. Representation of analysis results involving aleatory and epistemic uncertainty.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Jay Dean; Helton, Jon Craig; Oberkampf, William Louis

    2008-08-01

    Procedures are described for the representation of results in analyses that involve both aleatory uncertainty and epistemic uncertainty, with aleatory uncertainty deriving from an inherent randomness in the behavior of the system under study and epistemic uncertainty deriving from a lack of knowledge about the appropriate values to use for quantities that are assumed to have fixed but poorly known values in the context of a specific study. Aleatory uncertainty is usually represented with probability and leads to cumulative distribution functions (CDFs) or complementary cumulative distribution functions (CCDFs) for analysis results of interest. Several mathematical structures are available for the representation of epistemic uncertainty, including interval analysis, possibility theory, evidence theory and probability theory. In the presence of epistemic uncertainty, there is not a single CDF or CCDF for a given analysis result. Rather, there is a family of CDFs and a corresponding family of CCDFs that derive from epistemic uncertainty and have an uncertainty structure that derives from the particular uncertainty structure (i.e., interval analysis, possibility theory, evidence theory, probability theory) used to represent epistemic uncertainty. Graphical formats for the representation of epistemic uncertainty in families of CDFs and CCDFs are investigated and presented for the indicated characterizations of epistemic uncertainty.
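    The family-of-CDFs picture is usually produced with a two-loop sampling scheme: an outer (epistemic) loop over a fixed-but-poorly-known parameter and an inner (aleatory) loop with that parameter held fixed. The exponential model and the interval for its rate below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(2)
grid = np.linspace(0.0, 5.0, 101)
# Outer loop: epistemic samples of a fixed-but-poorly-known rate (assumed interval)
rates = rng.uniform(0.5, 2.0, size=20)
family = []
for lam in rates:
    # Inner loop: aleatory sampling with the rate held fixed -> one CDF
    draws = np.sort(rng.exponential(1.0 / lam, size=2_000))
    family.append(np.searchsorted(draws, grid, side="right") / draws.size)
family = np.asarray(family)   # one row per epistemic sample: a family of CDFs
```

    Plotting all rows of `family` over `grid` gives the "spaghetti plot" of CDFs whose spread visualizes epistemic uncertainty; under evidence theory or interval analysis the envelope of the family, rather than the individual curves, carries the meaning.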

  7. A Time-Trends Study of the Occurrences and Levels of CDDs, CDFs and Dioxin-Like PCBs in Sediment Cores from 11 Geographically Distributed Lakes in the U.S.

    EPA Science Inventory

    Polychlorinated dibenzo-p-dioxins (CDDs), polychlorinated dibenzofurans (CDFs) and certain non- and mono-ortho substituted polychlorinated biphenyls (cp-PCBs) are a general class of chlorinated aromatic compounds that are considered dioxin-like. Because these chemicals are hi...

  8. Career Development Specialties for the 21st Century. Trends and Issues Alert No. 13.

    ERIC Educational Resources Information Center

    Kerka, Sandra

    The need for career development services is growing. One-stop career centers and school-to-work programs have spurred demand for career development facilitators (CDFs). Working under the supervision of a qualified career counselor, CDFs can serve the following functions: career group facilitator, job search trainer, career resource center…

  9. Genome-Wide Identification and Expression Analysis of the Cation Diffusion Facilitator Gene Family in Turnip Under Diverse Metal Ion Stresses.

    PubMed

    Li, Xiong; Wu, Yuansheng; Li, Boqun; He, Wenqi; Yang, Yonghong; Yang, Yongping

    2018-01-01

    The cation diffusion facilitator (CDF) family is one of the gene families involved in metal ion uptake and transport in plants, but the understanding of the definite roles and mechanisms of most CDF genes remains limited. In the present study, we identified 18 candidate CDF genes from the turnip genome and named them BrrMTP1.1-BrrMTP12. Then, we performed a comparative genomic analysis of the phylogenetic relationships, gene structures, chromosome distributions, conserved domains, and motifs of the turnip CDFs. The constructed phylogenetic tree indicated that the BrrMTPs were divided into seven groups (groups 1, 5, 6, 7, 8, 9, and 12) and formed three major clusters (Zn-CDFs, Fe/Zn-CDFs, and Mn-CDFs). Moreover, the structural characteristics of the BrrMTP members in the same group were similar but varied among groups. To investigate the potential roles of BrrMTPs in turnip, we conducted an expression analysis of all BrrMTP genes under Mg, Zn, Cu, Mn, Fe, Co, Na, and Cd stresses. The results showed that the expression levels of all BrrMTP members were induced by at least one metal ion, indicating that these genes may be related to the tolerance or transport of those metal ions. Based on the roles of different metal ions in plants, we hypothesized that BrrMTP genes are possibly involved in heavy metal accumulation and tolerance to salt stress, apart from their roles in the maintenance of mineral nutrient homeostasis in turnip. These findings are helpful for understanding the roles of MTPs in plants and provide preliminary information for studying the functions of BrrMTP genes.

  10. An improved multilevel Monte Carlo method for estimating probability distribution functions in stochastic oil reservoir simulations

    DOE PAGES

    Lu, Dan; Zhang, Guannan; Webster, Clayton G.; ...

    2016-12-30

    In this paper, we develop an improved multilevel Monte Carlo (MLMC) method for estimating cumulative distribution functions (CDFs) of a quantity of interest, coming from numerical approximation of large-scale stochastic subsurface simulations. Compared with Monte Carlo (MC) methods, which require a significantly large number of high-fidelity model executions to achieve a prescribed accuracy when computing statistical expectations, MLMC methods were originally proposed to significantly reduce the computational cost with the use of multifidelity approximations. The improved performance of the MLMC methods depends strongly on the decay of the variance of the integrand as the level increases. However, the main challenge in estimating CDFs is that the integrand is a discontinuous indicator function whose variance decays slowly. To address this difficult task, we approximate the integrand using a smoothing function that accelerates the decay of the variance. In addition, we design a novel a posteriori optimization strategy to calibrate the smoothing function, so as to balance the computational gain and the approximation error. The combined proposed techniques are integrated into a very general and practical algorithm that can be applied to a wide range of subsurface problems for high-dimensional uncertainty quantification, such as a fine-grid oil reservoir model considered in this effort. The numerical results reveal that with the use of the calibrated smoothing function, the improved MLMC technique significantly reduces the computational complexity compared to the standard MC approach. Finally, we discuss several factors that affect the performance of the MLMC method and provide guidance for effective and efficient usage in practice.
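    The key smoothing step can be illustrated in isolation: replace the discontinuous indicator 1{X ≤ t} with a sigmoid of bandwidth δ, which preserves the expectation up to O(δ) bias while giving a smooth integrand. This is a single-level sketch of the idea, not the paper's calibrated MLMC algorithm, and the standard-normal model output is a placeholder.

```python
import numpy as np

def smoothed_indicator(x, t, delta):
    """Sigmoid surrogate for the indicator 1{x <= t}; recovers it as delta -> 0."""
    return 1.0 / (1.0 + np.exp((x - t) / delta))

rng = np.random.default_rng(4)
x = rng.normal(size=100_000)           # placeholder for a model output sample
t = 0.0
cdf_plain = np.mean(x <= t)            # discontinuous indicator estimate of F(t)
cdf_smooth = np.mean(smoothed_indicator(x, t, delta=0.05))
```

    In an MLMC setting the payoff of the smoothing is that level differences of the sigmoid, unlike those of the indicator, shrink continuously with the discretization error, so their variance decays fast enough for the multilevel telescoping sum to pay off.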

  11. Quantifying Diffuse Contamination: Method and Application to Pb in Soil.

    PubMed

    Fabian, Karl; Reimann, Clemens; de Caritat, Patrice

    2017-06-20

    A new method for detecting and quantifying diffuse contamination at the continental to regional scale is based on the analysis of cumulative distribution functions (CDFs). It uses cumulative probability (CP) plots for spatially representative data sets, preferably containing >1000 determinations. Simulations demonstrate how different types of contamination influence elemental CDFs of different sample media. It is found that diffuse contamination is characterized by a distinctive shift of the low-concentration end of the distribution of the studied element in its CP plot. Diffuse contamination can be detected and quantified via either (1) comparing the distribution of the contaminating element to that of an element with a geochemically comparable behavior but no contamination source (e.g., Pb vs Rb), or (2) comparing the top soil distribution of an element to the distribution of the same element in subsoil samples from the same area, taking soil forming processes into consideration. Both procedures are demonstrated for geochemical soil data sets from Europe, Australia, and the U.S.A. Several different data sets from Europe deliver comparable results at different scales. Diffuse Pb contamination in surface soil is estimated to be <0.5 mg/kg for Australia, 1-3 mg/kg for Europe, and 1-2 mg/kg, or at least <5 mg/kg, for the U.S.A. The analysis presented here also allows recognition of local contamination sources and can be used to efficiently monitor diffuse contamination at the continental to regional scale.

  12. A new method for detecting, quantifying and monitoring diffuse contamination

    NASA Astrophysics Data System (ADS)

    Fabian, Karl; Reimann, Clemens; de Caritat, Patrice

    2017-04-01

    A new method is presented for detecting and quantifying diffuse contamination at the regional to continental scale. It is based on the analysis of cumulative distribution functions (CDFs) in cumulative probability (CP) plots for spatially representative datasets, preferably containing >1000 samples. Simulations demonstrate how different types of contamination influence the elemental CDFs of different sample media. Contrary to common belief, diffuse contamination does not result in exceedingly high element concentrations in regional- to continental-scale datasets. Instead it produces a distinctive shift of concentrations in the background distribution of the studied element, resulting in a steeper data distribution in the CP plot. Via either (1) comparing the distribution of an element in top soil samples to the distribution of the same element in bottom soil samples from the same area, taking soil forming processes into consideration, or (2) comparing the distribution of the contaminating element (e.g., Pb) to that of an element with a geochemically comparable behaviour but no contamination source (e.g., Rb or Ba in the case of Pb), the relative impact of diffuse contamination on the element concentration can be estimated either graphically in the CP plot via a best-fit estimate or quantitatively via a Kolmogorov-Smirnov or Cramér-von Mises test. This is demonstrated using continental-scale geochemical soil datasets from Europe, Australia, and the USA, and a regional-scale dataset from Norway. Several different datasets from Europe deliver comparable results at regional to continental scales. The method is also suitable for monitoring diffuse contamination based on the statistical distribution of repeat datasets at the continental scale in a cost-effective manner.
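    Both comparison strategies reduce to a two-sample test between empirical CDFs. A toy version with a synthetic lognormal background and a constant diffuse addition (all numbers are illustrative, not the paper's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# Synthetic lognormal background, e.g. Pb in bottom soil (mg/kg)
bottom = rng.lognormal(mean=2.0, sigma=0.6, size=2_000)
# Diffuse contamination modelled as a roughly constant addition to top soil
top = bottom + 2.0
# The Kolmogorov-Smirnov statistic quantifies the shift between the two CDFs
D, p = stats.ks_2samp(top, bottom)
```

    Because the addition is constant, the shift is proportionally largest at the low-concentration end of the distribution, which is exactly where the CP plot of the contaminated sample steepens relative to the background.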

  13. Monitoring Sea Surface Processes Using the High Frequency Ambient Sound Field

    DTIC Science & Technology

    2005-09-30

    ...time 2.2 sec). This has been identified as a Southern Resident Killer Whale (Puget Sound). ...In coastal and inland waterways, anthropogenic noise... ITCZ 10°N, 95°W), 3) Bering Sea coastal shelf, 4) Ionian Sea, 5) Carr Inlet, Puget Sound, Washington, and 6) Haro Strait, Washington/BC. The sound... and 9 m/s). Figure 8. A comparison of cumulative distribution functions (CDFs) for rain, drizzle and shipping in Carr Inlet, Puget Sound.

  14. Weak Lensing Study in VOICE Survey II: Shear Bias Calibrations

    NASA Astrophysics Data System (ADS)

    Liu, Dezi; Fu, Liping; Liu, Xiangkun; Radovich, Mario; Wang, Chao; Pan, Chuzhong; Fan, Zuhui; Covone, Giovanni; Vaccari, Mattia; Botticella, Maria Teresa; Capaccioli, Massimo; De Cicco, Demetra; Grado, Aniello; Miller, Lance; Napolitano, Nicola; Paolillo, Maurizio; Pignata, Giuliano

    2018-05-01

    The VST Optical Imaging of the CDFS and ES1 Fields (VOICE) Survey is proposed to obtain deep optical ugri imaging of the CDFS and ES1 fields using the VLT Survey Telescope (VST). At present, the observations for the CDFS field have been completed, and comprise in total about 4.9 deg^2 down to r_AB ~ 26 mag. In the companion paper by Fu et al. (2018), we present the weak lensing shear measurements for r-band images with seeing ≤ 0.9 arcsec. In this paper, we perform image simulations to calibrate possible biases of the measured shear signals. Statistically, the properties of the simulated point spread function (PSF) and galaxies show good agreement with those of the observations. The multiplicative bias is calibrated to reach an accuracy of ~3.0%. We study the bias sensitivities to the undetected faint galaxies and to the neighboring galaxies. We find that undetected galaxies contribute to the multiplicative bias at the level of ~0.3%. Further analysis shows that galaxies with lower signal-to-noise ratio (SNR) are impacted more significantly because the undetected galaxies skew the background noise distribution. For the neighboring galaxies, we find that although most have been rejected in the shape measurement procedure, about one third of them still remain in the final shear sample. They show a larger ellipticity dispersion and contribute ~0.2% of the multiplicative bias. Such a bias can be removed by further eliminating these neighboring galaxies, but the effective number density of the galaxies is then reduced considerably. Therefore efficient methods should be developed for future weak lensing deep surveys.

  15. The 2-10 keV unabsorbed luminosity function of AGN from the LSS, CDFS, and COSMOS surveys

    NASA Astrophysics Data System (ADS)

    Ranalli, P.; Koulouridis, E.; Georgantopoulos, I.; Fotopoulou, S.; Hsu, L.-T.; Salvato, M.; Comastri, A.; Pierre, M.; Cappelluti, N.; Carrera, F. J.; Chiappetti, L.; Clerc, N.; Gilli, R.; Iwasawa, K.; Pacaud, F.; Paltani, S.; Plionis, E.; Vignali, C.

    2016-05-01

    The XMM-Large scale structure (XMM-LSS), XMM-Cosmological evolution survey (XMM-COSMOS), and XMM-Chandra deep field south (XMM-CDFS) surveys are complementary in terms of sky coverage and depth. Together, they form a clean sample with the least possible variance in instrument effective areas and point spread function. Therefore this is one of the best samples available to determine the 2-10 keV luminosity function of active galactic nuclei (AGN) and their evolution. The samples and the relevant corrections for incompleteness are described. A total of 2887 AGN are used to build the LF in the luminosity interval 10^42-10^46 erg s^-1 and in the redshift interval 0.001-4. A new method to correct for absorption by considering the probability distribution for the column density conditioned on the hardness ratio is presented. The binned luminosity function and its evolution are determined with a variant of the Page-Carrera method, which is improved to include corrections for absorption and to account for the full probability distribution of photometric redshifts. Parametric models, namely a double power law with luminosity and density evolution (LADE) or luminosity-dependent density evolution (LDDE), are explored using Bayesian inference. We introduce the Watanabe-Akaike information criterion (WAIC) to compare the models and estimate their predictive power. Our data are best described by the LADE model, as hinted by the WAIC indicator. We also explore the recently proposed 15-parameter extended LDDE model and find that this extension is not supported by our data. The strength of our method is that it provides unabsorbed, non-parametric estimates, credible intervals for luminosity function parameters, and a model choice based on predictive power for future data.
Based on observations obtained with XMM-Newton, an ESA science mission with instruments and contributions directly funded by ESA member states and NASA. Tables with the samples of the posterior probability distributions are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/590/A80

  16. On Orbital Elements of Extrasolar Planetary Candidates and Spectroscopic Binaries

    NASA Technical Reports Server (NTRS)

    Stepinski, T. F.; Black, D. C.

    2001-01-01

    We estimate probability densities of orbital elements, periods, and eccentricities, for the population of extrasolar planetary candidates (EPC) and, separately, for the population of spectroscopic binaries (SB) with solar-type primaries. We construct empirical cumulative distribution functions (CDFs) in order to infer probability distribution functions (PDFs) for orbital periods and eccentricities. We also derive a joint probability density for period-eccentricity pairs in each population. Comparison of the respective distributions reveals that in all cases the EPC and SB populations are, in the context of orbital elements, indistinguishable from each other to a high degree of statistical significance. Probability densities of orbital periods in both populations have a P^-1 functional form, whereas the PDFs of eccentricities can best be characterized as a Gaussian with a mean of about 0.35 and standard deviation of about 0.2, turning into a flat distribution at small values of eccentricity. These remarkable similarities between EPC and SB must be taken into account by theories aimed at explaining the origin of extrasolar planetary candidates, and constitute an important clue as to their ultimate nature.
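    A P^-1 period density has a simple closed-form CDF, F(P) = ln(P/P_min)/ln(P_max/P_min), i.e. it is uniform in log P. A quick numerical check of that correspondence (the period bounds are arbitrary, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(6)
P_min, P_max = 1.0, 1000.0   # arbitrary period bounds (days), for illustration
# p(P) proportional to 1/P on [P_min, P_max] is uniform in log P
P = np.exp(rng.uniform(np.log(P_min), np.log(P_max), size=50_000))

def cdf(p):
    """Closed-form CDF of the 1/P density on [P_min, P_max]."""
    return np.log(p / P_min) / np.log(P_max / P_min)

empirical = np.mean(P <= 100.0)   # compare against cdf(100.0) = 2/3
```

    The agreement between `empirical` and `cdf(100.0)` is the same kind of CDF-versus-model comparison the abstract describes for the EPC and SB samples.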

  17. The 5-10 keV AGN luminosity function at 0.01 < z < 4.0

    NASA Astrophysics Data System (ADS)

    Fotopoulou, S.; Buchner, J.; Georgantopoulos, I.; Hasinger, G.; Salvato, M.; Georgakakis, A.; Cappelluti, N.; Ranalli, P.; Hsu, L. T.; Brusa, M.; Comastri, A.; Miyaji, T.; Nandra, K.; Aird, J.; Paltani, S.

    2016-03-01

    The active galactic nuclei (AGN) X-ray luminosity function traces actively accreting supermassive black holes and is essential for the study of the properties of the AGN population, black hole evolution, and galaxy-black hole coevolution. Up to now, the AGN luminosity function has been estimated several times in soft (0.5-2 keV) and hard X-rays (2-10 keV). AGN selection in these energy ranges often suffers from identification and redshift incompleteness and, at the same time, photoelectric absorption can obscure a significant amount of the X-ray radiation. We estimate the evolution of the luminosity function in the 5-10 keV band, where we effectively avoid the absorbed part of the spectrum, rendering absorption corrections unnecessary up to N_H ~ 10^23 cm^-2. Our dataset is a compilation of wide and deep fields: MAXI, HBSS, XMM-COSMOS, Lockman Hole, XMM-CDFS, AEGIS-XD, Chandra-COSMOS, and Chandra-CDFS. This extensive sample of ~1110 AGN (0.01 < z < 4.0, 41 < log Lx < 46) is 98% redshift complete, with 68% spectroscopic redshifts. For sources lacking a spectroscopic redshift we use a photometric-redshift probability distribution function specifically tuned for AGN, and a flat probability distribution function for sources with no redshift information. We use Bayesian analysis to select the best parametric model, from simple pure luminosity and pure density evolution to more complicated luminosity and density evolution and luminosity-dependent density evolution (LDDE). We estimate the model parameters that best describe our dataset separately for each survey and for the combined sample. We show that, according to Bayesian model selection, the preferred model for our dataset is the LDDE.
Our estimation of the AGN luminosity function does not require any assumption on the AGN absorption and is in good agreement with previous works in the 2-10 keV energy band based on X-ray hardness ratios to model the absorption in AGN up to redshift three. Our sample does not show evidence of a rapid decline of the AGN luminosity function up to redshift four.

  18. Determination of optical properties in heterogeneous turbid media using a cylindrical diffusing fiber

    NASA Astrophysics Data System (ADS)

    Dimofte, Andreea; Finlay, Jarod C.; Liang, Xing; Zhu, Timothy C.

    2012-10-01

    For interstitial photodynamic therapy (PDT), cylindrical diffusing fibers (CDFs) are often used to deliver light. This study examines the feasibility and accuracy of using CDFs to characterize the absorption (μa) and reduced scattering (μs') coefficients of heterogeneous turbid media. Measurements were performed in tissue-simulating phantoms with μa between 0.1 and 1 cm^-1 and μs' between 3 and 10 cm^-1, with CDFs 2 to 4 cm in length. Optical properties were determined by fitting the measured light fluence rate profiles at a fixed distance from the CDF axis using a heterogeneous kernel model in which the cylindrical diffusing fiber is treated as a series of point sources. The resulting optical properties were compared with independent measurements using a point-source method. In a homogeneous medium, we are able to determine the absorption coefficient μa using a value of μs' determined a priori (uniform fit) or μs' obtained by fitting (variable fit), with standard (maximum) deviations of 6% (18%) and 18% (44%), respectively. However, the CDF method is found to be insensitive to variations in μs', thus requiring a complementary method, such as using a point source, for determination of μs'. The error in determining μa decreases in very heterogeneous turbid media because of the local absorption extremes. The data acquisition time for obtaining the one-dimensional optical property distribution is less than 8 s. This method can result in dramatically improved accuracy of light fluence rate calculation for CDFs for prostate PDT in vivo when the same model and geometry are used for forward calculations using the extrapolated tissue optical properties.

  19. Levels of CDDs, CDFs, PCBs and Hg in Rural Soils of US (Project Overview)

    EPA Science Inventory

    No systematic survey of dioxins in soil has been conducted in the US. Soils represent the largest reservoir source of dioxins. As point source emissions are reduced emissions from soils become increasingly important. Understanding the distribution of dioxin levels in soils is ...

  20. CANDELS/GOODS-S, CDFS, and ECDFS: photometric redshifts for normal and X-ray-detected galaxies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsu, Li-Ting; Salvato, Mara; Nandra, Kirpal

    2014-11-20

    We present photometric redshifts and associated probability distributions for all detected sources in the Extended Chandra Deep Field South (ECDFS). This work makes use of the most up-to-date data from the Cosmic Assembly Near-IR Deep Legacy Survey (CANDELS) and the Taiwan ECDFS Near-Infrared Survey (TENIS) in addition to other data. We also revisit multi-wavelength counterparts for published X-ray sources from the 4 Ms CDFS and 250 ks ECDFS surveys, finding reliable counterparts for 1207 out of 1259 sources (∼96%). Data used for photometric redshifts include intermediate-band photometry deblended using the TFIT method, which is used for the first time in this work. Photometric redshifts for X-ray source counterparts are based on a new library of active galactic nuclei/galaxy hybrid templates appropriate for the faint X-ray population in the CDFS. Photometric redshift accuracy for normal galaxies is 0.010 and for X-ray sources is 0.014, and outlier fractions are 4% and 5.2%, respectively. The results within the CANDELS coverage area are even better, as demonstrated both by spectroscopic comparison and by galaxy-pair statistics. Intermediate-band photometry, even if shallow, is valuable when combined with deep broadband photometry. For best accuracy, templates must include emission lines.

  1. Bivariate at-site frequency analysis of simulated flood peak-volume data using copulas

    NASA Astrophysics Data System (ADS)

    Gaál, Ladislav; Viglione, Alberto; Szolgay, Ján; Blöschl, Günter; Bacigál, Tomáš

    2010-05-01

    In frequency analysis of joint hydro-climatological extremes (flood peaks and volumes, low flows and durations, etc.), bivariate distribution functions are usually fitted to the observed data in order to estimate the probability of their occurrence. Bivariate models, however, have a number of limitations; therefore, in the recent past, dependence models based on copulas have gained increased attention for representing the joint probabilities of hydrological characteristics. Regardless of whether standard or copula-based bivariate frequency analysis is carried out, one is generally interested in the extremes corresponding to low probabilities of the fitted joint cumulative distribution functions (CDFs). However, usually there is not enough flood data in the right tail of the empirical CDFs to derive reliable statistical inferences on the behaviour of the extremes. Therefore, different techniques are used to extend the amount of information for the statistical inference, i.e., temporal extension methods that allow for making use of historical data, or spatial extension methods such as regional approaches. In this study, a different approach was adopted, which uses flood data simulated by rainfall-runoff modelling to increase the amount of data in the right tail of the CDFs. In order to generate artificial runoff data (i.e. to simulate flood records of lengths of approximately 10^6 years), a two-step procedure was used. (i) First, the stochastic rainfall generator proposed by Sivapalan et al. (2005) was modified for our purpose. This model is based on the assumption of discrete rainfall events whose arrival times, durations, mean rainfall intensity and within-storm intensity patterns are all random and can be described by specified distributions. The mean storm rainfall intensity is disaggregated further to hourly intensity patterns.
(ii) Secondly, the simulated rainfall data entered a semi-distributed conceptual rainfall-runoff model that consisted of a snow routine, a soil moisture routine and a flow routing routine (Parajka et al., 2007). The applicability of the proposed method was demonstrated on selected sites in Slovakia and Austria. The pairs of simulated flood volumes and flood peaks were analysed in terms of their dependence structure and different families of copulas (Archimedean, extreme value, Gumbel-Hougaard, etc.) were fitted to the observed and simulated data. The question to what extent measured data can be used to find the right copula was discussed. The study is supported by the Austrian Academy of Sciences and the Austrian-Slovak Co-operation in Science and Education "Aktion". Parajka, J., Merz, R., Blöschl, G., 2007: Uncertainty and multiple objective calibration in regional water balance modeling - Case study in 320 Austrian catchments. Hydrological Processes, 21, 435-446. Sivapalan, M., Blöschl, G., Merz, R., Gutknecht, D., 2005: Linking flood frequency to long-term water balance: incorporating effects of seasonality. Water Resources Research, 41, W06012, doi:10.1029/2004WR003439.
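    For the Gumbel-Hougaard family mentioned above, a standard moment-style fit inverts the relation τ = 1 − 1/θ between Kendall's τ and the copula parameter. The bivariate normal sample below is only a placeholder for simulated peak-volume pairs, not the study's rainfall-runoff output.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Placeholder for simulated (flood peak, flood volume) pairs with positive dependence
z = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.6], [0.6, 1.0]], size=5_000)
peaks, volumes = z[:, 0], z[:, 1]
tau, _ = stats.kendalltau(peaks, volumes)
# Gumbel-Hougaard copula: tau = 1 - 1/theta  =>  theta = 1 / (1 - tau)
theta = 1.0 / (1.0 - tau)
```

    Because Kendall's τ is rank-based, the fit is unchanged by the marginal distributions of peaks and volumes, which is exactly the separation of margins from dependence that motivates the copula approach.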

  2. UV-to-IR spectral energy distributions of galaxies at z>1: the impact of Herschel data on dust attenuation and star formation determinations

    NASA Astrophysics Data System (ADS)

    Buat, V.; Heinis, S.; Boquien, M.

    2013-11-01

    We report on our recent work on UV-to-IR SED fitting of a sample of distant (z>1) galaxies observed by Herschel in the CDFS as part of the GOODS-Herschel project. Combining the stellar and dust emission of galaxies proves powerful for constraining their dust attenuation as well as their star formation activity. We focus on the characterisation of dust attenuation and on the uncertainties in the derived star formation rates and stellar masses, as a function of the range of wavelengths sampled by the data and of the assumptions made about the star formation histories.

  3. PILOT LAND TREATMENT OF PAH-CONTAMINATED SEDIMENTS

    EPA Science Inventory

    Hazardous dredged sediments are typically placed in confined disposal facilities (CDFs) which are designed to dewater and contain but not treat sediments. Since navigational dredging in the U.S. is quickly filling many CDFs, these facilities have little available capacity for ne...

  4. LAND TREATMENT OF MILWAUKEE HARBOR SEDIMENTS CONTAMINATED WITH PAHS AND PCBS

    EPA Science Inventory

    Sediments dredged in the maintenance of navigation channels often contain concentrations of PCBs and PAHs that necessitate placement in confined disposal facilities (CDFs). For the Great Lakes especially, the majority of CDFs were constructed in the 1970s or early 1980s and have ...

  5. Reduction of Escherichia Coli using ceramic disk filter decorated by nano-TiO2: A low-cost solution for household water purification.

    PubMed

    He, Yuan; Huang, Guohe; An, Chunjiang; Huang, Jing; Zhang, Peng; Chen, Xiujuan; Xin, Xiaying

    2018-03-01

    Lack of access to safe water is a challenge in many developing countries, especially in rural areas. It is urgent to develop cost-effective water purification technologies to guarantee drinking water safety in these areas. The present study investigated the reduction of Escherichia coli (E. coli) using ceramic disk filters (CDFs) decorated by nano-TiO2. The production of CDFs coated with nano-TiO2 was optimized in terms of rice-husk ratio, rice-husk particle size, heating hold time and nano-TiO2 mass fraction. The results show that the optimum conditions for CDFs with nano-TiO2 coating were a rice-husk ratio of 29.03%, a rice-husk particle size of 0.28 mm, a heating hold time of 1.41 h and a nano-TiO2 mass fraction of 2.21%. Additionally, the morphological and crystal phase characteristics of CDFs were revealed after decoration with nano-TiO2. The effects of temperature, influent E. coli concentration, lamp power and their interactions were explored via factorial analysis. Influent E. coli concentration and lamp power had significant effects on E. coli removal efficiency. This study provided solid theoretical support for understanding the production and bacteria inactivation relevant to CDFs impregnated with nano-TiO2. The results have important implications for finding a safe and cost-effective approach to solve drinking water problems in developing countries. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. X-ray observations of dust obscured galaxies in the Chandra deep field south

    NASA Astrophysics Data System (ADS)

    Corral, A.; Georgantopoulos, I.; Comastri, A.; Ranalli, P.; Akylas, A.; Salvato, M.; Lanzuisi, G.; Vignali, C.; Koutoulidis, L.

    2016-08-01

    We present the properties of X-ray detected dust obscured galaxies (DOGs) in the Chandra deep field south. In recent years, it has been proposed that a significant percentage of the elusive Compton-thick (CT) active galactic nuclei (AGN) could be hidden among DOGs. This type of galaxy is characterized by a very high infrared (IR) to optical flux ratio (f24 μm/fR > 1000), which in the case of CT AGN could be due to the suppression of AGN emission by absorption and its subsequent re-emission in the IR. The most reliable way of confirming the CT nature of an AGN is by X-ray spectroscopy. In a previous work, we presented the properties of X-ray detected DOGs by making use of the deepest X-ray observations available at that time, the 2Ms observations of the Chandra deep fields, the Chandra deep field north (CDF-N), and the Chandra deep field south (CDF-S). In that work, we only found a moderate percentage (<50%) of CT AGN among the DOGs sample. However, we pointed out that the limited photon statistics for most of the sources in the sample did not allow us to strongly constrain this number. In this paper, we further explore the properties of the sample of DOGs in the CDF-S presented in that work by using not only a deeper 6Ms Chandra survey of the CDF-S, but also by combining these data with the 3Ms XMM-Newton survey of the CDF-S. We also take advantage of the great coverage of the CDF-S region from the UV to the far-IR to fit the spectral energy distributions (SEDs) of our sources. Out of the 14 AGN composing our sample, 9 are highly absorbed (NH > 1023 cm-2), whereas 2 look unabsorbed, and the other 3 are only moderately absorbed. Among the highly absorbed AGN, we find that only three could be considered CT AGN. In only one of these three cases, we detect a strong Fe Kα emission line; the source is already classified as a CT AGN with Chandra data in a previous work. Here we confirm its CT nature by combining Chandra and XMM-Newton data. 
For the other two CT candidates, the non-detection of the line may be due to the low number of counts in their X-ray spectra, but their location in the L2-10 keV/L12 μm plot supports their CT classification. Although a higher number of CT sources could be hidden among the X-ray undetected DOGs, our results indicate that DOGs could well be composed of only a fraction of CT AGN plus a number of moderately to highly absorbed AGN, as previously suggested. From our study of the X-ray undetected DOGs in the CDF-S, we estimate a percentage between 13 and 44% of CT AGN among the whole population of DOGs.

  7. Dioxin analysis by gas chromatography-Fourier transform ion cyclotron resonance mass spectrometry (GC-FTICRMS).

    PubMed

    Taguchi, Vince Y; Nieckarz, Robert J; Clement, Ray E; Krolik, Stefan; Williams, Robert

    2010-11-01

    The feasibility of utilizing a gas chromatograph-tandem quadrupole-Fourier transform ion cyclotron resonance mass spectrometer (GC-MS/MS-FTICRMS) to analyze chlorinated-dioxins/furans (CDDs/CDFs) and mixed halogenated dioxins/furans (HDDs/HDFs) was investigated by operating the system in the GC-FTICRMS mode. CDDs/CDFs and mixed HDDs/HDFs could be analyzed at 50,000 to 100,000 resolving power (RP) on the capillary gas chromatographic time scale. Initial experiments demonstrated that 1 pg of 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) and 5 pg of 2-bromo-3,7,8-trichlorodibenzo-p-dioxin (BTrCDD) could be detected. The feasibility of utilizing an FTICRMS for screening of CDDs/CDFs, HDDs/HDFs and related compounds was also investigated by analyzing an extract from vegetation exposed to fall-out from an industrial fire. CDDs/CDFs, chlorinated pyrenes and chlorinated tetracenes could be detected from a Kendrick plot analysis of the ultrahigh resolution mass spectra. Mass accuracies were of the order of 0.5 ppm on standards with external mass calibration and 1 ppm on a sample with internal mass calibration. Copyright © 2010 American Society for Mass Spectrometry. Published by Elsevier Inc. All rights reserved.
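    The Kendrick-plot screening mentioned above rests on a simple mass rescaling. Below is a sketch of the standard CH2-based Kendrick mass defect; the masses in the homologous series are purely hypothetical, and halogen-based Kendrick bases would follow the same pattern:

```python
CH2_EXACT = 14.01565  # exact mass of a CH2 repeat unit

def kendrick_mass(mz):
    """Rescale m/z so that each CH2 repeat contributes exactly 14 mass units."""
    return mz * 14.0 / CH2_EXACT

def kendrick_mass_defect(mz):
    """Difference between nominal (rounded) Kendrick mass and Kendrick mass.
    Members of a homologous CH2 series share the same defect, so they fall
    on a horizontal line in a Kendrick plot."""
    km = kendrick_mass(mz)
    return round(km) - km

# A hypothetical homologous series whose members differ by exactly one CH2 unit
series = [254.98533, 269.00098, 283.01663]
defects = [kendrick_mass_defect(m) for m in series]
```

    Plotting the defect against nominal Kendrick mass groups compound classes into horizontal bands, which is what makes the screening of ultrahigh-resolution spectra tractable.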

  8. Cosmic shear analysis of archival HST/ACS data. I. Comparison of early ACS pure parallel data to the HST/GEMS survey

    NASA Astrophysics Data System (ADS)

    Schrabback, T.; Erben, T.; Simon, P.; Miralles, J.-M.; Schneider, P.; Heymans, C.; Eifler, T.; Fosbury, R. A. E.; Freudling, W.; Hetterscheidt, M.; Hildebrandt, H.; Pirzkal, N.

    2007-06-01

    Context: This is the first paper of a series describing our measurement of weak lensing by large-scale structure, also termed “cosmic shear”, using archival observations from the Advanced Camera for Surveys (ACS) on board the Hubble Space Telescope (HST). Aims: In this work we present results from a pilot study testing the capabilities of the ACS for cosmic shear measurements with early parallel observations and presenting a re-analysis of HST/ACS data from the GEMS survey and the GOODS observations of the Chandra Deep Field South (CDFS). Methods: We describe the data reduction and, in particular, a new correction scheme for the time-dependent ACS point-spread function (PSF) based on observations of stellar fields. This is currently the only technique which takes the full time variation of the PSF between individual ACS exposures into account. We estimate that our PSF correction scheme reduces the systematic contribution to the shear correlation functions due to PSF distortions to <2 × 10^-6 for galaxy fields containing at least 10 stars, which corresponds to ≲5% of the cosmological signal expected on scales of a single ACS field. Results: We perform a number of diagnostic tests indicating that the remaining level of systematics is consistent with zero for the GEMS and GOODS data confirming the success of our PSF correction scheme. For the parallel data we detect a low level of remaining systematics which we interpret to be caused by a lack of sufficient dithering of the data. Combining the shear estimate of the GEMS and GOODS observations using 96 galaxies arcmin^-2 with the photometric redshift catalogue of the GOODS-MUSIC sample, we determine a local single field estimate for the mass power spectrum normalisation σ8,CDFS = 0.52 (+0.11, -0.15) (stat) ± 0.07 (sys) (68% confidence assuming Gaussian cosmic variance) at a fixed matter density Ω_m=0.3 for a ΛCDM cosmology marginalising over the uncertainty of the Hubble parameter and the redshift distribution. 
We interpret this exceptionally low estimate to be due to a local under-density of the foreground structures in the CDFS. Based on observations made with the NASA/ESA Hubble Space Telescope, obtained from the data archives at the Space Telescope European Coordinating Facility and the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.

  9. Journal Article: the National Dioxin Air Monitoring Network (Ndamn): Measurements of CDDs, CDFs and Coplanar PCBs at 15 Rural and 6 National Park Areas of the U.S.: June 1998-December 1999.

    EPA Science Inventory

    The U.S. EPA has established a National Dioxin Air Monitoring Network (NDAMN) to determine the temporal and geographical variability of atmospheric CDDs, CDFs and coplanar PCBs at rural and nonimpacted locations throughout the United States. Currently operating at 32 sampling st...

  10. Spatial Interpolation of Rain-field Dynamic Time-Space Evolution in Hong Kong

    NASA Astrophysics Data System (ADS)

    Liu, P.; Tung, Y. K.

    2017-12-01

    Accurate and reliable measurement and prediction of the spatial and temporal distribution of rain-fields over a wide range of scales are important topics in hydrologic investigations. In this study, a geostatistical treatment of the precipitation field is adopted. To estimate the rainfall intensity over the study domain from the sample values and the spatial structure of the radar data, cumulative distribution functions (CDFs) were constructed at all unsampled locations. Indicator Kriging (IK) was used to estimate the exceedance probabilities for different pre-selected cutoff levels, and a procedure was implemented for interpolating CDF values between the thresholds derived from the IK. Different interpolation schemes for the CDF were proposed and their influence on the performance was also investigated. The performance measures and visual comparison between the observed rain-field and the IK-based estimation suggest that the proposed method provides good estimates of the indicator variables and is capable of producing realistic images.
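    The IK-to-CDF step can be sketched as follows, with made-up cutoff levels and exceedance probabilities standing in for actual indicator-kriging output. The order-relation correction and linear interpolation between thresholds are one common choice among the interpolation schemes the abstract compares:

```python
import numpy as np

def ik_cdf(cutoffs, exceed_probs, z_query):
    """Build a CDF at an unsampled location from indicator-kriging
    exceedance probabilities at pre-selected cutoffs, then interpolate.

    Order-relation correction: clip probabilities to [0, 1] and enforce
    a non-decreasing CDF before interpolating between thresholds."""
    F = 1.0 - np.clip(np.asarray(exceed_probs, float), 0.0, 1.0)
    F = np.maximum.accumulate(F)           # enforce monotonicity
    return np.interp(z_query, cutoffs, F)  # linear between cutoffs

# Hypothetical rainfall-intensity thresholds (mm/h) and IK estimates of P[Z > z_k]
cutoffs = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
exceed = np.array([0.90, 0.62, 0.35, 0.12, 0.05])
grid = np.linspace(0.5, 10.0, 50)
cdf_vals = ik_cdf(cutoffs, exceed, grid)
```

    Beyond the highest and lowest cutoffs, a tail model (e.g. hyperbolic or power) would normally replace the flat extrapolation `np.interp` gives.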

  11. Journal Article: the National Dioxin Air Monitoring Network (Ndamn): Measurements of CDDs, CDFs, and Coplanar PCBs at 18 Rural, 8 National Parks, and 2 Suburban Areas of the U.S.: Results for the Year 2000.

    EPA Science Inventory

    In June 1998, the U.S. EPA established the National Dioxin Air Monitoring Network (NDAMN). The primary goal of NDAMN is to determine the temporal and geographical variability of atmospheric CDDs, CDFs, and coplanar PCBs at rural and nonimpacted locations throughout the United Stat...

  12. Perfluorinated compounds in fish from U.S. urban rivers and the Great Lakes.

    PubMed

    Stahl, Leanne L; Snyder, Blaine D; Olsen, Anthony R; Kincaid, Thomas M; Wathen, John B; McCarty, Harry B

    2014-11-15

    Perfluorinated compounds (PFCs) have recently received scientific and regulatory attention due to their broad environmental distribution, persistence, bioaccumulative potential, and toxicity. Studies suggest that fish consumption may be a source of human exposure to perfluorooctane sulfonate (PFOS) or long-chain perfluorocarboxylic acids. Most PFC fish tissue literature focuses on marine fish and waters outside of the United States (U.S.). To broaden assessments in U.S. fish, a characterization of PFCs in freshwater fish was initiated on a national scale using an unequal probability design during the U.S. Environmental Protection Agency's (EPA's) 2008-2009 National Rivers and Streams Assessment (NRSA) and the Great Lakes Human Health Fish Tissue Study component of the 2010 EPA National Coastal Condition Assessment (NCCA/GL). Fish were collected from randomly selected locations--164 urban river sites and 157 nearshore Great Lake sites. The probability design allowed extrapolation to the sampled population of 17,059 km in urban rivers and a nearshore area of 11,091 km(2) in the Great Lakes. Fillets were analyzed for 13 PFCs using high-performance liquid chromatography tandem mass spectrometry. Results showed that PFOS dominated in frequency of occurrence, followed by three other longer-chain PFCs (perfluorodecanoic acid, perfluoroundecanoic acid, and perfluorododecanoic acid). Maximum PFOS concentrations were 127 and 80 ng/g in urban river samples and Great Lakes samples, respectively. The range of NRSA PFOS detections was similar to literature accounts from targeted riverine fish sampling. NCCA/GL PFOS levels were lower than those reported by other Great Lakes researchers, but generally higher than values in targeted inland lake studies. 
The probability design allowed development of cumulative distribution functions (CDFs) to quantify PFOS concentrations versus the sampled population, and the application of fish consumption advisory guidance to the CDFs resulted in an estimation of the proportion of urban rivers and the Great Lakes that exceed human health protection thresholds. Copyright © 2014. Published by Elsevier B.V.
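    The design-weighted CDF described above can be sketched with a weighted empirical CDF. The concentrations, survey weights, and the advisory threshold below are illustrative values, not the study's data:

```python
import numpy as np

def weighted_cdf(values, weights):
    """Weighted empirical CDF from an unequal-probability sample.
    Returns sorted values and cumulative weight fractions."""
    order = np.argsort(values)
    v = np.asarray(values, float)[order]
    w = np.asarray(weights, float)[order]
    return v, np.cumsum(w) / np.sum(w)

def fraction_exceeding(values, weights, threshold):
    """Estimated fraction of the sampled population above a threshold."""
    v = np.asarray(values, float)
    w = np.asarray(weights, float)
    return np.sum(w[v > threshold]) / np.sum(w)

# Hypothetical PFOS fillet concentrations (ng/g) with survey weights
# (km of sampled river each site represents)
pfos = np.array([2.0, 5.5, 11.0, 40.0, 127.0])
weights = np.array([500.0, 300.0, 120.0, 60.0, 20.0])
v_sorted, cum_frac = weighted_cdf(pfos, weights)
frac_above_advisory = fraction_exceeding(pfos, weights, 10.0)  # hypothetical 10 ng/g advisory level
```

    The weighting is what lets a finite set of sites be extrapolated to statements about the whole sampled river length.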

  13. Probabilistic Modeling of High-Temperature Material Properties of a 5-Harness 0/90 Sylramic Fiber/ CVI-SiC/ MI-SiC Woven Composite

    NASA Technical Reports Server (NTRS)

    Nagpal, Vinod K.; Tong, Michael; Murthy, P. L. N.; Mital, Subodh

    1998-01-01

    An integrated probabilistic approach has been developed to assess composites for high-temperature applications. This approach was used to determine the thermal and mechanical properties, and their probabilistic distributions, of a 5-harness 0/90 Sylramic fiber/CVI-SiC/MI-SiC woven Ceramic Matrix Composite (CMC) at high temperatures. The purpose of developing this approach was to generate quantitative probabilistic information on this CMC to help complete the evaluation of its potential application for the HSCT combustor liner. This approach quantified the influences of uncertainties inherent in the constituent properties, called primitive variables, on selected key response variables of the CMC at 2200 °F. The quantitative information is presented in the form of cumulative distribution functions (CDFs), probability density functions (PDFs), and sensitivities of the response to the primitive variables. Results indicate that the scatter in the response variables was reduced by 30-50% when the uncertainties in the most influential primitive variables were reduced by 50%.
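    A minimal sketch of this kind of probabilistic propagation, using a toy rule-of-mixtures response and invented primitive-variable distributions (not the report's actual micromechanics model): for a near-linear response, halving the primitive-variable scatter roughly halves the response scatter, consistent in spirit with the 30-50% reduction reported.

```python
import numpy as np

rng = np.random.default_rng(7)

def response(E_fiber, E_matrix, vol_frac):
    """Toy rule-of-mixtures stiffness as a stand-in for the CMC response."""
    return vol_frac * E_fiber + (1.0 - vol_frac) * E_matrix

def simulate(scale=1.0, n=20000):
    """Propagate scatter in primitive variables; `scale` shrinks their std devs.
    All means and standard deviations below are hypothetical."""
    E_f = rng.normal(380.0, 30.0 * scale, n)   # fiber modulus (GPa)
    E_m = rng.normal(270.0, 25.0 * scale, n)   # matrix modulus (GPa)
    vf = rng.normal(0.40, 0.03 * scale, n)     # fiber volume fraction
    return response(E_f, E_m, vf)

full = simulate(1.0)
reduced = simulate(0.5)          # uncertainties of primitive variables halved
scatter_reduction = 1.0 - np.std(reduced) / np.std(full)

# Empirical CDF of the response, as would be reported
xs = np.sort(full)
cdf = np.arange(1, len(xs) + 1) / len(xs)
```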

  14. A statistical physics view of pitch fluctuations in the classical music from Bach to Chopin: evidence for scaling.

    PubMed

    Liu, Lu; Wei, Jianrong; Zhang, Huishu; Xin, Jianhong; Huang, Jiping

    2013-01-01

    Because classical music has greatly affected our life and culture over its long history, it has attracted extensive attention from researchers seeking to understand the laws behind it. Based on statistical physics, here we use a different method to investigate classical music, namely, by analyzing cumulative distribution functions (CDFs) and autocorrelation functions of pitch fluctuations in compositions. We analyze 1,876 compositions of five representative classical music composers across 164 years, from Bach, to Mozart, to Beethoven, to Mendelssohn, and to Chopin. We report that the biggest pitch fluctuations of a composer gradually increase as time evolves from Bach's time to Mendelssohn's/Chopin's time. In particular, for the compositions of a composer, the positive and negative tails of a CDF of pitch fluctuations are distributed not only in power laws (with the scale-free property), but also in symmetry (namely, the probability of a treble following a bass and that of a bass following a treble are basically the same for each composer). The power-law exponent decreases as time elapses. Further, we also calculate the autocorrelation function of the pitch fluctuation. The autocorrelation function shows a power-law distribution for each composer. Especially, the power-law exponents vary with the composers, indicating their different levels of long-range correlation of notes. This work not only suggests a way to understand and develop music from a viewpoint of statistical physics, but also enriches the realm of traditional statistical physics by analyzing music.
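    Tail exponents like those described can be estimated directly from a CDF's tail. A sketch using the Hill estimator on synthetic Pareto data (the generator and all parameters are illustrative, not the paper's corpus or method):

```python
import numpy as np

def hill_exponent(data, k):
    """Hill estimator of the power-law tail exponent alpha from the
    k largest observations: alpha = 1 / mean(log(x_(i) / x_(k+1)))."""
    x = np.sort(np.asarray(data, float))[::-1]  # descending order
    tail, x_k = x[:k], x[k]
    return 1.0 / np.mean(np.log(tail / x_k))

rng = np.random.default_rng(0)
# Pareto-distributed synthetic "pitch fluctuations" with true tail exponent 2.5:
# inverse-CDF sampling, since P[X > x] = x^(-alpha) gives X = U^(-1/alpha)
alpha_true = 2.5
sample = (1.0 / rng.random(50000)) ** (1.0 / alpha_true)
alpha_hat = hill_exponent(sample, k=2000)
```

    The choice of k (how deep into the tail to go) is the usual bias-variance trade-off of tail estimation; a Hill plot over a range of k values is the standard diagnostic.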

  15. Analysis of exposure to electromagnetic fields in a healthcare environment: simulation and experimental study.

    PubMed

    de Miguel-Bilbao, Silvia; Martín, Miguel Angel; Del Pozo, Alejandro; Febles, Victor; Hernández, José A; de Aldecoa, José C Fernández; Ramos, Victoria

    2013-11-01

    Recent advances in wireless technologies have led to an increase in wireless instrumentation present in healthcare centers. This paper presents an analytical method for characterizing electric field (E-field) exposure within these environments. The E-field levels of the different wireless communications systems have been measured in two floors of the Canary University Hospital Consortium (CUHC). The electromagnetic (EM) conditions detected with the experimental measures have been estimated using the software EFC-400-Telecommunications (Narda Safety Test Solutions, Sandwiesenstrasse 7, 72793 Pfullingen, Germany). The experimental and simulated results are represented through 2D contour maps, and have been compared with the recommended safety and exposure thresholds. The maximum value obtained is much lower than the 3 V m^-1 limit established in the International Electrotechnical Commission standard for electromedical devices. Results show a high correlation in terms of E-field cumulative distribution function (CDF) between the experimental and simulation results. In general, the CDFs of each pair of experimental and simulated samples follow a lognormal distribution with the same mean.
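    The CDF comparison described can be sketched with a two-sample Kolmogorov-Smirnov test plus a check of log-normality. The field samples below are synthetic stand-ins for the measured and simulated E-field values, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical measured and simulated E-field samples (V/m),
# both lognormal with the same underlying mean, as the abstract reports
measured = rng.lognormal(mean=-2.0, sigma=0.5, size=400)
simulated = rng.lognormal(mean=-2.0, sigma=0.5, size=400)

# Two-sample Kolmogorov-Smirnov test: maximum distance between the two CDFs
ks_stat, p_value = stats.ks_2samp(measured, simulated)

# Lognormality check: the log-values should look Gaussian
log_normality_p = stats.shapiro(np.log(measured)).pvalue
```

    A small KS statistic (and a non-small p-value) indicates the measured and simulated CDFs are statistically compatible, which is the sense of "high correlation" used in the abstract.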

  16. Identification and compensation of friction for a novel two-axis differential micro-feed system

    NASA Astrophysics Data System (ADS)

    Du, Fuxin; Zhang, Mingyang; Wang, Zhaoguo; Yu, Chen; Feng, Xianying; Li, Peigang

    2018-06-01

    Non-linear friction in a conventional drive feed system (CDFS) feeding at low speed is one of the main factors that lead to the complexity of the feed drive. The CDFS will inevitably enter or approach a non-linear creeping work area at extremely low speed. A novel two-axis differential micro-feed system (TDMS) is developed in this paper to overcome the accuracy limitation of the CDFS. A dynamic model of the TDMS is first established. Then, a novel all-component friction parameter identification method (ACFPIM) using a genetic algorithm (GA) to identify the friction parameters of a TDMS is introduced. The friction parameters of the ball screw and linear motion guides are identified independently using the method, ensuring accurate modelling of the friction force at all components. A proportional-derivative feed drive position controller with an observer-based friction compensator is implemented to achieve an accurate trajectory tracking performance. Finally, comparative experiments demonstrate the effectiveness of the TDMS in inhibiting the disadvantageous influence of non-linear friction and the validity of the proposed identification method for the TDMS.
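    The identification step can be illustrated, in greatly simplified form, with a least-squares fit of the classic Coulomb-plus-viscous friction model. The paper's ACFPIM uses a richer friction model and a genetic algorithm; this sketch only shows the parameter-identification idea on invented data:

```python
import numpy as np

def identify_friction(velocity, force):
    """Least-squares fit of the simplified friction model
    F = Fc * sign(v) + B * v
    (Coulomb level Fc and viscous coefficient B)."""
    A = np.column_stack([np.sign(velocity), velocity])
    (Fc, B), *_ = np.linalg.lstsq(A, force, rcond=None)
    return Fc, B

rng = np.random.default_rng(3)
v = np.linspace(-0.2, 0.2, 401)          # commanded velocities (m/s)
v = v[v != 0.0]                          # drop the zero-velocity point
F_true = 8.0 * np.sign(v) + 150.0 * v    # hypothetical Fc = 8 N, B = 150 N·s/m
F_meas = F_true + rng.normal(0.0, 0.2, v.size)  # noisy force measurements
Fc_hat, B_hat = identify_friction(v, F_meas)
```

    A GA-based method like ACFPIM becomes necessary when the model includes non-linear terms (e.g. the Stribeck effect) that a linear least-squares fit cannot handle.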

  17. Probability of Loss of Assured Safety in Systems with Multiple Time-Dependent Failure Modes: Incorporation of Delayed Link Failure in the Presence of Aleatory Uncertainty.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Helton, Jon C.; Brooks, Dusty Marie; Sallaberry, Cedric Jean-Marie.

    Probability of loss of assured safety (PLOAS) is modeled for weak link (WL)/strong link (SL) systems in which one or more WLs or SLs could potentially degrade into a precursor condition to link failure that will be followed by an actual failure after some amount of elapsed time. The following topics are considered: (i) Definition of precursor occurrence time cumulative distribution functions (CDFs) for individual WLs and SLs, (ii) Formal representation of PLOAS with constant delay times, (iii) Approximation and illustration of PLOAS with constant delay times, (iv) Formal representation of PLOAS with aleatory uncertainty in delay times, (v) Approximation and illustration of PLOAS with aleatory uncertainty in delay times, (vi) Formal representation of PLOAS with delay times defined by functions of link properties at occurrence times for failure precursors, (vii) Approximation and illustration of PLOAS with delay times defined by functions of link properties at occurrence times for failure precursors, and (viii) Procedures for the verification of PLOAS calculations for the three indicated definitions of delayed link failure.
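    A minimal sampling-based PLOAS approximation under strong simplifying assumptions: one WL, one SL, exponential precursor times, and constant delays, all with arbitrarily chosen parameters for illustration (the report treats far more general CDFs and delay definitions):

```python
import numpy as np

def ploas_delayed(n, rng, wl_rate=1.0, sl_rate=0.5, wl_delay=0.0, sl_delay=0.0):
    """Sampling-based estimate of the probability of loss of assured safety:
    the chance the strong link fails before the weak link, when each link's
    precursor time is exponential and actual failure follows a constant delay."""
    wl_fail = rng.exponential(1.0 / wl_rate, n) + wl_delay
    sl_fail = rng.exponential(1.0 / sl_rate, n) + sl_delay
    return np.mean(sl_fail < wl_fail)

rng = np.random.default_rng(11)
# With zero delays and exponential precursor times, PLOAS has the closed form
# sl_rate / (sl_rate + wl_rate) = 1/3, which lets the sampler be verified.
p_zero_delay = ploas_delayed(200_000, rng)
# Delaying only the WL failure leaves the SL more chances to fail first,
# so PLOAS must increase.
p_wl_delayed = ploas_delayed(200_000, rng, wl_delay=0.5)
```

    Comparing such sampling estimates against integral-based results is exactly the kind of verification exercise item (viii) of the abstract refers to.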

  18. The 4 Ms CHANDRA Deep Field-South Number Counts Apportioned by Source Class: Pervasive Active Galactic Nuclei and the Ascent of Normal Galaxies

    NASA Technical Reports Server (NTRS)

    Lehmer, Bret D.; Xue, Y. Q.; Brandt, W. N.; Alexander, D. M.; Bauer, F. E.; Brusa, M.; Comastri, A.; Gilli, R.; Hornschemeier, A. E.; Luo, B.; hide

    2012-01-01

    We present 0.5-2 keV, 2-8 keV, 4-8 keV, and 0.5-8 keV (hereafter soft, hard, ultra-hard, and full bands, respectively) cumulative and differential number-count (log N-log S) measurements for the recently completed ≈4 Ms Chandra Deep Field-South (CDF-S) survey, the deepest X-ray survey to date. We implement a new Bayesian approach, which allows reliable calculation of number counts down to flux limits that are factors of ≈1.9-4.3 times fainter than the previously deepest number-count investigations. In the soft band (SB), the most sensitive bandpass in our analysis, the ≈4 Ms CDF-S reaches a maximum source density of ≈27,800 deg^-2. By virtue of the exquisite X-ray and multiwavelength data available in the CDF-S, we are able to measure the number counts from a variety of source populations (active galactic nuclei (AGNs), normal galaxies, and Galactic stars) and subpopulations (as a function of redshift, AGN absorption, luminosity, and galaxy morphology) and test models that describe their evolution. We find that AGNs still dominate the X-ray number counts down to the faintest flux levels for all bands and reach a limiting SB source density of ≈14,900 deg^-2, the highest reliable AGN source density measured at any wavelength. We find that the normal-galaxy counts rise rapidly near the flux limits and, at the limiting SB flux, reach source densities of ≈12,700 deg^-2 and make up 46% ± 5% of the total number counts. The rapid rise of the galaxy counts toward faint fluxes, as well as significant normal-galaxy contributions to the overall number counts, indicates that normal galaxies will overtake AGNs just below the ≈4 Ms SB flux limit and will provide a numerically significant new X-ray source population in future surveys that reach below the ≈4 Ms sensitivity limit. We show that a future ≈10 Ms CDF-S would allow for a significant increase in X-ray-detected sources, with many of the new sources being cosmologically distant (z ≳ 0.6) normal galaxies.

  19. Mimic expert judgement through automated procedure for selecting rainfall events responsible for shallow landslide: A statistical approach to validation

    NASA Astrophysics Data System (ADS)

    Giovanna, Vessia; Luca, Pisano; Carmela, Vennari; Mauro, Rossi; Mario, Parise

    2016-01-01

    This paper proposes an automated method for selecting the rainfall data (duration, D, and cumulated rainfall, E) responsible for shallow landslide initiation. The method mimics an expert identifying D and E from rainfall records through a manual procedure whose rules are applied according to her/his judgement. The comparison between the two methods is based on 300 D-E pairs drawn from temporal rainfall data series recorded in a 30-day time-lag before the landslide occurrence. Statistical tests, employed on D and E samples considered both as paired and as independent values to verify whether they belong to the same population, show that the automated procedure is able to replicate the pairs drawn by expert judgement. Furthermore, a criterion based on cumulative distribution functions (CDFs) is proposed to select, among the six pairs drawn from the coded procedure, the D-E pair most closely related to the expert one for tracing the empirical rainfall threshold line.
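    One way such a CDF-based selection criterion could look (illustrative only; the paper's exact criterion may differ): compare candidate pairs to the expert pair in percentile space, using the empirical CDFs of D and E so both variables contribute on a common 0-1 scale:

```python
import numpy as np

def ecdf_value(sample, x):
    """Empirical CDF of `sample` evaluated at x."""
    s = np.asarray(sample, float)
    return np.mean(s <= x)

def closest_pair(candidates, expert, d_sample, e_sample):
    """Pick, among candidate (D, E) pairs, the one closest to the expert pair.
    Distances are measured between CDF values (percentiles), not raw values,
    so durations (h) and cumulated rainfall (mm) are directly comparable."""
    de, ee = expert
    def dist(pair):
        d, e = pair
        return (abs(ecdf_value(d_sample, d) - ecdf_value(d_sample, de))
                + abs(ecdf_value(e_sample, e) - ecdf_value(e_sample, ee)))
    return min(candidates, key=dist)

# Hypothetical D (h) and E (mm) values from a rain-gauge record
d_sample = [2, 4, 6, 9, 12, 18, 24, 30, 36, 48]
e_sample = [5, 8, 12, 20, 25, 32, 40, 55, 70, 90]
candidates = [(6, 12), (12, 25), (24, 40), (36, 70), (48, 90), (2, 5)]
best = closest_pair(candidates, expert=(12, 26),
                    d_sample=d_sample, e_sample=e_sample)
```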

  20. Black Hole growth and star formation activity in the CDFS

    NASA Astrophysics Data System (ADS)

    Brusa, Marcella; Fiore, Fabrizio

    2010-07-01

    We present a study of the properties of obscured Active Galactic Nuclei (AGN) detected in the CDFS 1Ms observation and their host galaxies. We limited the analysis to the MUSIC area, for which deep K-band observations obtained with ISAAC at the VLT are available, ensuring accurate identifications of the counterparts of the X-ray sources as well as reliable determination of photometric redshifts and galaxy parameters, such as stellar masses and star formation rates. Among other findings, we found that the X-ray selected AGN fraction increases with the stellar mass up to a value of 30% at z>1 and M* > 3×10^11 M⊙.

  1. Gaussian process emulators for quantifying uncertainty in CO2 spreading predictions in heterogeneous media

    NASA Astrophysics Data System (ADS)

    Tian, Liang; Wilkinson, Richard; Yang, Zhibing; Power, Henry; Fagerlund, Fritjof; Niemi, Auli

    2017-08-01

    We explore the use of Gaussian process emulators (GPE) in the numerical simulation of CO2 injection into a deep heterogeneous aquifer. The model domain is a two-dimensional, log-normally distributed stochastic permeability field. We first estimate the cumulative distribution functions (CDFs) of the CO2 breakthrough time and the total CO2 mass using a computationally expensive Monte Carlo (MC) simulation. We then show that we can accurately reproduce these CDF estimates with a GPE, using only a small fraction of the computational cost required by traditional MC simulation. In order to build a GPE that can predict the simulator output from a permeability field consisting of thousands of values, we use a truncated Karhunen-Loève (K-L) expansion of the permeability field, which enables the application of the Bayesian functional regression approach. We perform a cross-validation exercise to give insight into optimizing the experimental design for selected scenarios: we find that a training set of a few hundred members is sufficient and that it is adequate to use as few as 15 K-L components. Our work demonstrates that GPE with truncated K-L expansion can be effectively applied to uncertainty analysis associated with modelling of multiphase flow and transport processes in heterogeneous media.
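    The truncated K-L step can be sketched in 1-D (the study uses a 2-D field; the grid size, correlation length and exponential kernel below are arbitrary choices): the field's covariance matrix is diagonalized and only the leading modes are kept as low-dimensional inputs for the emulator.

```python
import numpy as np

# Karhunen-Loeve expansion of a 1-D log-permeability field with an
# exponential covariance, as a small stand-in for the paper's 2-D field
n = 200
x = np.linspace(0.0, 1.0, n)
corr_len = 0.3
C = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]             # sort eigenvalues descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

k = 15                                        # number of retained K-L modes
captured = eigvals[:k].sum() / eigvals.sum()  # fraction of field variance kept

# Sample one truncated field: Y(x) = sum_i sqrt(lambda_i) * xi_i * phi_i(x)
rng = np.random.default_rng(5)
xi = rng.standard_normal(k)
field = eigvecs[:, :k] @ (np.sqrt(eigvals[:k]) * xi)
```

    The emulator then regresses the simulator output on the k coefficients xi rather than on the full n-dimensional field, which is what makes Gaussian process regression tractable here.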

  2. Effect of Cold-Drawn Fibers on the Self-Reinforcement of PP/LDPE Composites

    NASA Astrophysics Data System (ADS)

    Zhou, Ying-Guo; Su, Bei; Wu, Hai-Hong

    2017-08-01

    In our previous study, a method to fabricate super-ductile polypropylene/low-density polyethylene (PP/LDPE) blends was proposed, and a fiber-shape structure was shown to be formed, presenting necking propagation during tensile testing. In this study, the mechanical properties and thermal behavior of the necking region of tested super-ductile PP/LDPE samples were carefully investigated and further compared with the melt-stretched, untested, and thermo-mechanical-history-removed samples by differential scanning calorimetry and tensile testing. The results suggest that the tested samples have high mechanical properties and are more thermo-mechanically stable than the common PP/LDPE blends and melt-stretched samples. Additionally, to investigate their structure-property relationship, the necking region of the tested samples was further characterized by scanning electron microscopy and hot-stage polarized light microscopy. It can be concluded that the variation of the microstructure can be attributed to the cold-drawn fibers (CDFs), which were more stable thermally, formed during the tensile test. Furthermore, the CDFs were used for the filler in PP/LDPE blends. The experimental results of the different PP/LDPE composites indicate that the CDFs are a good reinforcement candidate and have the ability to improve the mechanical properties of the PP/LDPE blends.

  3. ESO imaging survey: infrared observations of CDF-S and HDF-S

    NASA Astrophysics Data System (ADS)

    Olsen, L. F.; Miralles, J.-M.; da Costa, L.; Benoist, C.; Vandame, B.; Rengelink, R.; Rité, C.; Scodeggio, M.; Slijkhuis, R.; Wicenec, A.; Zaggia, S.

    2006-06-01

    This paper presents infrared data obtained from observations carried out at the ESO 3.5 m New Technology Telescope (NTT) of the Hubble Deep Field South (HDF-S) and the Chandra Deep Field South (CDF-S). These data were taken as part of the ESO Imaging Survey (EIS) program, a public survey conducted by ESO to promote follow-up observations with the VLT. In the HDF-S field the infrared observations cover an area of ~53 square arcmin, encompassing the HST WFPC2 and STIS fields, in the JHKs passbands. The seeing measured in the final stacked images ranges from 0.79 arcsec to 1.22 arcsec and the median limiting magnitudes (AB system, 2'' aperture, 5σ detection limit) are J_AB ≈ 23.0, H_AB ≈ 22.8 and K_AB ≈ 23.0 mag. Less complete data are also available in JKs for the adjacent HST NICMOS field. For CDF-S, the infrared observations cover a total area of ~100 square arcmin, reaching median limiting magnitudes (as defined above) of J_AB ≈ 23.6 and K_AB ≈ 22.7 mag. For one CDF-S field H-band data are also available. This paper describes the observations and presents the results of new reductions carried out entirely through the unsupervised, high-throughput EIS Data Reduction System and its associated EIS/MVM C++-based image processing library developed over the past 5 years by the EIS project and now publicly available. The paper also presents source catalogs extracted from the final co-added images, which are used to evaluate the scientific quality of the survey products, and hence the performance of the software. This is done by comparing the results obtained in the present work with those obtained by other authors from independent data and/or reductions carried out with different software packages and techniques. The final science-grade catalogs, together with the astrometrically and photometrically calibrated co-added images, are available at CDS.

  4. Methods for interpreting change over time in patient-reported outcome measures.

    PubMed

    Wyrwich, K W; Norquist, J M; Lenderking, W R; Acaster, S

    2013-04-01

    Interpretation guidelines are needed for patient-reported outcome (PRO) measures' change scores to evaluate efficacy of an intervention and to communicate PRO results to regulators, patients, physicians, and providers. The 2009 Food and Drug Administration (FDA) Guidance for Industry Patient-Reported Outcomes (PRO) Measures: Use in Medical Product Development to Support Labeling Claims (hereafter referred to as the final FDA PRO Guidance) provides some recommendations for the interpretation of change in PRO scores as evidence of treatment efficacy. This article reviews the evolution of the methods and the terminology used to describe and aid in the communication of meaningful PRO change score thresholds. Anchor- and distribution-based methods have played important roles, and the FDA has recently stressed the importance of cross-sectional patient global assessments of concept as anchor-based methods for estimation of the responder definition, which describes an individual-level treatment benefit. The final FDA PRO Guidance proposes the cumulative distribution function (CDF) of responses as a useful method to depict the effect of treatments across the study population. While CDFs serve an important role, they should not be a replacement for the careful investigation of a PRO's relevant responder definition using anchor-based methods and providing stakeholders with a relevant threshold for the interpretation of change over time.
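    The cumulative-distribution-function display that the final FDA PRO Guidance recommends can be sketched with an empirical CDF of change scores; the data and the +5-point responder threshold below are hypothetical:

```python
import numpy as np

def ecdf(values):
    """Empirical CDF: sorted values with cumulative proportions."""
    x = np.sort(np.asarray(values, dtype=float))
    y = np.arange(1, len(x) + 1) / len(x)
    return x, y

# Hypothetical PRO change scores (follow-up minus baseline) per arm.
rng = np.random.default_rng(0)
treatment = rng.normal(loc=8.0, scale=10.0, size=200)
placebo = rng.normal(loc=3.0, scale=10.0, size=200)

x_t, y_t = ecdf(treatment)
x_p, y_p = ecdf(placebo)

# Fraction of each arm at or above a hypothetical +5-point responder
# definition, read off the empirical CDFs as 1 - F(5).
responders_t = float(np.mean(treatment >= 5.0))
responders_p = float(np.mean(placebo >= 5.0))
```

    Plotting the two CDF curves together shows the treatment effect across the whole score range rather than at a single threshold, which is the point the Guidance makes.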

  5. A framework for semisupervised feature generation and its applications in biomedical literature mining.

    PubMed

    Li, Yanpeng; Hu, Xiaohua; Lin, Hongfei; Yang, Zhihao

    2011-01-01

    Feature representation is essential to machine learning and text mining. In this paper, we present a feature coupling generalization (FCG) framework for generating new features from unlabeled data. It selects two special types of features, i.e., example-distinguishing features (EDFs) and class-distinguishing features (CDFs), from the original feature set, and then generalizes EDFs into higher-level features based on their coupling degrees with CDFs in unlabeled data. The advantage is that EDFs with extreme sparsity in labeled data can be enriched by their co-occurrences with CDFs in unlabeled data, so that the performance of these low-frequency features can be greatly boosted and new information from unlabeled data can be incorporated. We apply this approach to three tasks in biomedical literature mining: gene named entity recognition (NER), protein-protein interaction extraction (PPIE), and text classification (TC) for gene ontology (GO) annotation. New features are generated from over 20 GB of unlabeled PubMed abstracts. The experimental results on BioCreative 2, the AIMED corpus, and the TREC 2005 Genomics Track show that 1) FCG can make good use of the sparse features ignored by supervised learning; 2) it improves the performance of supervised baselines by 7.8 percent, 5.0 percent, and 5.8 percent, respectively, in the three tasks; and 3) our methods achieve F-scores of 89.1 and 64.5 and a normalized utility of 60.1 on the three benchmark data sets.
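    The generalization step described above can be illustrated with a toy sketch: each sparse EDF is re-represented by its co-occurrence profile with the CDFs in unlabeled data. The coupling measure used here (co-occurrence count normalized by EDF frequency) is an illustrative choice, not necessarily the paper's exact definition:

```python
import numpy as np

# Toy sketch of feature coupling generalization: re-represent each
# sparse example-distinguishing feature (EDF) by its co-occurrence
# profile with class-distinguishing features (CDFs), estimated from
# unlabeled binary example-by-feature data.
def coupling_features(unlabeled, edf_cols, cdf_cols):
    U = np.asarray(unlabeled, dtype=float)       # examples x features
    edf = U[:, edf_cols]                         # examples x n_edf
    cdf = U[:, cdf_cols]                         # examples x n_cdf
    co = edf.T @ cdf                             # co-occurrence counts
    freq = edf.sum(axis=0, keepdims=True).T      # EDF frequencies
    return co / np.maximum(freq, 1.0)            # n_edf x n_cdf couplings

U = np.array([[1, 0, 1, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1]])
C = coupling_features(U, edf_cols=[0, 1], cdf_cols=[2, 3])
```

    A rare EDF seen only once in labeled data still gets a dense coupling vector from the (much larger) unlabeled corpus, which is how the framework boosts low-frequency features.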

  6. Coeval Starburst and AGN Activity in the CDFS

    NASA Astrophysics Data System (ADS)

    Brusa, M.; Fiore, F.

    2009-10-01

    Here we present a study of the host-galaxy properties of obscured Active Galactic Nuclei (AGN) detected in the CDFS 1 Ms observation and for which deep K-band observations obtained with ISAAC@VLT are available. The aim of this study is to characterize the host galaxies of obscured AGN in terms of their stellar masses, star formation rates, and specific star formation rates. To this purpose we refined the X-ray/optical association of 179 1 Ms sources in the MUSIC area, using a three-band (optical, K, and IRAC) catalog for the counterpart search, and we derived the rest-frame properties from SED fitting. We found that the hosts of obscured AGN at z>1 are luminous, massive, red galaxies, with significant star formation episodes still ongoing in about 50% of the sample.

  7. Determination of 2,3,7,8-chlorine-substituted dibenzo-p-dioxins and -furans at the part per trillion level in United States beef fat using high-resolution gas chromatography/high-resolution mass spectrometry

    NASA Technical Reports Server (NTRS)

    Ferrario, J.; Byrne, C.; McDaniel, D.; Dupuy, A. Jr; Harless, R.

    1996-01-01

    As part of the U.S. EPA Dioxin Reassessment Program, the 2,3,7,8-chlorine-substituted dibenzo-p-dioxins and furans were measured at part per trillion (ppt) levels in beef fat collected from slaughter facilities in the United States. This is the first statistically designed national survey of these compounds in the U.S. beef supply. Analyte concentrations were determined by high-resolution gas chromatography/high-resolution mass spectrometry, using isotope dilution methodology. Method limits of detection on a whole weight basis were 0.05 ppt for TCDD and 0.10 ppt for TCDF, 0.50 ppt for the pentas (PeCDDs/PeCDFs)/hexas (HxCDDs/HxCDFs)/heptas (HpCDDs/HpCDFs), and 3.00 ppt for the octas (OCDD/OCDF). Method detection and quantitation limits were established on the basis of demonstrated performance criteria utilizing fortified samples rather than by conventional signal-to-noise or variability of response methods. The background subtraction procedures developed for this study minimized the likelihood of false positives and increased the confidence associated with reported values near the detection limits. Mean and median values for each of the 2,3,7,8-Cl-substituted dioxins and furans are reported, along with the supporting information required for their interpretation. The mean toxic equivalence values for the samples are 0.35 ppt (nondetects = 0) and 0.89 ppt (nondetects = 1/2 LOD).
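    The two reported mean toxic-equivalence values (nondetects = 0 versus nondetects = 1/2 LOD) come from the standard toxic-equivalence sum, TEQ = Σ conc_i × TEF_i, with nondetected congeners substituted under each convention. A sketch with illustrative TEF values (not the survey's exact scheme):

```python
# Toxic equivalence (TEQ) under the two nondetect conventions used in
# the survey. The TEF values below are illustrative, not the exact
# scheme applied in the study.
TEFS = {"TCDD": 1.0, "TCDF": 0.1, "OCDD": 0.001}

def teq(measured, lods, nondetect="zero"):
    """measured: congener -> concentration (ppt), or None for a nondetect."""
    total = 0.0
    for congener, conc in measured.items():
        if conc is None:  # nondetect: substitute 0 or half the LOD
            conc = 0.0 if nondetect == "zero" else 0.5 * lods[congener]
        total += conc * TEFS[congener]
    return total

sample = {"TCDD": 0.2, "TCDF": None, "OCDD": 4.0}   # hypothetical sample
lods = {"TCDD": 0.05, "TCDF": 0.10, "OCDD": 3.00}   # LODs from the abstract
lo = teq(sample, lods, nondetect="zero")      # nondetects = 0
hi = teq(sample, lods, nondetect="half_lod")  # nondetects = 1/2 LOD
```

    The gap between the two totals grows with the number of nondetects, which is why both conventions are reported.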

  8. The Stellar to Halo Mass Relation of X-ray Groups at 0.5

    NASA Astrophysics Data System (ADS)

    Patel, Shannon

    2014-08-01

    Combining the deepest X-ray imaging to date in the CDFS with the Carnegie-Spitzer-IMACS (CSI) spectroscopic redshift survey, we study the aggregate stellar mass content in bona fide low-mass group halos (down to M_h~10^13 Msun) at 0.5

  9. VizieR Online Data Catalog: Multiwavelength photometry of CDFS X-ray sources (Brusa+, 2009)

    NASA Astrophysics Data System (ADS)

    Brusa, M.; Fiore, F.; Santini, P.; Grazian, A.; Comastri, A.; Zamorani, G.; Hasinger, G.; Merloni, A.; Civano, F.; Fontana, A.; Mainieri, V.

    2010-03-01

    The co-evolution of host galaxies and the active black holes which reside in their centres is one of the most important topics in modern observational cosmology. Here we present a study of the properties of obscured active galactic nuclei (AGN) detected in the CDFS 1 Ms observation and their host galaxies. We limited the analysis to the MUSIC area, for which deep K-band observations obtained with ISAAC@VLT are available, ensuring accurate identifications of the counterparts of the X-ray sources as well as reliable determination of photometric redshifts and galaxy parameters, such as stellar masses and star formation rates. In particular, we: 1) refined the X-ray/infrared/optical association of 179 sources in the MUSIC area detected in the Chandra observation; 2) studied the observed and rest-frame colors and properties of the host galaxies. (2 data files).

  10. External beam radiotherapy with dose escalation in 1080 prostate cancer patients: definitive outcome and dose impact.

    PubMed

    Garibaldi, Elisabetta; Gabriele, Domenico; Maggio, Angelo; Delmastro, Elena; Garibaldi, Monica; Bresciani, Sara; Ortega, Cinzia; Stasi, Michele; Gabriele, Pietro

    2016-06-01

    The aim of this paper was to report the definitive outcome of prostate cancer patients treated with dose escalation over a period of 12.5 years. From October 1999 to March 2012 we treated 1080 patients affected by prostate cancer, using External Beam Radiotherapy (EBRT). The mean age was 69.2 years. Most of the patients (69%) were staged as cT2, Gleason Score (GS)<7; the mean iPSA was 18 ng/mL; the rate of clinical positive nodes was 1%. Our intention to treat was the following: for low-risk patients 72 Gy; for intermediate-risk patients 75.6 Gy; and for high-very high risk patients 79.2 Gy, in 1.8 Gy/day fractions. From 2008 we changed the fractionation scheme and the doses were the following: for low-risk patients 74 Gy, and for intermediate- and high-very high risk patients 78 Gy, in 2.0 Gy/day fractions. Whole-pelvis irradiation was performed in high-very high risk patients with 43.2-50.4 Gy in 1.8 Gy per day. The mean follow-up was 81 months. For the whole population at 5 and 10 years, the prostate cancer specific overall survival (CSOS) was 96.7% and 92.2% respectively; the clinical disease-free survival (CDFS) 88% and 77%; the biochemical disease-free survival (BDFS) 75% and 58.5%. The 5- and 10-year CSOS was 98% and 96% respectively for low-risk, 96% and 92% for intermediate-risk, and 89% and 82% for high-very high risk patients. In the intermediate- and high-very high risk groups at 5 and 10 years the CSOS was 95.2% and 89.2% respectively, the CDFS 84.5% and 70%, and the BDFS 70% and 51%. In high-very high risk patients at 5 and 10 years the CSOS was respectively 89% and 82%, the CDFS 78% and 61%, and the BDFS 61% and 34%. In the whole patient population the BDFS was related to the dose level (P=0.006), as was the CDFS (P=0.003), with a cut-off of 75.6 Gy. In the subgroup of intermediate plus high-very high risk patients the BDFS and the CDFS were dose-related with a cut-off of 75.6 Gy (P=0.007 and P=0.0018 respectively). Finally, in the subgroup of high-very high risk patients we found that the CSOS, the BDFS and the CDFS were related to the dose level with a cut-off of 77.7 Gy (P=0.017, P=0.006 and P=0.038, respectively). Overall gastrointestinal (GI) acute and late G2 toxicities were respectively 5% and 3.8%; GI acute and late >G3 toxicities were respectively 0.5% and 0.9%; acute and late >G2 genitourinary (GU) toxicities were respectively 10.5% and 2.6%; finally, GU acute and late >G3 toxicities were respectively 0.6% and 0.5%. Dose escalation is not relevant for the outcome in low-risk patients, who can benefit from relatively moderate doses (72-74 Gy). For intermediate- and high-very high risk patients the dose becomes significant at levels above 75.6 Gy; in particular, in high-very high risk patients doses >77.7 Gy correlate with an improved outcome. Patients receiving doses >77.7 Gy presented a higher rate of overall GI and GU toxicity, but the number of grade >2 events remains low. Our results, consolidated by a long follow-up, corroborate the literature data, confirming that 3D-CRT can allow a safe dose escalation without significantly increasing severe toxicity.

  11. PHYTOREMEDIATING DREDGED SEDIMENTS: A BENEFICIAL REUSE PROTOCOL

    EPA Science Inventory

    The Jones Island Confined Disposal Facility (CDF), located in Milwaukee Harbor, Wisconsin, receives dredged materials from normal maintenance of Milwaukee's waterways. Like many CDFs, it faces the dilemma of steady inputs and no feasible alternative for expansion. The Army Corps of...

  12. Assessment of human body influence on exposure measurements of electric field in indoor enclosures.

    PubMed

    de Miguel-Bilbao, Silvia; García, Jorge; Ramos, Victoria; Blas, Juan

    2015-02-01

    Personal exposure meters (PEMs), which measure exposure to electromagnetic fields (EMF), are typically used in epidemiological studies. As is well known, these measurement devices cause a perturbation of real EMF exposure levels due to the presence of the human body in the immediate proximity. This paper aims to model the alteration caused by the body shadow effect (BSE) in motion conditions and in indoor enclosures at the Wi-Fi frequency of 2.4 GHz. For this purpose, simulation techniques based on ray-tracing have been carried out, and their results have been verified experimentally. A good agreement exists between simulation and experimental results in terms of electric field (E-field) levels, taking into account the cumulative distribution function (CDF) of the spatial distribution of amplitude. The Kolmogorov-Smirnov (KS) test provides a P-value greater than 0.05, in fact close to 1. It has been found that the influence of the presence of the human body can be characterized as an angle of shadow that depends on the dimensions of the indoor enclosure. The CDFs show that the E-field levels in indoor conditions follow a lognormal distribution both in the absence of the human body and under the influence of BSE. In conclusion, the perturbation caused by BSE in PEM readings cannot be compensated for by correction factors. Although the mean value is well adjusted, BSE causes changes in the CDF that would require improvements in measurement protocols and in the design of measuring devices to avoid systematic errors. © 2014 Wiley Periodicals, Inc.
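    The lognormality check reported above can be reproduced in outline with a one-sample Kolmogorov-Smirnov test against a fitted lognormal; the E-field amplitudes here are synthetic stand-ins for measured data:

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for measured E-field amplitudes: lognormal draws,
# matching the distribution the abstract reports for indoor levels.
rng = np.random.default_rng(1)
efield = rng.lognormal(mean=0.0, sigma=0.5, size=500)

# Fit a lognormal (location fixed at 0) and run a one-sample
# Kolmogorov-Smirnov test of the data against the fitted CDF;
# a large P-value means no evidence against the lognormal model.
shape, loc, scale = stats.lognorm.fit(efield, floc=0)
result = stats.kstest(efield, "lognorm", args=(shape, loc, scale))
```

    With real PEM data one would run the same test separately with and without the body shadow effect, as the study does.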

  13. U.S. EPA's National Dioxin Air Monitoring Network: Analytical Issues

    EPA Science Inventory

    The U.S. EPA has established a National Dioxin Air Monitoring Network (NDAMN) to determine the temporal and geographical variability of atmospheric chlorinated dibenzo-p-dioxins (CDDs), furans (CDFs), and coplanar polychlorinated biphenyls (PCBs) at rural and non-impacted locatio...

  14. Weak lensing Study in VOICE Survey I: Shear Measurement

    NASA Astrophysics Data System (ADS)

    Fu, Liping; Liu, Dezi; Radovich, Mario; Liu, Xiangkun; Pan, Chuzhong; Fan, Zuhui; Covone, Giovanni; Vaccari, Mattia; Amaro, Valeria; Brescia, Massimo; Capaccioli, Massimo; De Cicco, Demetra; Grado, Aniello; Limatola, Luca; Miller, Lance; Napolitano, Nicola R.; Paolillo, Maurizio; Pignata, Giuliano

    2018-06-01

    The VST Optical Imaging of the CDFS and ES1 Fields (VOICE) Survey is a Guaranteed Time program carried out with the ESO/VST telescope to provide deep optical imaging over two 4 deg2 patches of the sky centred on the CDFS and ES1 pointings. We present the cosmic shear measurement over the 4 deg2 covering the CDFS region in the r-band using LensFit. Each of the four tiles of 1 deg2 has more than one hundred exposures, of which more than 50 passed a series of image-quality selection criteria for weak lensing study. The 5σ limiting magnitude in r-band is 26.1 for point sources, which is ≳1 mag deeper than other weak lensing surveys in the literature (e.g. the Kilo Degree Survey, KiDS, at VST). The photometric redshifts are estimated using the VOICE u, g, r, i bands together with near-infrared VIDEO data in Y, J, H, Ks. The mean redshift of the shear catalogue is 0.87, considering the shear weight. The effective galaxy number density is 16.35 gal/arcmin2, nearly twice that of KiDS. The performance of LensFit on such a deep dataset was calibrated using VOICE-like mock image simulations. Furthermore, we have analyzed the reliability of the shear catalogue by calculating the star-galaxy cross-correlations, the tomographic shear correlations of two redshift bins, and the contamination from blended galaxies. As a further sanity check, we have constrained cosmological parameters by exploring the parameter space with Population Monte Carlo sampling. For a flat ΛCDM model we have obtained Σ_8 = σ_8(Ω_m/0.3)^0.5 = 0.68^{+0.11}_{-0.15}.

  15. Prognostic factors in prostate cancer patients treated by radical external beam radiotherapy.

    PubMed

    Garibaldi, Elisabetta; Gabriele, Domenico; Maggio, Angelo; Delmastro, Elena; Garibaldi, Monica; Russo, Filippo; Bresciani, Sara; Stasi, Michele; Gabriele, Pietro

    2017-09-01

    The aim of this paper was to analyze, retrospectively, in prostate cancer patients treated in our Centre with external beam radiotherapy, the prognostic factors and their impact on the outcome in terms of cancer-specific survival (CSS), biochemical disease-free survival (BDFS), and clinical disease-free survival (CDFS). Between October 1999 and March 2012, 1080 patients were treated with radiotherapy at our Institution: 87% of them were classified as ≤cT2, 83% had a Gleason Score (GS) ≤7, their mean iPSA was 18 ng/mL, and the rate of clinical positive nodes was 1%. The mean follow-up was 81 months. The statistically significant prognostic factors for all groups of patients, at both univariate and multivariate analysis, were the GS and the iPSA. In intermediate- and high- or very-high-risk patients, at multivariate analysis, other prognostic factors for CSS were positive nodes on computed tomography (CT) scan and rectal preparation during the treatment; for BDFS, the prognostic factors were patient risk classification, positive lymph nodes on CT scan, and rectal/bladder preparation; for CDFS, the prognostic factors were the number of positive cores on biopsy (P=0.003), positive lymph nodes on CT scan, and radiotherapy (RT) dose. In the high/very-high-risk patient group, at multivariate analysis, other prognostic factors for CSS were clinical/radiological stage and RT dose; for BDFS they were adjuvant hormone therapy, clinical/radiological stage, and RT dose >77.7 Gy; and for CDFS they were clinical/radiological stage and RT dose >77.7 Gy. The results of this study confirm the prognostic factors described in the recent literature, with the addition of rectal/bladder preparation, generally known for its effect on toxicity but not yet on outcome.

  16. Radio Galaxy Zoo: Machine learning for radio source host galaxy cross-identification

    NASA Astrophysics Data System (ADS)

    Alger, M. J.; Banfield, J. K.; Ong, C. S.; Rudnick, L.; Wong, O. I.; Wolf, C.; Andernach, H.; Norris, R. P.; Shabala, S. S.

    2018-05-01

    We consider the problem of determining the host galaxies of radio sources by cross-identification. This has traditionally been done manually, which will be intractable for wide-area radio surveys like the Evolutionary Map of the Universe (EMU). Automated cross-identification will be critical for these future surveys, and machine learning may provide the tools to develop such methods. We apply a standard approach from computer vision to cross-identification, introducing one possible way of automating this problem, and explore the pros and cons of this approach. We apply our method to the 1.4 GHz Australian Telescope Large Area Survey (ATLAS) observations of the Chandra Deep Field South (CDFS) and the ESO Large Area ISO Survey South 1 (ELAIS-S1) fields by cross-identifying them with the Spitzer Wide-area Infrared Extragalactic (SWIRE) survey. We train our method with two sets of data: expert cross-identifications of CDFS from the initial ATLAS data release and crowdsourced cross-identifications of CDFS from Radio Galaxy Zoo. We found that a simple strategy of cross-identifying a radio component with the nearest galaxy performs comparably to our more complex methods, though our estimated best-case performance is near 100 per cent. ATLAS contains 87 complex radio sources that have been cross-identified by experts, so there are not enough complex examples to learn how to cross-identify them accurately. Much larger datasets are therefore required for training methods like ours. We also show that training our method on Radio Galaxy Zoo cross-identifications gives comparable results to training on expert cross-identifications, demonstrating the value of crowdsourced training data.
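    The nearest-galaxy baseline mentioned above reduces to a nearest-neighbour search over candidate host positions. A minimal sketch with hypothetical coordinates, treating RA/Dec as flat tangent-plane coordinates (real pipelines match on the sphere):

```python
import numpy as np
from scipy.spatial import cKDTree

# Nearest-galaxy cross-identification baseline: each radio component is
# matched to the closest infrared host candidate. Positions below are
# hypothetical (RA, Dec) in degrees, treated as flat coordinates, which
# is adequate only for small separations.
hosts = np.array([[52.10, -27.80],
                  [52.15, -27.85],
                  [52.30, -27.70]])   # candidate host positions
radio = np.array([[52.11, -27.80],
                  [52.29, -27.71]])   # radio component positions

tree = cKDTree(hosts)
dist, idx = tree.query(radio, k=1)    # nearest host for each component
```

    A separation cut on dist would normally be added to reject components with no plausible counterpart.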

  17. Clusters, Groups, and Filaments in the Chandra Deep Field-South up to Redshift 1

    NASA Astrophysics Data System (ADS)

    Dehghan, S.; Johnston-Hollitt, M.

    2014-03-01

    We present a comprehensive structure detection analysis of the 0.3 deg2 area of the MUSYC-ACES field, which covers the Chandra Deep Field-South (CDFS). Using a density-based clustering algorithm on the MUSYC and ACES photometric and spectroscopic catalogs, we find 62 overdense regions up to redshifts of 1, including clusters, groups, and filaments. We also present the detection of a relatively small void of ~10 Mpc2 at z ~ 0.53. All structures are confirmed using the DBSCAN method, including the detection of nine structures previously reported in the literature. We present a catalog of all structures present, including their central position, mean redshift, velocity dispersions, and classification based on their morphological and spectroscopic distributions. In particular, we find 13 galaxy clusters and 6 large groups/small clusters. Comparison of these massive structures with published XMM-Newton imaging (where available) shows that 80% of these structures are associated with diffuse, soft-band (0.4-1 keV) X-ray emission, including 90% of all objects classified as clusters. The presence of soft-band X-ray emission in these massive structures (M_200 >= 4.9 × 10^13 M_⊙) provides a strong independent confirmation of our methodology and classification scheme. In the closest two clusters identified (z < 0.13) high-quality optical imaging from the Deep2c field of the Garching-Bonn Deep Survey reveals the cD galaxies and demonstrates that they sit at the center of the detected X-ray emission. Nearly 60% of the clusters, groups, and filaments are detected in the known enhanced density regions of the CDFS at z ~= 0.13, 0.52, 0.68, and 0.73. Additionally, all of the clusters, bar the most distant, are found in these overdense redshift regions.
Many of the clusters and groups exhibit signs of ongoing formation seen in their velocity distributions, position within the detected cosmic web, and in one case through the presence of tidally disrupted central galaxies exhibiting trails of stars. These results all provide strong support for hierarchical structure formation up to redshifts of 1.
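    The density-based detection step can be illustrated with DBSCAN on toy (RA, Dec, z) positions: two compact overdensities plus scattered field galaxies. Coordinates and clustering parameters are illustrative only, not the study's actual configuration:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Toy structure detection: two tight "groups" of galaxies plus a few
# isolated field galaxies in (RA, Dec, z)-like coordinates. Values and
# DBSCAN parameters are illustrative, chosen so the two overdensities
# are well separated from the sparse field.
rng = np.random.default_rng(2)
group1 = rng.normal([53.0, -27.8, 0.52], 0.01, size=(30, 3))
group2 = rng.normal([53.2, -27.6, 0.68], 0.01, size=(25, 3))
field = rng.uniform([52.8, -28.0, 0.3], [53.4, -27.4, 1.0], size=(10, 3))
points = np.vstack([group1, group2, field])

labels = DBSCAN(eps=0.05, min_samples=5).fit_predict(points)
n_structures = len(set(labels) - {-1})   # clusters found (-1 = noise)
```

    In practice one would work in physical (comoving) coordinates and tune eps and min_samples against known structures, as the authors do by recovering previously reported detections.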

  18. Citation analytics: Data exploration and comparative analyses of CiteScores of Open Access and Subscription-Based publications indexed in Scopus (2014-2016).

    PubMed

    Atayero, Aderemi A; Popoola, Segun I; Egeonu, Jesse; Oludayo, Olumuyiwa

    2018-08-01

    Citation is one of the important metrics used in measuring the relevance and the impact of research publications. The potentials of citation analytics may be exploited to understand the gains of publishing scholarly peer-reviewed research outputs in either Open Access (OA) sources or Subscription-Based (SB) sources, in a bid to increase citation impact. However, relevant data required for such comparative analysis must be freely accessible for evidence-based findings and conclusions. In this data article, citation scores (CiteScores) of 2542 OA sources and 15,040 SB sources indexed in Scopus from 2014 to 2016 were presented and analyzed based on a set of five inclusion criteria. A robust dataset, which contains the CiteScores of the OA and SB publication sources included, is attached as supplementary material to this data article to facilitate further reuse. Descriptive statistics and frequency distributions of OA CiteScores and SB CiteScores are presented in tables. Boxplot representations and scatter plots are provided to show the statistical distributions of OA CiteScores and SB CiteScores across the three sub-categories (Book Series, Journal, and Trade Journal). Correlation coefficient and p-value matrices are made available within the data article. In addition, Probability Density Functions (PDFs) and Cumulative Distribution Functions (CDFs) of OA CiteScores and SB CiteScores are computed and the results are presented using tables and graphs. Furthermore, Analysis of Variance (ANOVA) and multiple comparison post-hoc tests are conducted to understand the statistical difference (and its significance, if any) in the citation impact of OA publication sources and SB publication sources based on CiteScore.
In the long run, the data provided in this article will help policy makers and researchers in Higher Education Institutions (HEIs) to identify the appropriate publication source type and category for dissemination of scholarly research findings with maximum citation impact.
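    The ANOVA comparison described above can be sketched with scipy's one-way ANOVA on two groups of CiteScores; the samples below are synthetic, not the article's data:

```python
import numpy as np
from scipy import stats

# Hypothetical CiteScore samples for Open Access (OA) and
# Subscription-Based (SB) sources; a one-way ANOVA tests whether the
# group means differ. The distributions here are synthetic stand-ins.
rng = np.random.default_rng(3)
oa = rng.normal(1.2, 0.8, size=300).clip(min=0.0)
sb = rng.normal(1.5, 0.9, size=300).clip(min=0.0)

f_stat, p_value = stats.f_oneway(oa, sb)
```

    A small p_value would indicate a statistically significant difference in mean citation impact between the two source types; post-hoc tests then localize the difference across sub-categories.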

  19. Retrieving topsoil moisture using RADARSAT-2 data, a novel approach applied at the east of the Netherlands

    NASA Astrophysics Data System (ADS)

    Eweys, Omar Ali; Elwan, Abeer A.; Borham, Taha I.

    2017-12-01

    This manuscript proposes an approach for estimating soil moisture content over corn fields using C-band SAR data acquired by the RADARSAT-2 satellite. An image-based approach is employed to remove the vegetation contribution to the satellite signals. In particular, the absolute difference between like- and cross-polarized signals (ADLC) is employed to segment the canopy growth cycle into short stages. Each stage is represented by a Cumulative Distribution Function (CDF) of the like-polarized signals. The CDFs for periods of bare soil and of vegetation cover are compared and the vegetation contribution is quantified. The portion representing the soil contribution (σ°HH,soil) to the satellite signals is used to inversely run the Oh model and the water cloud model, estimating soil moisture, canopy water content, and canopy height respectively. The proposed approach shows satisfactory performance: high coefficients of determination (R²) are found between the field observations and the corresponding retrieved soil moisture, canopy water content, and canopy height (R² = 0.64, 0.97 and 0.98 respectively). Soil moisture retrieval is associated with a root mean square error (RMSE) of 0.03 m3 m-3, while the estimates of canopy water content and canopy height have RMSEs of 0.38 kg m-2 and 0.166 m respectively.
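    The CDF-comparison step, as we read it, amounts to mapping vegetated-period backscatter onto the bare-soil CDF quantile by quantile and reading off the offset as the vegetation contribution; the dB values below are synthetic, and this simple quantile mapping is one plausible rendering of the idea, not the authors' exact procedure:

```python
import numpy as np

# Quantile-by-quantile comparison of two backscatter CDFs: each
# vegetated-period observation is mapped to the bare-soil value at the
# same quantile; the offset is taken as the vegetation contribution.
def quantile_map(vegetated, bare):
    veg_sorted = np.sort(vegetated)
    bare_sorted = np.sort(bare)
    q = np.linspace(0.0, 1.0, len(veg_sorted))
    # Interpolate the bare-soil CDF at the same quantiles.
    return np.interp(q, np.linspace(0.0, 1.0, len(bare_sorted)), bare_sorted)

rng = np.random.default_rng(4)
bare_hh = rng.normal(-10.0, 1.5, size=400)   # sigma0_HH, bare soil (dB)
veg_hh = rng.normal(-7.0, 1.5, size=400)     # sigma0_HH under canopy (dB)

soil_equivalent = quantile_map(veg_hh, bare_hh)        # estimated soil part
veg_contribution = np.sort(veg_hh) - soil_equivalent   # canopy offset (dB)
```

    Subtracting the offset recovers a soil-only backscatter that can then feed the Oh model inversion.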

  20. A STATISTICAL SURVEY OF DIOXIN-LIKE COMPOUNDS IN U.S. BEEF: A PROGRESS REPORT

    EPA Science Inventory

    The USEPA and the USDA completed the first statistically designed survey of the occurrence and concentration of dibenzo-p-dioxins (CDDs), dibenzofurans (CDFs), and coplanar polychlorinated biphenyls (PCBs) in the fat of beef animals raised for human consumption in the United Stat...

  1. EDDINGTON RATIO DISTRIBUTION OF X-RAY-SELECTED BROAD-LINE AGNs AT 1.0 < z < 2.2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suh, Hyewon; Hasinger, Günther; Steinhardt, Charles

    2015-12-20

    We investigate the Eddington ratio distribution of X-ray-selected broad-line active galactic nuclei (AGNs) in the redshift range 1.0 < z < 2.2, where the number density of AGNs peaks. Combining the optical and Subaru/Fiber Multi Object Spectrograph near-infrared spectroscopy, we estimate black hole masses for broad-line AGNs in the Chandra Deep Field South (CDF-S), Extended Chandra Deep Field South (E-CDF-S), and the XMM-Newton Lockman Hole (XMM-LH) surveys. AGNs with similar black hole masses show a broad range of AGN bolometric luminosities, which are calculated from X-ray luminosities, indicating that the accretion rate of black holes is widely distributed. We find a substantial fraction of massive black holes accreting significantly below the Eddington limit at z ≲ 2, in contrast to what is generally found for luminous AGNs at high redshift. Our analysis of observational selection biases indicates that the “AGN cosmic downsizing” phenomenon can be simply explained by the strong evolution of the comoving number density at the bright end of the AGN luminosity function, together with the corresponding selection effects. However, one might need to consider a correlation between the AGN luminosity and the accretion rate of black holes, in which luminous AGNs have higher Eddington ratios than low-luminosity AGNs, in order to understand the relatively small fraction of low-luminosity AGNs with high accretion rates in this epoch. Therefore, the observed downsizing trend could be interpreted as massive black holes with low accretion rates, which are relatively fainter than less-massive black holes with efficient accretion.
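    The Eddington-ratio bookkeeping behind this analysis is a one-line formula: L_Edd ≈ 1.26 × 10^38 (M_BH/M_⊙) erg/s, and the Eddington ratio is L_bol/L_Edd. A sketch with a hypothetical mass and luminosity:

```python
# Eddington ratio: bolometric luminosity relative to the Eddington
# luminosity, which scales linearly with black-hole mass (coefficient
# for ionized hydrogen). The example mass and luminosity are
# hypothetical.
L_EDD_COEFF = 1.26e38  # erg/s per solar mass

def eddington_ratio(l_bol_erg_s, m_bh_solar):
    l_edd = L_EDD_COEFF * m_bh_solar
    return l_bol_erg_s / l_edd

# A 10^8.5 Msun black hole radiating 10^45 erg/s:
lam = eddington_ratio(1.0e45, 10**8.5)
```

    Values of lam well below 1, as here, correspond to the sub-Eddington accretors the abstract reports at z ≲ 2.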

  2. Variability Selected Low-Luminosity Active Galactic Nuclei in the 4 Ms Chandra Deep Field-South

    NASA Technical Reports Server (NTRS)

    Young, M.; Brandt, W. N.; Xue, Y. Q.; Paolillo, M.; Alexander, D. M.; Bauer, F. E.; Lehmer, B. D.; Luo, B.; Shemmer, O.; Schneider, D. P.

    2012-01-01

    The 4 Ms Chandra Deep Field-South (CDF-S) and other deep X-ray surveys have been highly effective at selecting active galactic nuclei (AGN). However, cosmologically distant low-luminosity AGN (LLAGN) have remained a challenge to identify due to significant contribution from the host galaxy. We identify long-term X-ray variability (approx. month-years, observed frame) in 20 of 92 CDF-S galaxies spanning redshifts approx. 0.08-1.02 that do not meet other AGN selection criteria. We show that the observed variability cannot be explained by X-ray binary populations or ultraluminous X-ray sources, so the variability is most likely caused by accretion onto a supermassive black hole. The variable galaxies are not heavily obscured in general, with a stacked effective power-law photon index of Γ_stack approx. 1.93 +/- 0.13, and are therefore likely LLAGN. The LLAGN tend to lie a factor of approx. 6-89 below the extrapolated linear variability-luminosity relation measured for luminous AGN. This may be explained by their lower accretion rates. Variability-independent black-hole mass and accretion-rate estimates for the variable galaxies show that they sample a significantly different black hole mass-accretion-rate space, with masses a factor of 2.4 lower and accretion rates a factor of 22.5 lower than variable luminous AGNs at the same redshift. We find that an empirical model based on a universal broken power-law power spectral density function, where the break frequency depends on SMBH mass and accretion rate, roughly reproduces the shape, but not the normalization, of the variability-luminosity trends measured for variable galaxies and more luminous AGNs.
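    The broken power-law PSD model referenced above can be sketched as a spectrum with slope -1 below the break frequency and -2 above it, continuous at the break; the excess variance over an observing window is then the integral of the PSD across the sampled frequencies. Normalization and break frequency below are illustrative:

```python
import numpy as np

# Broken power-law power spectral density: slope -1 below the break
# frequency and -2 above it, continuous at the break. In the empirical
# model the break depends on SMBH mass and accretion rate; here it is
# an illustrative fixed number.
def psd(f, f_break, norm=1.0):
    f = np.asarray(f, dtype=float)
    low = norm * f**-1.0
    high = norm * f_break * f**-2.0   # continuous at f = f_break
    return np.where(f <= f_break, low, high)

# Excess variance over an observed frequency window: integrate the PSD
# across the sampled frequencies (trapezoidal rule).
freqs = np.logspace(-8, -5, 2000)     # Hz, illustrative window
y = psd(freqs, f_break=1e-6)
sigma2 = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(freqs)))
```

    Longer observing baselines extend the window to lower frequencies, raising the predicted variance; this is the shape of the variability-luminosity trend the model reproduces.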

  3. Feasibility of interstitial diffuse optical tomography using cylindrical diffusing fiber for prostate PDT

    PubMed Central

    Liang, Xing; Wang, Ken Kang-Hsin; Zhu, Timothy C.

    2013-01-01

    Interstitial diffuse optical tomography (DOT) has been used to characterize the spatial distribution of optical properties for prostate photodynamic therapy (PDT) dosimetry. We have developed an interstitial DOT method using cylindrical diffusing fibers (CDFs) as light sources, so that the same light sources can be used for both DOT measurement and PDT treatment. In this novel interstitial CDF-DOT method, absolute light fluence per source strength (in units of 1/cm2) is used to separate the absorption and scattering coefficients. A mathematical phantom and a solid prostate phantom including anomalies with known optical properties were used, respectively, to test the feasibility of reconstructing optical properties using interstitial CDF-DOT. Three-dimensional spatial distributions of the optical properties were reconstructed for both scenarios. Our studies show that the absorption coefficient can be reliably extracted while there is some cross talk between the absorption and scattering properties. Even with the suboptimal reduced scattering coefficients, the reconstructed light fluence rate agreed with the measured values to within ±10%; thus the proposed CDF-DOT allows greatly improved light dosimetry calculation for interstitial PDT. PMID:23629149

  4. Genome-wide exploration of metal tolerance protein (MTP) genes in common wheat (Triticum aestivum): insights into metal homeostasis and biofortification.

    PubMed

    Vatansever, Recep; Filiz, Ertugrul; Eroglu, Seckin

    2017-04-01

    Metal transport process in plants is a determinant of quality and quantity of the harvest. Although it is among the most important of staple crops, knowledge about genes that encode for membrane-bound metal transporters is scarce in wheat. Metal tolerance proteins (MTPs) are involved in trace metal homeostasis at the sub-cellular level, usually by providing metal efflux out of the cytosol. Here, by using various bioinformatics approaches, genes that encode for MTPs in the hexaploid wheat genome (Triticum aestivum, abbreviated as Ta) were identified and characterized. Based on the comparison with known rice MTPs, the wheat genome contained 20 MTP sequences; named as TaMTP1-8A, B and D. All TaMTPs contained a cation diffusion facilitator (CDF) family domain and most members harbored a zinc transporter dimerization domain. Based on motif, phylogeny and alignment analysis, A, B and D genomes of TaMTP3-7 sequences demonstrated higher homology compared to TaMTP1, 2 and 8. With reference to their rice orthologs, TaMTP1s and TaMTP8s belonged to Zn-CDFs, TaMTP2s to Fe/Zn-CDFs and TaMTP3-7s to Mn-CDFs. Upstream regions of TaMTP genes included diverse cis-regulatory motifs, indicating regulation by developmental stage, tissue type and stresses. A scan of the coding sequences of 20 TaMTPs against published miRNAs predicted a total of 14 potential miRNAs, mainly targeting the members of most diverged groups. Expression analysis showed that several TaMTPs were temporally and spatially regulated during the developmental time-course. In grains, MTPs were preferentially expressed in the aleurone layer, which is known as a reservoir for high concentrations of iron and zinc. The work identified and characterized metal tolerance proteins in common wheat and revealed a potential involvement of MTPs in providing a sink for trace element storage in wheat grains.

  5. THE STELLAR MASS–HALO MASS RELATION FOR LOW-MASS X-RAY GROUPS AT 0.5 < z < 1 IN THE CDFS WITH CSI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patel, Shannon G.; Kelson, Daniel D.; Williams, Rik J.

    2015-01-30

    Since z∼1, the stellar mass density locked in low-mass groups and clusters has grown by a factor of ∼8. Here, we make the first statistical measurements of the stellar mass content of low-mass X-ray groups at 0.5

  6. EVIDENCE OF FEED CONTAMINATION DUE TO SAMPLE HANDLING AND PREPARATION DURING A MASS BALANCE STUDY OF DIOXINS IN LACTATING COWS IN BACKGROUND CONDITIONS

    EPA Science Inventory

    In 1997, the United States (US) Environmental Protection Agency (EPA) conducted a mass balance study of polychlorinated dibenzo-p-dioxins (CDDs) and dibenzofurans (CDFs) in lactating cows in background conditions. The field portion of the study occurred at the US Department of A...

  7. NATIONAL AND REGIONAL AIR AND DEPOSITION MODELING OF STATIONARY AND MOBILE SOURCE EMISSIONS OF DIOXINS USING THE RELMAP MODELING SYSTEM

    EPA Science Inventory

    The purpose of this study is to estimate the atmospheric transport, fate and deposition flux of air releases of CDDs and CDFs from known sources within the continental United States using the Regional Lagrangian Model of Air Pollution (RELMAP). RELMAP is a Lagrangian air model th...

  8. A STATISTICAL SURVEY OF DIOXIN-LIKE COMPOUNDS IN ...

    EPA Pesticide Factsheets

    The USEPA and the USDA completed the first statistically designed survey of the occurrence and concentration of dibenzo-p-dioxins (CDDs), dibenzofurans (CDFs), and coplanar polychlorinated biphenyls (PCBs) in the fat of beef animals raised for human consumption in the United States. Back fat was sampled from 63 carcasses at federally inspected slaughter establishments nationwide. The sample design called for sampling beef animal classes in proportion to national annual slaughter statistics. All samples were analyzed using a modification of EPA method 1613, using isotope dilution, High Resolution GC/MS to determine the rate of occurrence of 2,3,7,8-substituted CDDs/CDFs/PCBs. The method detection limits ranged from 0.05 ng/kg for TCDD to 3 ng/kg for OCDD. The results of this survey showed a mean concentration (reported as I-TEQ, lipid adjusted) in U.S. beef animals of 0.35 ng/kg and 0.89 ng/kg for CDD/CDF TEQs when either non-detects are treated as 0 value or assigned a value of 1/2 the detection limit, respectively, and 0.51 ng/kg for coplanar PCB TEQs at both non-detect equal 0 and 1/2 detection limit.

  9. Obscuration-dependent Evolution of Active Galactic Nuclei

    NASA Astrophysics Data System (ADS)

    Buchner, Johannes; Georgakakis, Antonis; Nandra, Kirpal; Brightman, Murray; Menzel, Marie-Luise; Liu, Zhu; Hsu, Li-Ting; Salvato, Mara; Rangel, Cyprian; Aird, James; Merloni, Andrea; Ross, Nicholas

    2015-04-01

    We aim to constrain the evolution of active galactic nuclei (AGNs) as a function of obscuration using an X-ray-selected sample of ~2000 AGNs from a multi-tiered survey including the CDFS, AEGIS-XD, COSMOS, and XMM-XXL fields. The spectra of individual X-ray sources are analyzed using a Bayesian methodology with a physically realistic model to infer the posterior distribution of the hydrogen column density and intrinsic X-ray luminosity. We develop a novel non-parametric method that allows us to robustly infer the distribution of the AGN population in X-ray luminosity, redshift, and obscuring column density, relying only on minimal smoothness assumptions. Our analysis properly incorporates uncertainties from low count spectra, photometric redshift measurements, association incompleteness, and the limited sample size. We find that obscured AGNs with N_H > 10^22 cm^-2 account for 77 (+4/-5)% of the number density and luminosity density of the accreting supermassive black hole population with L_X > 10^43 erg s^-1, averaged over cosmic time. Compton-thick AGNs account for approximately half the number and luminosity density of the obscured population, and 38 (+8/-7)% of the total. We also find evidence that the evolution is obscuration dependent, with the strongest evolution around N_H ≈ 10^23 cm^-2. We highlight this by measuring the obscured fraction in Compton-thin AGNs, which increases toward z ~ 3, where it is 25% higher than the local value. In contrast, the fraction of Compton-thick AGNs is consistent with being constant at ≈35%, independent of redshift and accretion luminosity. We discuss our findings in the context of existing models and conclude that the observed evolution is, to first order, a side effect of anti-hierarchical growth.

  10. Journal Article: Average Method Blank Quantities of Dioxin-Like Congeners and Their Relationship to the Detection Limits of the U.S. EPA's National Dioxin Air Monitoring Network (Ndamn)

    EPA Science Inventory

    The U.S. EPA established a National Dioxin Air Monitoring Network (NDAMN) to determine the temporal and geographical variability of atmospheric CDDs, CDFs and coplanar PCBs throughout the United States. Currently operating at 33 stations, NDAMN has, as one of its tasks, the dete...

  11. The XMM deep survey in the CDF-S. X. X-ray variability of bright sources

    NASA Astrophysics Data System (ADS)

    Falocco, S.; Paolillo, M.; Comastri, A.; Carrera, F. J.; Ranalli, P.; Iwasawa, K.; Georgantopoulos, I.; Vignali, C.; Gilli, R.

    2017-12-01

    Aims: We aim to study the variability properties of bright hard X-ray selected active galactic nuclei (AGN) in the redshift range between 0.3 and 1.6 detected in the Chandra Deep Field South (XMM-CDFS) by a long (~3 Ms) XMM observation. Methods: Taking advantage of the good count statistics in the XMM CDFS, we search for flux and spectral variability using the hardness ratio (HR) techniques. We also investigate the spectral variability of different spectral components (photon index of the power law, column density of the local absorber, and reflection intensity). The spectra were merged in six epochs (defined as adjacent observations) and in high and low flux states to understand whether the flux transitions are accompanied by spectral changes. Results: The flux variability is significant in all the sources investigated. The HRs in general are not as variable as the fluxes, in line with previous results on deep fields. Only one source displays a variable HR, anti-correlated with the flux (source 337). The spectral analysis in the available epochs confirms the steeper when brighter trend consistent with Comptonisation models only in this source at 99% confidence level. Finding this trend in one out of seven unabsorbed sources is consistent, within the statistical limits, with the 15% of unabsorbed AGN in previous deep surveys. No significant variability in the column densities, nor in the Compton reflection component, has been detected across the epochs considered. The high and low states display in general different normalisations but consistent spectral properties. Conclusions: X-ray flux fluctuations are ubiquitous in AGN, though in some cases the data quality does not allow for their detection. In general, the significant flux variations are not associated with spectral variability: photon index and column densities are not significantly variable in nine out of the ten AGN over long timescales (from three to six and a half years).
Photon index variability is found only in one source (which is steeper when brighter) out of seven unabsorbed AGN. The percentage of spectrally variable objects is consistent, within the limited statistics of sources studied here, with previous deep samples.
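
    The hardness-ratio technique referenced in this record reduces to a simple count statistic comparing hard- and soft-band photons. A minimal sketch (the band counts below are illustrative, not values from the survey):

    ```python
    def hardness_ratio(hard_counts, soft_counts):
        """Classical X-ray hardness ratio HR = (H - S) / (H + S)."""
        h, s = float(hard_counts), float(soft_counts)
        if h + s == 0:
            raise ValueError("no counts in either band")
        return (h - s) / (h + s)

    # Illustrative values only: 120 counts in the hard band, 300 in the soft band.
    hr = hardness_ratio(120, 300)
    print(round(hr, 3))  # -0.429
    ```

    A softening spectrum (rising soft-band flux) drives HR toward -1, which is why HR time series can flag spectral variability even when full spectral fits are not feasible.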

  12. Probabilistic Analyses of Waste Package Quantities Impacted by Potential Igneous Disruption at Yucca Mountain

    NASA Astrophysics Data System (ADS)

    Wallace, M. G.; Iuzzolina, H.

    2005-12-01

    A probabilistic analysis was conducted to estimate ranges for the numbers of waste packages that could be damaged in a potential future igneous event through a repository at Yucca Mountain. The analysis includes disruption from an intrusive igneous event and from an extrusive volcanic event. This analysis supports the evaluation of the potential consequences of future igneous activity as part of the total system performance assessment for the license application for the Yucca Mountain Project (YMP). The first scenario, igneous intrusion, investigated the case where one or more igneous dikes intersect the repository. A swarm of dikes was characterized by distributions of length, width, azimuth, and number of dikes and the spacings between them. Through the use in part of a Latin hypercube simulator and a modified video game engine, mathematical relationships were built between those parameters and the number of waste packages hit. Corresponding cumulative distribution function curves (CDFs) for the number of waste packages hit under several different scenarios were calculated. Variations in dike thickness ranges, as well as in repository magma bulkhead positions, were examined through sensitivity studies. It was assumed that all waste packages in an emplacement drift would be impacted if that drift was intersected by a dike. Over 10,000 individual simulations were performed. Based on these calculations, out of a total of over 11,000 planned waste packages distributed over an area of approximately 5.5 km^2, the median number of waste packages impacted was roughly 1/10 of the total. Individual cases ranged from 0 waste packages to the entire inventory being impacted. The igneous intrusion analysis involved an explicit characterization of dike-drift intersections, built upon various distributions that reflect the uncertainties associated with the inputs.
The second igneous scenario, volcanic eruption (eruptive conduits), considered the effects of conduits formed in association with a volcanic eruption through the repository. Mathematical relations were built between the resulting conduit areas and the fraction of the repository area occupied by waste packages. This relation was used in conjunction with a joint distribution incorporating variability in eruptive conduit diameters and in the number of eruptive conduits that could intersect the repository.
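
    The Monte Carlo construction of a CDF for packages hit can be caricatured in a few lines. Everything below (drift spacing, package loading, dike-length distribution, perpendicular dike geometry) is an illustrative assumption, not the licensing-analysis input; the point is how an empirical CDF emerges from repeated sampling:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy Monte Carlo sketch: sample a dike length, count drifts intersected
    # on a regular grid of parallel drifts, and impact every package in an
    # intersected drift (mirroring the all-packages-per-drift assumption).
    n_sims = 10_000
    drift_spacing_m = 81.0          # assumed drift spacing (illustrative)
    packages_per_drift = 100        # assumed loading (illustrative)
    dike_lengths_m = rng.uniform(500.0, 4000.0, n_sims)

    # A dike running perpendicular to the drifts crosses ~ length / spacing drifts.
    drifts_hit = np.floor(dike_lengths_m / drift_spacing_m).astype(int)
    packages_hit = drifts_hit * packages_per_drift

    # Empirical CDF of the number of packages hit across all realizations.
    values = np.sort(packages_hit)
    cdf = np.arange(1, n_sims + 1) / n_sims
    median_hit = int(np.median(packages_hit))
    print(median_hit)
    ```

    Sensitivity studies of the kind described above amount to re-running this loop with different input distributions and comparing the resulting CDF curves.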

  13. Journal Article: Atmospheric Measurements of CDDs, CDFs, and Coplanar PCBs in Rural and Remote Locations of the U.S. for the Years 1998-2001 from the National Dioxin Air Monitoring Network (Ndamn)

    EPA Science Inventory

    The U.S. EPA established a National Dioxin Air Monitoring Network (NDAMN) to determine background air concentrations of PCDDs, PCDFs, and cp-PCBs in rural and remote areas of the United States. Background is defined as average ambient air concentrations inferred from long-term a...

  14. Searching for faint AGN in the CDFS: an X-ray (Chandra) vs optical variability (HST) comparison.

    NASA Astrophysics Data System (ADS)

    Georgantopoulos, I.; Pouliasis, E.; Bonanos, A.; Sokolovsky, K.; Yang, M.; Hatzidimitriou, D.; Bellas, I.; Gavras, P.; Spetsieri, Z.

    2017-10-01

    X-ray surveys are believed to be the most efficient way to detect AGN. Recently, though, optical variability studies have been claimed to probe even fainter AGN. We present results from an HST study aimed at identifying Active Galactic Nuclei (AGN) through optical variability selection in the CDFS. This work is part of the 'Hubble Catalogue of Variables' project of ESA, which aims to identify variable sources in the Hubble Source Catalogue. In particular, we used Hubble Space Telescope (HST) z-band images taken over 5 epochs and performed aperture photometry to derive the lightcurves of the sources. Two statistical methods (standard deviation and interquartile range) were applied, resulting in a final sample of 175 variable AGN candidates after removing artifacts, known stars and supernovae by visual inspection. The fact that the majority of the sources are extended and variable indicates AGN activity. We assess the efficiency of the method by comparing with the 7 Ms Chandra detections. Our work shows that optical variability probes AGN at comparable redshifts but at deeper optical magnitudes. Our candidate AGN (not detected in X-rays) have luminosities of L_x < 6×10^{40} erg/sec at z˜0.7, suggesting that these are associated with low luminosity Seyferts and LINERs.

  15. A TIME-TRENDS STUDY OF THE OCCURRENCES AND ...

    EPA Pesticide Factsheets

    Polychlorinated dibenzo-p-dioxins (CDDs), polychlorinated dibenzofurans (CDFs) and certain non- and mono-ortho substituted polychlorinated biphenyls (cp-PCBs) are a general class of chlorinated aromatic compounds that are considered as dioxin-like. Because these chemicals are highly toxic, are resistant to physical, chemical and biological degradation and transformation processes, are highly lipophilic and bioaccumulate into ecological and agricultural food chains, attention has been directed to the identification of anthropogenic source activities with the objective of reducing the overall environmental burden. In this regard, certain fundamental questions arise as to environmental trends over time in terms of environmental concentrations and fluxes to environmental sinks. When did these chemicals initially appear in the general environment and are they related to anthropogenic activities? What has been the chronology of environmental burden from the recent time to decades in the past in terms of environmental concentrations and fluxes to the sink? Is there evidence of any trends in environmental burden with time? To address these fundamental questions, the United States Environmental Protection Agency (USEPA) in collaboration with the United States Department of Energy (USDOE) has completed a time-trends study of the occurrences and levels of CDDs, CDFs and cp-PCBs in the U.S. environment using dateable sediment deposits obtained from 11 freshwater lake

  16. The FourStar Galaxy Evolution Survey (ZFOURGE): Ultraviolet to Far-infrared Catalogs, Medium-bandwidth Photometric Redshifts with Improved Accuracy, Stellar Masses, and Confirmation of Quiescent Galaxies to z ˜ 3.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Straatman, Caroline M. S.; Labbé, Ivo; Van Houdt, Josha

    The FourStar galaxy evolution survey (ZFOURGE) is a 45 night legacy program with the FourStar near-infrared camera on Magellan and one of the most sensitive surveys to date. ZFOURGE covers a total of 400 arcmin^2 in cosmic fields CDFS, COSMOS and UDS, overlapping CANDELS. We present photometric catalogs comprising >70,000 galaxies, selected from ultradeep Ks-band detection images (25.5-26.5 AB mag, 5σ, total), and >80% complete to Ks < 25.3-25.9 AB. We use 5 near-IR medium-bandwidth filters (J1, J2, J3, Hs, Hl) as well as broad-band Ks at 1.05-2.16 μm to 25-26 AB at a seeing of ∼0.″5. Each field has ancillary imaging in 26-40 filters at 0.3-8 μm. We derive photometric redshifts and stellar population properties. Comparing with spectroscopic redshifts indicates a photometric redshift uncertainty σ_z = 0.010, 0.009, and 0.011 in CDFS, COSMOS, and UDS. As spectroscopic samples are often biased toward bright and blue sources, we also inspect the photometric redshift differences between close pairs of galaxies, finding σ_z,pairs = 0.01-0.02 at 1 < z < 2.5. We quantify how σ_z,pairs depends on redshift, magnitude, spectral energy distribution type, and the inclusion of FourStar medium bands. σ_z,pairs is smallest for bright, blue star-forming samples, while red star-forming galaxies have the worst σ_z,pairs. Including FourStar medium bands reduces σ_z,pairs by 50% at 1.5 < z < 2.5. We calculate star formation rates (SFRs) based on ultraviolet and ultradeep far-IR Spitzer/MIPS and Herschel/PACS data. We derive rest-frame U − V and V − J colors, and illustrate how these correlate with specific SFR and dust emission to z = 3.5. We confirm the existence of quiescent galaxies at z ∼ 3, demonstrating their SFRs are suppressed by > ×15.

  17. Smap Soil Moisture Data Assimilation for the Continental United States and Eastern Africa

    NASA Astrophysics Data System (ADS)

    Blankenship, C. B.; Case, J.; Zavodsky, B.; Crosson, W. L.

    2016-12-01

    The NASA Short-Term Prediction Research and Transition (SPoRT) Center at Marshall Space Flight Center manages near-real-time runs of the Noah Land Surface Model within the NASA Land Information System (LIS) over Continental U.S. (CONUS) and Eastern Africa domains. Soil moisture products from the CONUS model run are used by several NOAA/National Weather Service Weather Forecast Offices for flood and drought situational awareness. The baseline LIS configuration is the Noah model driven by atmospheric and combined radar/gauge precipitation analyses, and input satellite-derived real-time green vegetation fraction on a 3-km grid for the CONUS. This configuration is being enhanced by adding the assimilation of Level 2 Soil Moisture Active/Passive (SMAP) soil moisture retrievals in a parallel run beginning on 1 April 2015. Our implementation of SMAP assimilation includes a cumulative distribution function (CDF) matching approach that aggregates points with similar soil types. This method allows creation of robust CDFs with a short data record, and also permits the correction of local anomalies that may arise from poor forcing data (e.g., quality-control problems with rain gauges). Validation results using in situ soil monitoring networks in the CONUS are shown, with comparisons to the baseline SPoRT-LIS run. Initial results are also presented from a modeling run in eastern Africa, forced by Integrated Multi-satellitE Retrievals for GPM (IMERG) precipitation data. Strategies for spatial downscaling and for dealing with effective depth of the retrieval product are also discussed.

  18. Long-Term Effects of Dredging Operations Program. Collation and Interpretation of Data for Times Beach Confined Disposal Facility, Buffalo, New York

    DTIC Science & Technology

    1991-06-01

    commercial products. The D-series of reports includes publications of the Environmental Effects of Dredging Programs: Dredging Operations Technical Support...insufficient data are available, areas for future productive research are recommended. The major amount of information available is for the upland area, where...Consequently, the upland, wetland, and aquatic areas that appear either as an end product or transiently at all CDFs are permanently established

  19. Manufactured Soil Field Demonstration for Constructing Wetlands to Treat Acid Mine Drainage on Abandoned Minelands

    DTIC Science & Technology

    2007-11-01

    reclamation and reuse of dredged material from existing CDFs. CRDAs have been established with Recycled Soil Manufacturing Technology (RSMT), N-Viro...existing coal mine bony residual waste. Sawdust was loaded into a manure spreader with a front-end loader and spread across the bony residual in...manure spreader and then dredged material was placed on top of the paper fiber. Both materials were then spread

  20. Assimilation of SMOS Retrievals in the Land Information System

    NASA Technical Reports Server (NTRS)

    Blankenship, Clay B.; Case, Jonathan L.; Zavodsky, Bradley T.; Crosson, William L.

    2016-01-01

    The Soil Moisture and Ocean Salinity (SMOS) satellite provides retrievals of soil moisture in the upper 5 cm with a 30-50 km resolution and a mission accuracy requirement of 0.04 cm(sub 3 cm(sub -3). These observations can be used to improve land surface model soil moisture states through data assimilation. In this paper, SMOS soil moisture retrievals are assimilated into the Noah land surface model via an Ensemble Kalman Filter within the NASA Land Information System. Bias correction is implemented using Cumulative Distribution Function (CDF) matching, with points aggregated by either land cover or soil type to reduce sampling error in generating the CDFs. An experiment was run for the warm season of 2011 to test SMOS data assimilation and to compare assimilation methods. Verification of soil moisture analyses in the 0-10 cm upper layer and root zone (0-1 m) was conducted using in situ measurements from several observing networks in the central and southeastern United States. This experiment showed that SMOS data assimilation significantly increased the anomaly correlation of Noah soil moisture with station measurements from 0.45 to 0.57 in the 0-10 cm layer. Time series at specific stations demonstrate the ability of SMOS DA to increase the dynamic range of soil moisture in a manner consistent with station measurements. Among the bias correction methods, the correction based on soil type performed best at bias reduction but also reduced correlations. The vegetation-based correction did not produce any significant differences compared to using a simple uniform correction curve.
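
    The CDF-matching bias correction described in this record maps each retrieval through the observation climatology's CDF and then through the inverse of the model climatology's CDF, x_corrected = F_model^{-1}(F_obs(x)). A minimal empirical-quantile sketch (the soil-moisture samples below are synthetic, not SMOS/Noah data):

    ```python
    import numpy as np

    def cdf_match(obs_sample, model_clim, obs_clim):
        """Map observations into the model's climatology:
        x_corrected = F_model^{-1}(F_obs(x)), using empirical CDFs."""
        obs_sorted = np.sort(obs_clim)
        model_sorted = np.sort(model_clim)
        # F_obs(x): fraction of the observation climatology at or below x
        p = np.searchsorted(obs_sorted, obs_sample, side="right") / len(obs_sorted)
        # F_model^{-1}(p): linear interpolation on the sorted model sample
        return np.interp(p, np.linspace(0, 1, len(model_sorted)), model_sorted)

    rng = np.random.default_rng(42)
    model_clim = rng.normal(0.25, 0.05, 5000)   # synthetic model soil moisture
    obs_clim = rng.normal(0.30, 0.08, 5000)     # synthetic biased retrievals
    corrected = cdf_match(obs_clim, model_clim, obs_clim)
    # After matching, the corrected retrievals take on the model's mean and
    # spread (here ≈ 0.25 and ≈ 0.05), removing the systematic bias.
    print(round(corrected.mean(), 2), round(corrected.std(), 2))
    ```

    Aggregating the climatology samples by soil type or land cover, as both records describe, simply means building separate (obs_clim, model_clim) pairs per class before calling the same matching function.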

  1. Assimilation of SMOS Retrievals in the Land Information System

    PubMed Central

    Blankenship, Clay B.; Case, Jonathan L.; Zavodsky, Bradley T.; Crosson, William L.

    2018-01-01

    The Soil Moisture and Ocean Salinity (SMOS) satellite provides retrievals of soil moisture in the upper 5 cm with a 30-50 km resolution and a mission accuracy requirement of 0.04 cm3 cm−3. These observations can be used to improve land surface model soil moisture states through data assimilation. In this paper, SMOS soil moisture retrievals are assimilated into the Noah land surface model via an Ensemble Kalman Filter within the NASA Land Information System. Bias correction is implemented using Cumulative Distribution Function (CDF) matching, with points aggregated by either land cover or soil type to reduce sampling error in generating the CDFs. An experiment was run for the warm season of 2011 to test SMOS data assimilation and to compare assimilation methods. Verification of soil moisture analyses in the 0-10 cm upper layer and root zone (0-1 m) was conducted using in situ measurements from several observing networks in the central and southeastern United States. This experiment showed that SMOS data assimilation significantly increased the anomaly correlation of Noah soil moisture with station measurements from 0.45 to 0.57 in the 0-10 cm layer. Time series at specific stations demonstrate the ability of SMOS DA to increase the dynamic range of soil moisture in a manner consistent with station measurements. Among the bias correction methods, the correction based on soil type performed best at bias reduction but also reduced correlations. The vegetation-based correction did not produce any significant differences compared to using a simple uniform correction curve. PMID:29367795

  2. The FourStar Galaxy Evolution Survey (ZFOURGE): Ultraviolet to Far-infrared Catalogs, Medium-bandwidth Photometric Redshifts with Improved Accuracy, Stellar Masses, and Confirmation of Quiescent Galaxies to z ˜ 3.5

    NASA Astrophysics Data System (ADS)

    Straatman, Caroline M. S.; Spitler, Lee R.; Quadri, Ryan F.; Labbé, Ivo; Glazebrook, Karl; Persson, S. Eric; Papovich, Casey; Tran, Kim-Vy H.; Brammer, Gabriel B.; Cowley, Michael; Tomczak, Adam; Nanayakkara, Themiya; Alcorn, Leo; Allen, Rebecca; Broussard, Adam; van Dokkum, Pieter; Forrest, Ben; van Houdt, Josha; Kacprzak, Glenn G.; Kawinwanichakij, Lalitwadee; Kelson, Daniel D.; Lee, Janice; McCarthy, Patrick J.; Mehrtens, Nicola; Monson, Andrew; Murphy, David; Rees, Glen; Tilvi, Vithal; Whitaker, Katherine E.

    2016-10-01

    The FourStar galaxy evolution survey (ZFOURGE) is a 45 night legacy program with the FourStar near-infrared camera on Magellan and one of the most sensitive surveys to date. ZFOURGE covers a total of 400 arcmin2 in cosmic fields CDFS, COSMOS and UDS, overlapping CANDELS. We present photometric catalogs comprising >70,000 galaxies, selected from ultradeep K s -band detection images (25.5-26.5 AB mag, 5σ, total), and >80% complete to K s < 25.3-25.9 AB. We use 5 near-IR medium-bandwidth filters (J 1, J 2, J 3, H s , H l ) as well as broad-band K s at 1.05-2.16 μm to 25-26 AB at a seeing of ˜0.″5. Each field has ancillary imaging in 26-40 filters at 0.3-8 μm. We derive photometric redshifts and stellar population properties. Comparing with spectroscopic redshifts indicates a photometric redshift uncertainty σ z = 0.010, 0.009, and 0.011 in CDFS, COSMOS, and UDS. As spectroscopic samples are often biased toward bright and blue sources, we also inspect the photometric redshift differences between close pairs of galaxies, finding σ z,pairs = 0.01-0.02 at 1 < z < 2.5. We quantify how σ z,pairs depends on redshift, magnitude, spectral energy distribution type, and the inclusion of FourStar medium bands. σ z,pairs is smallest for bright, blue star-forming samples, while red star-forming galaxies have the worst σ z,pairs. Including FourStar medium bands reduces σ z,pairs by 50% at 1.5 < z < 2.5. We calculate star formation rates (SFRs) based on ultraviolet and ultradeep far-IR Spitzer/MIPS and Herschel/PACS data. We derive rest-frame U - V and V - J colors, and illustrate how these correlate with specific SFR and dust emission to z = 3.5. We confirm the existence of quiescent galaxies at z ˜ 3, demonstrating their SFRs are suppressed by > ×15. This paper contains data gathered with the 6.5 meter Magellan Telescopes located at Las Campanas observatory, Chile
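
    The close-pairs estimate of photometric-redshift scatter mentioned above is commonly computed as the normalized median absolute deviation of the pairwise redshift differences, scaled by 1/sqrt(2) because both pair members carry independent errors. A sketch with synthetic redshifts (the input scatter of 0.015 is illustrative, not a ZFOURGE value):

    ```python
    import numpy as np

    def sigma_pairs(z1, z2):
        """Pairwise photo-z scatter: NMAD of dz/(1+z) between close galaxy
        pairs, divided by sqrt(2) since both redshifts are noisy."""
        z1, z2 = np.asarray(z1, float), np.asarray(z2, float)
        dz = (z1 - z2) / (1.0 + 0.5 * (z1 + z2))
        nmad = 1.4826 * np.median(np.abs(dz - np.median(dz)))
        return nmad / np.sqrt(2.0)

    # Synthetic check: true z ~ U(1, 2.5), both pair members scattered
    # with sigma_z = 0.015 * (1 + z).
    rng = np.random.default_rng(1)
    z_true = rng.uniform(1.0, 2.5, 20_000)
    noise = 0.015 * (1 + z_true)
    z_a = z_true + rng.normal(0, 1, z_true.size) * noise
    z_b = z_true + rng.normal(0, 1, z_true.size) * noise
    print(round(sigma_pairs(z_a, z_b), 3))  # ≈ 0.015, recovering the input
    ```

    The appeal of the pair statistic, as the abstract notes, is that it needs no spectroscopic redshifts and so avoids the bright-blue selection bias of spectroscopic comparison samples.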

  3. VizieR Online Data Catalog: 16yrs of AGNs X-ray spectral analyses from 7Ms CDF-S (Liu+, 2017)

    NASA Astrophysics Data System (ADS)

    Liu, T.; Tozzi, P.; Wang, J.-X.; Brandt, W. N.; Vignali, C.; Xue, Y.; Schneider, D. P.; Comastri, A.; Yang, G.; Bauer, F. E.; Paolillo, M.; Luo, B.; Gilli, R.; Wang, Q. D.; Giavalisco, M.; Ji, Z.; Alexander, D. M.; Mainieri, V.; Shemmer, O.; Koekemoer, A.; Risaliti, G.

    2017-09-01

    The 7Ms CDF-S survey is comprised of observations performed between 1999 October 14 and 2016 March 24 (UTC). Excluding one observation compromised by telemetry saturation and other issues (ObsID 581), there are 102 observations (observation IDs listed in Table 1) in the data set. The exposures collected across 16 years can be grouped into four distinct periods, each spanning 2-21 months.

    Table 1:
    Period  Observation Date  Time Span  Exposure Time
    I       1999.10-2000.12   14 months  1 Ms
      11 ObsIDs: 1431-0 1431-1 441 582 2406 2405 2312 1672 2409 2313 2239
    II      2007.09-2007.11   2 months   1 Ms
      12 ObsIDs: 8591 9593 9718 8593 8597 8595 8592 8596 9575 9578 8594 9596
    III     2010.03-2010.07   4 months   2 Ms
      31 ObsIDs: 12043 12123 12044 12128 12045 12129 12135 12046 12047 12137 12138 12055 12213 12048 12049 12050 12222 12219 12051 12218 12223 12052 12220 12053 12054 12230 12231 12227 12233 12232 12234
    IV      2014.06-2016.03   21 months  3 Ms
      48 ObsIDs: 16183 16180 16456 16641 16457 16644 16463 17417 17416 16454 16176 16175 16178 16177 16620 16462 17535 17542 16184 16182 16181 17546 16186 16187 16188 16450 16190 16189 17556 16179 17573 17633 17634 16453 16451 16461 16191 16460 16459 17552 16455 16458 17677 18709 18719 16452 18730 16185

    (4 data files).

  4. Simulating Pre-Asymptotic, Non-Fickian Transport Although Doing Simple Random Walks - Supported By Empirical Pore-Scale Velocity Distributions and Memory Effects

    NASA Astrophysics Data System (ADS)

    Most, S.; Jia, N.; Bijeljic, B.; Nowak, W.

    2016-12-01

    Pre-asymptotic characteristics are almost ubiquitous when analyzing solute transport processes in porous media. These pre-asymptotic aspects are caused by spatial coherence in the velocity field and by its heterogeneity. For the Lagrangian perspective of particle displacements, the causes of pre-asymptotic, non-Fickian transport are skewed velocity distribution, statistical dependencies between subsequent increments of particle positions (memory) and dependence between the x, y and z-components of particle increments. Valid simulation frameworks should account for these factors. We propose a particle tracking random walk (PTRW) simulation technique that can use empirical pore-space velocity distributions as input, enforces memory between subsequent random walk steps, and considers cross dependence. Thus, it is able to simulate pre-asymptotic non-Fickian transport phenomena. Our PTRW framework contains an advection/dispersion term plus a diffusion term. The advection/dispersion term produces time-series of particle increments from the velocity CDFs. These time series are equipped with memory by enforcing that the CDF values of subsequent velocities change only slightly. The latter is achieved through a random walk on the axis of CDF values between 0 and 1. The virtual diffusion coefficient for that random walk is our only fitting parameter. Cross-dependence can be enforced by constraining the random walk to certain combinations of CDF values between the three velocity components in x, y and z. We will show that this modelling framework is capable of simulating non-Fickian transport by comparison with a pore-scale transport simulation and we analyze the approach to asymptotic behavior.
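
    The random walk on the CDF-value axis described above can be sketched in a few lines. The empirical velocity distribution (a lognormal sample) and the virtual diffusion coefficient are illustrative assumptions, not the paper's pore-scale inputs; the sketch shows how constraining subsequent CDF values to change only slightly gives velocities temporal memory:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in for an empirical pore-velocity sample (lognormal, illustrative).
    velocity_sample = np.sort(rng.lognormal(mean=0.0, sigma=1.0, size=10_000))

    def inverse_cdf(u):
        """Map CDF values u in [0, 1] to velocities via the empirical sample."""
        return np.interp(u, np.linspace(0, 1, velocity_sample.size), velocity_sample)

    def ptrw(n_particles=2000, n_steps=200, dt=1.0, d_virtual=0.01):
        """Particle-tracking random walk with memory: each particle carries a
        CDF value u that evolves by a small reflected random walk on [0, 1],
        so subsequent velocities stay correlated (the scheme's 'memory')."""
        u = rng.uniform(0, 1, n_particles)       # initial CDF values
        x = np.zeros(n_particles)                # particle positions
        for _ in range(n_steps):
            x += inverse_cdf(u) * dt             # advective displacement
            # random walk on the CDF axis (d_virtual is the fitting parameter),
            # reflected back into [0, 1]
            u += rng.normal(0.0, np.sqrt(2 * d_virtual * dt), n_particles)
            u = np.abs(u)
            u = np.where(u > 1, 2 - u, u)
        return x

    positions = ptrw()
    # A small d_virtual keeps velocities correlated in time, so fast particles
    # stay fast: the displacement distribution is right-skewed (mean > median),
    # the non-Fickian signature discussed in the abstract.
    print(f"mean={positions.mean():.1f}, median={np.median(positions):.1f}")
    ```

    Cross-dependence between the x, y and z components, mentioned in the abstract, would be enforced by coupling three such u-walks rather than running them independently.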

  5. Environmental Effects of Dredging Program: Inland Waterways: Proceedings of a National Workshop on the Beneficial Uses of Dredged Material Held in St. Paul, Minnesota on 27-30 October 1987

    DTIC Science & Technology

    1988-11-01

    addition, I leave with the knowledge that the cost-sharing concept is a viable option which will result in many navigation improvements finally getting...two recorded incidences to date in CE disposal CDFs in the Great Lakes. To my knowledge, no outbreaks have occurred elsewhere. Changing habitat types...water-skiers, and other recreation seekers. That right-of-way in a naturally occurring, multiple-use resource, is in contrast to a rail or highway

  6. The antiphasic regulatory module comprising CDF5 and its antisense RNA FLORE links the circadian clock to photoperiodic flowering.

    PubMed

    Henriques, Rossana; Wang, Huan; Liu, Jun; Boix, Marc; Huang, Li-Fang; Chua, Nam-Hai

    2017-11-01

    Circadian rhythms of gene expression are generated by the combinatorial action of transcriptional and translational feedback loops as well as chromatin remodelling events. Recently, long noncoding RNAs (lncRNAs) that are natural antisense transcripts (NATs) to transcripts encoding central oscillator components were proposed as modulators of core clock function in mammals (Per) and fungi (frq/qrf). Although oscillating lncRNAs exist in plants, their functional characterization is at an initial stage. By screening an Arabidopsis thaliana lncRNA custom-made array we identified CDF5 LONG NONCODING RNA (FLORE), a circadian-regulated lncRNA that is a NAT of CDF5. Quantitative real-time RT-PCR confirmed the circadian regulation of FLORE, whereas GUS-staining and flowering time evaluation were used to determine its biological function. FLORE and CDF5 antiphasic expression reflects mutual inhibition in a similar way to frq/qrf. Moreover, whereas the CDF5 protein delays flowering by directly repressing FT transcription, FLORE promotes it by repressing several CDFs (CDF1, CDF3, CDF5) and increasing FT transcript levels, indicating both cis and trans function. We propose that the CDF5/FLORE NAT pair constitutes an additional circadian regulatory module with conserved (mutual inhibition) and unique (function in trans) features, able to fine-tune its own circadian oscillation, and consequently, adjust the onset of flowering to favourable environmental conditions. © 2017 The Authors. New Phytologist © 2017 New Phytologist Trust.

  7. Copula-based assessment of the relationship between flood peaks and flood volumes using information on historical floods by Bayesian Monte Carlo Markov Chain simulations

    NASA Astrophysics Data System (ADS)

    Gaál, Ladislav; Szolgay, Ján; Bacigáľ, Tomáš; Kohnová, Silvia

    2010-05-01

    Copula-based estimation methods for hydro-climatological extremes have increasingly gained the attention of researchers and practitioners in the last couple of years. Unlike traditional estimation methods, which are based on bivariate cumulative distribution functions (CDFs), copulas are a relatively flexible statistical tool that allows dependencies between two or more variables, such as flood peaks and flood volumes, to be modelled without strict assumptions on the marginal distributions. The dependence structure and the reliability of the joint estimates of hydro-climatological extremes, mainly in the right tail of the joint CDF, depend not only on the particular copula adopted but also on the data available for estimating the marginal distributions of the individual variables. Generally, data samples for frequency modelling have limited temporal extent, which is a considerable drawback of frequency analyses in practice. It is therefore advisable to use statistical methods that improve any part of the copula construction process and result in more reliable design values of hydrological variables. The scarcity of data, mostly in the extreme tail of the joint CDF, can be mitigated, e.g., by using a considerably larger amount of data simulated by rainfall-runoff analysis or by including historical information on the variables under study. The latter approach of data extension is used here to make the quantile estimates of the individual marginals of the copula more reliable. In this paper it is proposed to use historical information in the frequency analysis of the marginal distributions in the framework of Bayesian Monte Carlo Markov Chain (MCMC) simulations. Generally, a Bayesian approach allows for a straightforward combination of different sources of information on floods (e.g. flood data from systematic measurements and historical flood records) in terms of a product of the corresponding likelihood functions. The MCMC algorithm, in turn, is a numerical approach for sampling from the likelihood distributions. Bayesian MCMC methods therefore provide an attractive way to estimate the uncertainty in the parameters and quantiles of frequency distributions. The applicability of the method is demonstrated in a case study of the hydroelectric power station Orlík on the Vltava River. This site has a key role in the flood prevention of Prague, the capital city of the Czech Republic. The record length of the available flood data is 126 years, from the period 1877-2002; the flood event of 2002, which caused extensive damage and numerous casualties, is treated as a historic one. To estimate the joint probabilities of flood peaks and volumes, different copulas are fitted and their goodness of fit is evaluated by bootstrap simulations. Finally, selected quantiles of flood volumes conditioned on given flood peaks are derived and compared with those obtained by the traditional method used in the practice of water management specialists on the Vltava River.
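The joint-CDF machinery described above can be sketched with a single copula function. A minimal illustration, assuming a Gumbel-Hougaard copula (the abstract does not name the families fitted); `theta` is a hypothetical dependence parameter controlling upper-tail dependence between peak and volume:

```python
import math

def gumbel_copula(u, v, theta):
    """Gumbel-Hougaard copula C(u, v; theta) for theta >= 1.

    Couples two marginal CDF values, u = F_peak(q) and v = F_volume(s),
    into a joint non-exceedance probability P(Q <= q, S <= s).
    theta = 1 reduces to independence (C = u * v); larger theta
    strengthens the upper-tail dependence typical of peak/volume pairs.
    """
    if u == 0.0 or v == 0.0:
        return 0.0
    if u == 1.0:
        return v
    if v == 1.0:
        return u
    a = (-math.log(u)) ** theta
    b = (-math.log(v)) ** theta
    return math.exp(-((a + b) ** (1.0 / theta)))

# Probability that a flood stays below both the 100-year peak and the
# 100-year volume; dependence (theta = 2) makes this larger than the
# independent product 0.99 * 0.99.
p_joint = gumbel_copula(0.99, 0.99, 2.0)
```

The marginal CDFs F_peak and F_volume are assumed already fitted; that fitting step is exactly where the Bayesian MCMC combination of systematic and historical flood data enters in the study.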

  8. Impact of Soil Moisture Assimilation on Land Surface Model Spin-Up and Coupled Land-Atmosphere Prediction

    NASA Technical Reports Server (NTRS)

    Santanello, Joseph A., Jr.; Kumar, Sujay V.; Peters-Lidard, Christa D.; Lawston, P.

    2016-01-01

    Advances in satellite monitoring of the terrestrial water cycle have led to a concerted effort to assimilate soil moisture observations from various platforms into offline land surface models (LSMs). One principal but still open question is that of the ability of land data assimilation (LDA) to improve LSM initial conditions for coupled short-term weather prediction. In this study, the impact of assimilating Advanced Microwave Scanning Radiometer for EOS (AMSR-E) soil moisture retrievals on coupled WRF Model forecasts is examined during the summers of dry (2006) and wet (2007) surface conditions in the southern Great Plains. LDA is carried out using NASA's Land Information System (LIS) and the Noah LSM through an ensemble Kalman filter (EnKF) approach. The impacts of LDA on the 1) soil moisture and soil temperature initial conditions for WRF, 2) land-atmosphere coupling characteristics, and 3) ambient weather of the coupled LIS-WRF simulations are then assessed. Results show that impacts of soil moisture LDA during the spin-up can significantly modify LSM states and fluxes, depending on regime and season. Results also indicate that the use of seasonal cumulative distribution functions (CDFs) is more advantageous compared to the traditional annual CDF bias correction strategies. LDA performs consistently regardless of the atmospheric forcing applied, with greater improvements seen when using coarser, global forcing products. Downstream impacts on coupled simulations vary according to the strength of the LDA impact at initialization, where significant modifications to the soil moisture-flux-PBL-ambient weather process chain are observed. Overall, this study demonstrates potential for future, higher-resolution soil moisture assimilation applications in weather and climate research.
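The CDF bias-correction step at the heart of this kind of assimilation can be sketched in a few lines. A minimal quantile-matching routine (the operational LIS implementation differs in detail); passing one season's samples at a time gives the seasonal CDFs the study found advantageous over annual ones:

```python
import numpy as np

def cdf_match(obs_value, obs_clim, model_clim):
    """Map a satellite soil-moisture retrieval into model climatology.

    obs_clim and model_clim are 1-D climatology samples (e.g. all
    retrievals and all model soil-moisture states from one season).
    The observation's empirical non-exceedance probability is computed
    within its own climatology and then inverted on the model's
    climatology, so the rescaled value carries the observation's rank
    information rather than its systematic bias.
    """
    obs_clim = np.sort(np.asarray(obs_clim, dtype=float))
    model_clim = np.asarray(model_clim, dtype=float)
    # Empirical CDF value of the observation in its own climatology.
    p = np.searchsorted(obs_clim, obs_value, side="right") / obs_clim.size
    p = min(max(p, 1.0 / obs_clim.size), 1.0)  # keep inside (0, 1]
    # Invert the model's empirical CDF at the same probability.
    return float(np.quantile(model_clim, p))
```

The rescaled value, not the raw retrieval, is what the EnKF innovation would then be computed against.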

  9. THE APPLICATION OF A STATISTICAL DOWNSCALING PROCESS TO DERIVE 21ST CENTURY RIVER FLOW PREDICTIONS USING A GLOBAL CLIMATE SIMULATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Werth, D.; Chen, K. F.

    2013-08-22

    The ability of water managers to maintain adequate supplies in coming decades depends, in part, on future weather conditions, as climate change has the potential to alter river flows from their current values, possibly rendering them unable to meet demand. Reliable climate projections are therefore critical to predicting the future water supply for the United States. These projections cannot be provided solely by global climate models (GCMs), however, as their resolution is too coarse to resolve the small-scale climate changes that can affect hydrology, and hence water supply, at regional to local scales. A process is needed to ‘downscale’ the GCM results to the smaller scales and feed this into a surface hydrology model to help determine the ability of rivers to provide adequate flow to meet future needs. We apply a statistical downscaling to GCM projections of precipitation and temperature through the use of a scaling method. This technique involves the correction of the cumulative distribution functions (CDFs) of the GCM-derived temperature and precipitation results for the 20th century, and the application of the same correction to 21st century GCM projections. This is done for three meteorological stations located within the Coosa River basin in northern Georgia, and is used to calculate future river flow statistics for the upper Coosa River. Results are compared to the historical Coosa River flow upstream from Georgia Power Company’s Hammond coal-fired power plant and to flows calculated with the original, unscaled GCM results to determine the impact of potential changes in meteorology on future flows.
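The CDF-correction scheme described above can be made concrete with a minimal additive quantile-mapping sketch (illustrative only, not the report's exact scaling method; precipitation would typically be corrected multiplicatively rather than additively):

```python
import numpy as np

def downscale(gcm_hist, station_hist, gcm_future, n_q=99):
    """Quantile-based bias correction of GCM projections.

    A correction is derived by comparing the empirical CDFs of the
    historical GCM run and the co-located station record, and the same
    quantile-by-quantile correction is then applied to the future GCM
    run, preserving the GCM's projected change signal.
    """
    qs = np.linspace(0.01, 0.99, n_q)
    correction = np.quantile(station_hist, qs) - np.quantile(gcm_hist, qs)
    gcm_future = np.asarray(gcm_future, dtype=float)
    # Locate each future value on the historical GCM CDF, then shift it
    # by the correction interpolated at that probability.
    p = np.interp(gcm_future, np.quantile(gcm_hist, qs), qs)
    return gcm_future + np.interp(p, qs, correction)
```

The corrected series, rather than the raw GCM output, would then drive the surface hydrology model at each station.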

  10. Optical Spectroscopy of Distant Red Galaxies

    NASA Astrophysics Data System (ADS)

    Wuyts, Stijn; van Dokkum, Pieter G.; Franx, Marijn; Förster Schreiber, Natascha M.; Illingworth, Garth D.; Labbé, Ivo; Rudnick, Gregory

    2009-11-01

    We present optical spectroscopic follow-up of a sample of distant red galaxies (DRGs) with K_s,Vega^tot < 22.5, selected by (J - K)_Vega > 2.3, in the Hubble Deep Field South (HDFS), the MS 1054-03 field, and the Chandra Deep Field South (CDFS). Spectroscopic redshifts were obtained for 15 DRGs. Only two out of 15 DRGs are located at z < 2, suggesting a high efficiency of selecting high-redshift sources. From other spectroscopic surveys in the CDFS targeting intermediate- to high-redshift populations selected with different criteria, we find spectroscopic redshifts for a further 30 DRGs. We use the sample of spectroscopically confirmed DRGs to establish the high quality (scatter in Δz/(1 + z) of ~0.05) of their photometric redshifts in the considered deep fields, as derived with EAZY. Combining the spectroscopic and photometric redshifts, we find that 74% of DRGs with K_s,Vega^tot < 22.5 lie at z > 2. The combined spectroscopic and photometric sample is used to analyze the distinct intrinsic and observed properties of DRGs at z < 2 and z > 2. In our photometric sample to K_s,Vega^tot < 22.5, low-redshift DRGs are brighter in Ks than high-redshift DRGs by 0.7 mag, and more extincted by 1.2 mag in A_V. Our analysis shows that the DRG criterion selects galaxies with different properties at different redshifts. Such biases can be largely avoided by selecting galaxies based on their rest-frame properties, which requires very good multi-band photometry and high-quality photometric redshifts.

  11. A probabilistic approach to aircraft design emphasizing stability and control uncertainties

    NASA Astrophysics Data System (ADS)

    Delaurentis, Daniel Andrew

    In order to address identified deficiencies in current approaches to aerospace systems design, a new method has been developed. This new method for design is based on the premise that design is a decision making activity, and that deterministic analysis and synthesis can lead to poor, or misguided decision making. This is due to a lack of disciplinary knowledge of sufficient fidelity about the product, to the presence of uncertainty at multiple levels of the aircraft design hierarchy, and to a failure to focus on overall affordability metrics as measures of goodness. Design solutions are desired which are robust to uncertainty and are based on the maximum knowledge possible. The new method represents advances in the two following general areas. 1. Design models and uncertainty. The research performed completes a transition from a deterministic design representation to a probabilistic one through a modeling of design uncertainty at multiple levels of the aircraft design hierarchy, including: (1) Consistent, traceable uncertainty classification and representation; (2) Concise mathematical statement of the Probabilistic Robust Design problem; (3) Variants of the Cumulative Distribution Functions (CDFs) as decision functions for Robust Design; (4) Probabilistic Sensitivities which identify the most influential sources of variability. 2. Multidisciplinary analysis and design. Imbedded in the probabilistic methodology is a new approach for multidisciplinary design analysis and optimization (MDA/O), employing disciplinary analysis approximations formed through statistical experimentation and regression. These approximation models are a function of design variables common to the system level as well as other disciplines. For aircraft, it is proposed that synthesis/sizing is the proper avenue for integrating multiple disciplines. Research hypotheses are translated into a structured method, which is subsequently tested for validity. 
Specifically, the implementation involves the study of the relaxed static stability technology for a supersonic commercial transport aircraft. The probabilistic robust design method is exercised, resulting in a series of robust design solutions based on different interpretations of "robustness". Insightful results are obtained, and the ability of the method to expose trends in the design space is noted as a key advantage.
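The idea of using CDFs as decision functions for robust design can be made concrete with a small Monte Carlo sketch (the cost model and all numbers below are invented for illustration): rather than comparing designs by mean cost alone, compare the probability each design meets an affordability target, read directly off the empirical CDF.

```python
import random

def empirical_cdf_at(samples, x):
    """Empirical CDF value P(X <= x) from Monte Carlo samples."""
    return sum(1 for s in samples if s <= x) / len(samples)

def cost(design_margin, noise):
    # Hypothetical affordability metric: nominal cost plus a penalty
    # that grows when the uncertain disturbance exceeds the margin
    # the design carries against it.
    return 100.0 + max(0.0, noise - design_margin) * 50.0

random.seed(1)
noises = [random.gauss(0.0, 1.0) for _ in range(20000)]

# A "robust" choice maximizes the CDF at the cost target, i.e. the
# probability of meeting it, rather than minimizing mean cost alone.
target = 120.0
p_meet_a = empirical_cdf_at([cost(0.5, n) for n in noises], target)
p_meet_b = empirical_cdf_at([cost(1.5, n) for n in noises], target)
```

Design B carries more margin against the uncertainty, so its cost CDF evaluated at the target is higher, which is the sense in which CDF-based decision functions rank designs.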

  12. Relative congener scaling of Polychlorinated dibenzo-p-dioxins and dibenzofurans to estimate building fire contributions in air, surface wipes, and dust samples.

    PubMed

    Pleil, Joachim D; Lorber, Matthew N

    2007-11-01

    The United States Environmental Protection Agency collected ambient air samples in lower Manhattan for about 9 months following the September 11, 2001 World Trade Center (WTC) attacks. Measurements were made of a host of airborne contaminants including volatile organic compounds, polycyclic aromatic hydrocarbons, asbestos, lead, and other contaminants of concern. The present study focuses on the broad class of polychlorinated dibenzo-p-dioxins (CDDs) and dibenzofurans (CDFs) with specific emphasis on the 17 CDD/CDF congeners that exhibit mammalian toxicity. This work is a statistical study comparing the internal patterns of CDD/CDFs using data from an unambiguous fire event (WTC) and other data sets to help identify their sources. A subset of 29 samples all taken between September 16 and October 31, 2001 were treated as a basis set known to be heavily impacted by the WTC building fire source. A second basis set was created using data from Los Angeles and Oakland, CA as published by the California Air Resources Board (CARB) and treated as the archetypical background pattern for CDD/CDFs. The CARB data had a congener profile appearing similar to background air samples from different locations in America and around the world and in different matrices, such as background soils. Such disparate data would normally be interpreted with a qualitative pattern recognition based on congener bar graphs or other forms of factor or cluster analysis that group similar samples together graphically. The procedure developed here employs aspects of those statistical methods to develop a single continuous output variable per sample. Specifically, a form of variance structure-based cluster analysis is used to group congeners within samples to reduce collinearity in the basis sets, new variables are created based on these groups, and multivariate regression is applied to the reduced variable set to determine a predictive equation. 
This equation predicts a value for an output variable, OPT: the predicted value of OPT is near zero (0.00) for a background congener profile and near one (1.00) for the profile characterized by the WTC air profile. Although this empirical method is calibrated with relatively small sets of airborne samples, it is shown to be generalizable to other WTC, fire source, and background air samples as well as other sample matrices including soils, window films and other dust wipes, and bulk dusts. However, given the limited data set examined, the method does not allow further discrimination between the WTC data and the other fire sources. This type of analysis is demonstrated to be useful for complex trace-level data sets with limited data and some below-detection entries.
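The group-then-regress construction of the continuous OPT variable can be sketched with synthetic stand-in data (the congener profiles, the fixed grouping slices, and the group count below are all invented for illustration; the paper derives its groups from a variance-structure cluster analysis):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in profiles: 17 congener fractions per sample,
# drawn around a "background" pattern and a distinct "fire" pattern.
background = rng.dirichlet(np.ones(17), 30)
fire = rng.dirichlet(np.linspace(0.2, 3.0, 17), 30)

def group(profiles):
    # Reduce collinearity by summing congeners into a few groups;
    # fixed slices here stand in for the cluster-derived grouping.
    return np.column_stack([profiles[:, :6].sum(axis=1),
                            profiles[:, 6:12].sum(axis=1),
                            profiles[:, 12:].sum(axis=1)])

X = np.vstack([group(background), group(fire)])
X = np.column_stack([np.ones(len(X)), X])   # intercept column
y = np.r_[np.zeros(30), np.ones(30)]        # 0 = background, 1 = fire

# Multivariate regression on the reduced variables gives the
# predictive equation; fitted values play the role of OPT.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
opt = X @ beta   # near 0 for background-like, near 1 for fire-like
```

A new sample's congener profile would be grouped the same way and pushed through the fitted equation to obtain its OPT score.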

  13. The Chandra Deep Field-South Survey: 7 Ms Source Catalogs

    NASA Technical Reports Server (NTRS)

    Luo, B.; Brandt, W. N.; Xue, Y. Q.; Lehmer, B.; Alexander, D. M.; Bauer, F. E.; Vito, F.; Yang, G.; Basu-Zych, A. R.; Comastri, A.

    2016-01-01

    We present X-ray source catalogs for the approx. 7 Ms exposure of the Chandra Deep Field-South (CDF-S), which covers a total area of 484.2 arcmin². Utilizing WAVDETECT for initial source detection and ACIS Extract for photometric extraction and significance assessment, we create a main source catalog containing 1008 sources that are detected in up to three X-ray bands: 0.5-7.0 keV, 0.5-2.0 keV, and 2-7 keV. A supplementary source catalog is also provided, including 47 lower-significance sources that have bright (Ks ≤ 23) near-infrared counterparts. We identify multiwavelength counterparts for 992 (98.4%) of the main-catalog sources, and we collect redshifts for 986 of these sources, including 653 spectroscopic redshifts and 333 photometric redshifts. Based on the X-ray and multiwavelength properties, we identify 711 active galactic nuclei (AGNs) from the main-catalog sources. Compared to the previous approx. 4 Ms CDF-S catalogs, 291 of the main-catalog sources are new detections. We have achieved unprecedented X-ray sensitivity with average flux limits over the central approx. 1 arcmin² region of 1.9 × 10^-17, 6.4 × 10^-18, and 2.7 × 10^-17 erg cm^-2 s^-1 in the three X-ray bands, respectively. We provide cumulative number-count measurements, observing for the first time that normal galaxies start to dominate the X-ray source population at the faintest 0.5-2.0 keV flux levels. The highest X-ray source density reaches approx. 50,500 deg^-2, and 47% ± 4% of these sources are AGNs (approx. 23,900 deg^-2).

  14. The Evolution of Normal Galaxy X-Ray Emission Through Cosmic History: Constraints from the 6 MS Chandra Deep Field-South

    NASA Technical Reports Server (NTRS)

    Lehmer, B. D.; Basu-Zych, A. R.; Mineo, S.; Brandt, W. N.; Eufrasio, R. T.; Fragos, T.; Hornschemeier, A. E.; Luo, B.; Xue, Y. Q.; Bauer, F. E.

    2016-01-01

    We present measurements of the evolution of normal-galaxy X-ray emission from z ≈ 0-7 using local galaxies and galaxy samples in the ≈6 Ms Chandra Deep Field-South (CDF-S) survey. The majority of the CDF-S galaxies are observed at rest-frame energies above 2 keV, where the emission is expected to be dominated by X-ray binary (XRB) populations; however, hot gas is expected to provide small contributions to the observed-frame <1 keV emission at z < 1. We show that a single scaling relation between X-ray luminosity (L_X) and star-formation rate (SFR), as often adopted in the literature, is insufficient for characterizing the average X-ray emission at all redshifts. We establish that scaling relations involving not only SFR, but also stellar mass and redshift, provide significantly improved characterizations of the average X-ray emission from normal galaxy populations at z ≈ 0-7. We further provide the first empirical constraints on the redshift evolution of X-ray emission from both low-mass XRB (LMXB) and high-mass XRB (HMXB) populations and their scalings with stellar mass and SFR, respectively. We find L_2-10 keV(LMXB)/M* ∝ (1+z)^(2-3) and L_2-10 keV(HMXB)/SFR ∝ (1+z), and show that these relations are consistent with XRB population-synthesis model predictions, which attribute the increase of the LMXB and HMXB scaling relations with redshift to declining host-galaxy stellar ages and metallicities, respectively. We discuss how emission from XRBs could provide an important source of heating to the intergalactic medium in the early universe, exceeding that of active galactic nuclei.
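The two-component scaling relation described above has a simple functional form that can be evaluated directly. A sketch, with round placeholder normalizations and slopes chosen purely for illustration (not the paper's fitted parameters):

```python
def normal_galaxy_lx(sfr, mstar, z, log_alpha=29.4, gamma=2.0,
                     log_beta=39.3, delta=1.3):
    """Sketch of a two-component XRB scaling relation of the form

        L_X = alpha * (1 + z)^gamma * M_star + beta * (1 + z)^delta * SFR

    i.e. an LMXB term scaling with stellar mass (Msun) and an HMXB term
    scaling with SFR (Msun/yr), each with its own redshift evolution.
    All coefficient values here are illustrative placeholders.
    """
    lmxb = 10.0 ** log_alpha * (1.0 + z) ** gamma * mstar
    hmxb = 10.0 ** log_beta * (1.0 + z) ** delta * sfr
    return lmxb + hmxb  # rest-frame 2-10 keV luminosity, erg/s
```

At fixed mass and SFR the predicted luminosity rises with redshift, which is the qualitative behavior the measured (1+z) scalings encode.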

  15. The Great Observatories Origins Deep Survey (GOODS): Overview and Status

    NASA Astrophysics Data System (ADS)

    Hook, R. N.; GOODS Team

    2002-12-01

    GOODS is a very large project to gather deep imaging data and spectroscopic followup of two fields, the Hubble Deep Field North (HDF-N) and the Chandra Deep Field South (CDF-S), with both space- and ground-based instruments to create an extensive multiwavelength public data set for community research on the distant Universe. GOODS includes a SIRTF Legacy Program (PI: Mark Dickinson) and a Hubble Treasury Program of ACS imaging (PI: Mauro Giavalisco). The ACS imaging was also optimized for the detection of high-z supernovae, which are being followed up by a further target-of-opportunity Hubble GO Program (PI: Adam Riess). The bulk of the CDF-S ground-based data presently available comes from an ESO Large Programme (PI: Catherine Cesarsky) which includes both deep imaging and multi-object followup spectroscopy. This is currently complemented in the South by additional CTIO imaging. Currently available HDF-N ground-based data forming part of GOODS include NOAO imaging. Although the SIRTF part of the survey will not begin until later in the year, the ACS imaging is well advanced and there is also a huge body of complementary ground-based imaging and some follow-up spectroscopy which is already publicly available. We summarize the current status of GOODS, give an overview of the data products currently available, and present the timescales for the future. Many early science results from the survey are presented in other GOODS papers at this meeting. Support for the HST GOODS program presented here and in companion abstracts was provided by NASA through grant number GO-9425 from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555.

  16. FracFit: A Robust Parameter Estimation Tool for Anomalous Transport Problems

    NASA Astrophysics Data System (ADS)

    Kelly, J. F.; Bolster, D.; Meerschaert, M. M.; Drummond, J. D.; Packman, A. I.

    2016-12-01

    Anomalous transport cannot be adequately described with classical Fickian advection-dispersion equations (ADE). Rather, fractional calculus models may be used, which capture non-Fickian behavior (e.g. skewness and power-law tails). FracFit is a robust parameter estimation tool based on space- and time-fractional models used to model anomalous transport. Currently, four fractional models are supported: 1) space-fractional advection-dispersion equation (sFADE), 2) time-fractional dispersion equation with drift (TFDE), 3) fractional mobile-immobile equation (FMIE), and 4) tempered fractional mobile-immobile equation (TFMIE); additional models may be added in the future. Model solutions using pulse initial conditions and continuous injections are evaluated using stable distribution PDFs and CDFs or subordination integrals. Parameter estimates are extracted from measured breakthrough curves (BTCs) using a weighted nonlinear least squares (WNLS) algorithm. Optimal weights for BTCs for pulse initial conditions and continuous injections are presented, facilitating the estimation of power-law tails. Two sample applications are analyzed: 1) continuous injection laboratory experiments using natural organic matter and 2) pulse injection BTCs in the Selke river. Model parameters are compared across models and goodness-of-fit metrics are presented, assisting model evaluation. The sFADE and time-fractional models are compared using space-time duality (Baeumer et al., 2009), which links the two paradigms.
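The role of weighting in recovering power-law tails can be illustrated with a much-reduced version of the fitting step: a weighted least-squares fit of a late-time tail c(t) ≈ A t^(-alpha), which is the straight-line signature (in log-log space) that time-fractional models predict. This is a sketch, not FracFit's actual WNLS algorithm:

```python
import numpy as np

def fit_powerlaw_tail(t, c, weights=None):
    """Weighted least-squares fit of log c = log A - alpha * log t.

    Weights let the low-concentration tail samples count as much as
    the peak, which is what makes power-law tail slopes recoverable.
    Returns (A, alpha).
    """
    t = np.asarray(t, dtype=float)
    c = np.asarray(c, dtype=float)
    w = np.ones_like(t) if weights is None else np.asarray(weights, float)
    X = np.column_stack([np.ones_like(t), np.log(t)])
    sw = np.sqrt(w)
    # Solve the weighted normal equations via lstsq on scaled rows.
    coef, *_ = np.linalg.lstsq(sw[:, None] * X, sw * np.log(c), rcond=None)
    log_A, slope = coef
    return np.exp(log_A), -slope

# Recover alpha from an exact synthetic tail c(t) = 2 t^(-1.5).
t = np.linspace(10.0, 100.0, 50)
A, alpha = fit_powerlaw_tail(t, 2.0 * t ** -1.5)
```

On real BTCs the choice of weights matters precisely because the tail samples are orders of magnitude below the peak.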

  17. Dioxins and furans formation in pilot incineration tests of sewage sludge spiked with organic chlorine.

    PubMed

    Mininni, Giuseppe; Sbrilli, Andrea; Guerriero, Ettore; Rotatori, Mauro

    2004-03-01

    The factors affecting polychlorinated dibenzo-p-dioxins (PCDD) and polychlorinated dibenzofurans (PCDF) formation were studied in sewage sludge incineration tests carried out on a demonstrative plant. The plant includes a circulating fluidised bed furnace (FBF) and a rotary kiln furnace (RKF), operating alternatively. During the tests sewage sludge was spiked with chlorinated hydrocarbons and the operating parameters of the afterburning chamber were varied. PCDD/F were sampled in each test before the bag filter, thus collecting the above contaminants before abatement systems. From the tests it appeared that PCDD/F were always produced in more abundance in the tests carried out by FBF than by RKF. The higher PCDD/F concentrations in the tests by FBF were reached when sewage sludge was spiked with a high dosage of a surrogate organic mixture of chlorinated hydrocarbons and when the afterburning chamber was used only as transit equipment with the burner off. The distribution of the different PCDD/F homologues was compared. P5CDFs were generally the prevalent fraction, with very few exceptions for the tests by RKF at high temperature of the afterburning chamber. As for FBF tests, it was found that the PCDD/F homologue profile depends on the afterburning chamber temperature.

  18. The Chandra Deepest Fields in the Infrared: Making the Connection between Normal Galaxies and AGN

    NASA Astrophysics Data System (ADS)

    Grogin, N. A.; Ferguson, H. C.; Dickinson, M. E.; Giavalisco, M.; Mobasher, B.; Padovani, P.; Williams, R. E.; Chary, R.; Gilli, R.; Heckman, T. M.; Stern, D.; Winge, C.

    2001-12-01

    Within each of the two Chandra Deepest Fields (CDFs), there are ~10'x15' regions targeted for non-proprietary, deep SIRTF 3.6-24 μm imaging as part of the Great Observatories Origins Deep Survey (GOODS) Legacy program. In advance of the SIRTF observations, the GOODS team has recently begun obtaining non-proprietary, deep ground-based optical and near-IR imaging and spectroscopy over these regions, which contain virtually all of the current ≈1 Msec CXO coverage in the CDF North and much of the ≈1 Msec coverage in the CDF South. In particular, the planned depth of the near-IR imaging (J_AB ~ 25.3; H_AB ~ 24.8; K_AB ~ 24.4) combined with the deep Chandra data can allow us to trace the evolutionary connection between normal galaxies, starbursts, and AGN out to z ~ 1 and beyond. We describe our CDF Archival program, which is integrating these GOODS-supporting observations together with the CDF archival data and other publicly available datasets in these regions to create a multi-wavelength deep imaging and spectroscopic database available to the entire community. We highlight progress toward near-term science goals of this program, including: (a) pushing constraints on the redshift distribution and spectral-energy distributions of the faintest X-ray sources to the deepest possible levels via photometric redshifts; and (b) better characterizing the heavily obscured and high-redshift populations via both a near-IR search for optically undetected CDF X-ray sources and X-ray stacking analyses on the CXO-undetected EROs in these fields.

  19. ArrayInitiative - a tool that simplifies creating custom Affymetrix CDFs

    PubMed Central

    2011-01-01

    Background Probes on a microarray represent a frozen view of a genome and are quickly outdated when new sequencing studies extend our knowledge, resulting in significant measurement error when analyzing any microarray experiment. There are several bioinformatics approaches to improve probe assignments, but without in-house programming expertise, standardizing these custom array specifications as a usable file (e.g. as Affymetrix CDFs) is difficult, owing mostly to the complexity of the specification file format. However, without correctly standardized files there is a significant barrier for testing competing analysis approaches since this file is one of the required inputs for many commonly used algorithms. The need to test combinations of probe assignments and analysis algorithms led us to develop ArrayInitiative, a tool for creating and managing custom array specifications. Results ArrayInitiative is a standalone, cross-platform, rich client desktop application for creating correctly formatted, custom versions of manufacturer-provided (default) array specifications, requiring only minimal knowledge of the array specification rules and file formats. Users can import default array specifications, import probe sequences for a default array specification, design and import a custom array specification, export any array specification to multiple output formats, export the probe sequences for any array specification and browse high-level information about the microarray, such as version and number of probes. The initial release of ArrayInitiative supports the Affymetrix 3' IVT expression arrays we currently analyze, but as an open source application, we hope that others will contribute modules for other platforms. Conclusions ArrayInitiative allows researchers to create new array specifications, in a standard format, based upon their own requirements. This makes it easier to test competing design and analysis strategies that depend on probe definitions. 
Since the custom array specifications are easily exported to the manufacturer's standard format, researchers can analyze these customized microarray experiments using established software tools, such as those available in Bioconductor. PMID:21548938

  20. Correlation Between CXB and CIB: The Nature of CIB Fluctuations

    NASA Astrophysics Data System (ADS)

    Kashlinsky, Alexander

    2011-09-01

    We will analyze the 4 Ms CDF-S and 2 Ms CDF-N data by cross-correlating them with the maps of source-subtracted Cosmic Infrared Background (CIB) fluctuations from Spitzer/IRAC. This will provide important information about the nature of the sources contributing to these CIB fluctuations. We will carefully subtract the X-ray background, construct a common mask for the X-ray and IRAC CIB maps, and compute the cross- and auto-correlations. Our pilot study demonstrates that this measurement is feasible and would lead to conclusive results. The results will enable us to estimate the relative contributions to the recently discovered CIB fluctuations from accreting sources, such as black holes (significant cross-correlations), and from sources emitting via stellar nucleosynthesis.
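The masked cross-correlation step can be sketched with a simple Fourier-space estimator. A minimal version (zero-filling masked pixels after mean removal; a real analysis would additionally correct for the mask's mode coupling and bin the 2-D power into a radial profile):

```python
import numpy as np

def cross_power_2d(map_a, map_b, mask):
    """Masked 2-D cross-power estimate between two background maps.

    Sources are assumed already subtracted; a common mask (as in the
    abstract) is applied to both maps. Identical inputs return the
    auto-power, so the same routine gives cross- and auto-correlations.
    """
    mask = np.asarray(mask, dtype=bool)
    # Remove the unmasked-pixel mean, then zero-fill masked pixels.
    a = np.where(mask, map_a - map_a[mask].mean(), 0.0)
    b = np.where(mask, map_b - map_b[mask].mean(), 0.0)
    fa, fb = np.fft.fft2(a), np.fft.fft2(b)
    return (fa * np.conj(fb)).real / mask.sum()
```

A significant cross-power between the X-ray and CIB maps is the signature of accreting sources contributing to the CIB fluctuations.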

  1. Identifications and Photometric Redshifts of the 2 Ms Chandra Deep Field-South Sources

    NASA Astrophysics Data System (ADS)

    Luo, B.; Brandt, W. N.; Xue, Y. Q.; Brusa, M.; Alexander, D. M.; Bauer, F. E.; Comastri, A.; Koekemoer, A.; Lehmer, B. D.; Mainieri, V.; Rafferty, D. A.; Schneider, D. P.; Silverman, J. D.; Vignali, C.

    2010-04-01

    We present reliable multiwavelength identifications and high-quality photometric redshifts for the 462 X-ray sources in the ≈2 Ms Chandra Deep Field-South (CDF-S) survey. Source identifications are carried out using deep optical-to-radio multiwavelength catalogs, and are then combined to create lists of primary and secondary counterparts for the X-ray sources. We identified reliable counterparts for 442 (95.7%) of the X-ray sources, with an expected false-match probability of ≈ 6.2%; we also selected four additional likely counterparts. The majority of the other 16 X-ray sources appear to be off-nuclear sources, sources associated with galaxy groups and clusters, high-redshift active galactic nuclei (AGNs), or spurious X-ray sources. A likelihood-ratio method is used for source matching, which effectively reduces the false-match probability at faint magnitudes compared to a simple error-circle matching method. We construct a master photometric catalog for the identified X-ray sources including up to 42 bands of UV-to-infrared data, and then calculate their photometric redshifts (photo-z's). High accuracy in the derived photo-z's is accomplished owing to (1) the up-to-date photometric data covering the full spectral energy distributions (SEDs) of the X-ray sources, (2) more accurate photometric data as a result of source deblending for ≈10% of the sources in the infrared bands and a few percent in the optical and near-infrared bands, (3) a set of 265 galaxy, AGN, and galaxy/AGN hybrid templates carefully constructed to best represent all possible SEDs, (4) the Zurich Extragalactic Bayesian Redshift Analyzer used to derive the photo-z's, which corrects the SED templates to best represent the SEDs of real sources at different redshifts and thus improves the photo-z quality. The reliability of the photo-z's is evaluated using the subsample of 220 sources with secure spectroscopic redshifts. 
We achieve an accuracy of |Δz|/(1 + z) ≈ 1% and an outlier [with |Δz|/(1 + z) > 0.15] fraction of ≈1.4% for sources with spectroscopic redshifts. We performed blind tests to derive a more realistic estimate of the photo-z quality for sources without spectroscopic redshifts. We expect there are ≈9% outliers for the relatively brighter sources (R <~ 26), and the outlier fraction will increase to ≈15%-25% for the fainter sources (R >~ 26). The typical photo-z accuracy is ≈6%-7%. The outlier fraction and photo-z accuracy do not appear to have a redshift dependence (for z ≈ 0-4). These photo-z's appear to be the best obtained so far for faint X-ray sources, and they have been significantly (≳50%) improved compared to previous estimates of the photo-z's for the X-ray sources in the ≈2 Ms Chandra Deep Field-North and ≈1 Ms CDF-S.
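The quality figures quoted above follow from two standard statistics on the normalized residual Δz/(1 + z). A short routine computing both; NMAD is used here as the scatter summary (one common convention, which may differ from the paper's exact estimator):

```python
import numpy as np

def photoz_quality(z_spec, z_phot, outlier_cut=0.15):
    """Photo-z outlier fraction and scatter from spec-z/photo-z pairs.

    Residuals are dz = (z_phot - z_spec) / (1 + z_spec); sources with
    |dz| > outlier_cut are counted as outliers, and the scatter is
    summarized by the NMAD estimator, which is robust to outliers.
    """
    z_spec = np.asarray(z_spec, dtype=float)
    z_phot = np.asarray(z_phot, dtype=float)
    dz = (z_phot - z_spec) / (1.0 + z_spec)
    outlier_frac = float(np.mean(np.abs(dz) > outlier_cut))
    nmad = 1.48 * float(np.median(np.abs(dz - np.median(dz))))
    return outlier_frac, nmad
```

Applied to a spectroscopic control sample, these two numbers correspond directly to the "outlier fraction" and "accuracy" quoted in the abstract.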

  2. Compton-thick AGN at high and low redshift

    NASA Astrophysics Data System (ADS)

    Akylas, A.; Georgantopoulos, I.; Corral, A.; Ranalli, P.; Lanzuisi, G.

    2017-10-01

    The most obscured sources detected in X-ray surveys, Compton-thick AGN are of great interest both because they represent the hidden side of accretion and because they may signal the birth of AGN. We analyse NuSTAR serendipitous observations in order to study Compton-thick AGN in the deepest possible ultra-hard band (>10 keV). We compare our results with our Swift/BAT findings in the local Universe, as well as with our results in the CDFS and COSMOS fields. We discuss the comparison with X-ray background synthesis models, finding that a low fraction of Compton-thick sources (about 15 per cent of the obscured population) is compatible with both the 2-10 keV band results and those at harder energies.

  3. UV to IR Luminosities and Dust Attenuation Determined from ~4000 K-selected Galaxies at 1 < z < 3 in the ZFOURGE Survey

    NASA Astrophysics Data System (ADS)

    Forrest, Ben; Tran, Kim-Vy H.; Tomczak, Adam R.; Broussard, Adam; Labbé, Ivo; Papovich, Casey; Kriek, Mariska; Allen, Rebecca J.; Cowley, Michael; Dickinson, Mark; Glazebrook, Karl; van Houdt, Josha; Inami, Hanae; Kacprzak, Glenn G.; Kawinwanichakij, Lalitwadee; Kelson, Daniel; McCarthy, Patrick J.; Monson, Andrew; Morrison, Glenn; Nanayakkara, Themiya; Persson, S. Eric; Quadri, Ryan F.; Spitler, Lee R.; Straatman, Caroline; Tilvi, Vithal

    2016-02-01

    We build a set of composite galaxy spectral energy distributions (SEDs) by de-redshifting and scaling multi-wavelength photometry from galaxies in the ZFOURGE survey, covering the CDFS, COSMOS, and UDS fields. From a sample of ˜4000 Ks-band selected galaxies, we define 38 composite galaxy SEDs that yield continuous low-resolution spectra (R ˜ 45) over the rest-frame range 0.1-4 μm. Additionally, we include far-infrared photometry from the Spitzer Space Telescope and the Herschel Space Observatory to characterize the infrared properties of our diverse set of composite SEDs. From these composite SEDs we analyze the rest-frame UVJ colors, as well as the ratio of IR to UV light (IRX) and the UV slope (β) in the IRX-β dust relation at 1 < z < 3. Blue star-forming composite SEDs show IRX and β values consistent with local relations; dusty star-forming galaxies have considerable scatter, as found for local IR bright sources, but on average appear bluer than expected for their IR fluxes. We measure a tight linear relation between rest-frame UVJ colors and dust attenuation for star-forming composites, providing a direct method for estimating dust content from either (U-V) or (V-J) rest-frame colors for star-forming galaxies at intermediate redshifts. This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile.

  4. An investigation of the cosmic diffuse X-ray background

    NASA Astrophysics Data System (ADS)

    John, Tomykkutty Velliyedathu

    2016-03-01

    The cosmic diffuse X-ray background (CXB), second only to the cosmic microwave background (CMB) in prominence, has challenged astrophysicists ever since its serendipitous discovery in 1962. In the past five decades, we have made considerable progress unraveling its mysterious origins. Nevertheless, precise identification of its various components and their individual contributions still remains a puzzling task. The bulk of the CXB comes from the integrated flux of the most luminous astronomical objects, active galactic nuclei (AGN), as well as the emission from starburst and normal galaxies, and can account for most of the emission above 1 keV. In the energy range below 1 keV, several components can be identified besides the dominant extragalactic component. While two thermal components, one at about one million K and the other at about 2.3 million K, adequately account for the emission from hot gas in collisional ionization equilibrium, solar wind charge exchange (SWCX) makes a substantial contribution to the soft X-ray background (SXRB). One of the biggest challenges is to separate the contributions of individual components. This is made difficult by the fact that the spectral structure of all the Galactic components is similar. Shadow experiments have been used to discriminate the various constituents; however, these have only limited use owing to their dependence on estimates of cloud parameters. The best way to make reliable inferences on the contributions of diffuse X-ray background (DXB) components is to apply good models to valid data with high statistics. With this in mind, for this work we selected high-quality data from a well-surveyed sky direction, the Chandra Deep Field South (CDF-S), with 4 Ms of observing time, analyzed them using several models, and derived the important parameters for the various DXB constituents, obtaining very good constraints. 
In addition, we used the same data, spread over a period of nine years, to make a systematic analysis of the temporal variation of heliospheric SWCX. Finally, using the results of the DXB analysis we extracted the spectra of the Chandra point sources of the CDF-S and obtained important information about the spectral parameters for the different source types.

  5. Chandra and the VLT Jointly Investigate the Cosmic X-Ray Background

    NASA Astrophysics Data System (ADS)

    2001-03-01

    Summary: Important scientific advances often happen when complementary investigational techniques are brought together. In the present case, X-ray and optical/infrared observations with some of the world's foremost telescopes have provided the crucial information needed to solve a 40-year-old cosmological riddle. Very detailed observations of a small field in the southern sky have recently been carried out, with the space-based NASA Chandra X-Ray Observatory as well as with several ground-based ESO telescopes, including the Very Large Telescope (VLT) at the Paranal Observatory (Chile). Together, they have provided the "deepest" combined view at X-ray and visual/infrared wavelengths ever obtained into the distant Universe. The concerted observational effort has already yielded significant scientific results. This is primarily due to the ability to 'identify' most of the X-ray emitting objects detected by the Chandra X-ray Observatory on ground-based optical/infrared images and then to determine their nature and distance by means of detailed (spectral) observations with the VLT. In particular, there is now little doubt that the so-called 'X-ray background', a seemingly diffuse short-wave radiation first detected in 1962, in fact originates in a vast number of powerful black holes residing in the active nuclei of distant galaxies. Moreover, the present investigation has permitted the identification and detailed study of a prime example of a hitherto little-known type of object: a distant, so-called 'Type II Quasar', in which the central black hole is deeply embedded in surrounding gas and dust. These achievements are just the beginning of a most fruitful collaboration between "space" and "ground". It is yet another impressive demonstration of the rapid progress of modern astrophysics, due to the recent emergence of a new generation of extremely powerful instruments. 
PR Photo 09a/01: Images of a small part of the Chandra Deep Field South, obtained with ESO telescopes in three different wavebands. PR Photo 09b/01: A VLT/FORS1 spectrum of a 'Type II Quasar' discovered during this programme. The 'Chandra Deep Field South' and the X-Ray Background. Caption: PR Photo 09a/01 shows optical/infrared images in three wavebands ('Blue', 'Red', 'Infrared') from ESO telescopes of the Type II Quasar CXOCDFS J033229.9-275106 (at the centre), one of the distant X-ray sources identified in the Chandra Deep Field South (CDFS) area during the present study. Technical information about these photos is available below. The 'Chandra Deep Field South (CDFS)' is a small sky area in the southern constellation Fornax (The Oven). It measures about 16 arcmin across, or roughly half the diameter of the full moon. There is unusually little gas and dust within the Milky Way in this direction, and observations towards the distant Universe within this field thus profit from a particularly clear view. That is exactly why this sky area was selected by an international team of astronomers [1] to carry out an ultra-deep survey of X-ray sources with the orbiting Chandra X-Ray Observatory. In order to detect the faintest possible sources, NASA's satellite telescope looked in this direction for an unprecedented total of almost 1 million seconds of exposure time (11.5 days). The main scientific goal of this survey is to understand the nature and evolution of the elusive sources that make up the 'X-ray background'. This diffuse glare in the X-ray sky was discovered by Riccardo Giacconi and his collaborators during a pioneering rocket experiment in 1962. 
The excellent imaging quality of Chandra (the angular resolution is about 1 arcsec) makes it possible to do extremely deep exposures without encountering problems introduced by the "confusion effect". This refers to the overlapping of images of sources that are seen close to each other in the sky and thus are difficult to study individually. Previous X-ray satellites were not able to obtain sufficiently sharp X-ray images, and the earlier deep X-ray surveys therefore suffered severely from this effect. Moreover, Chandra has much better sensitivity at shorter wavelengths (higher energies), which are less affected by obscuration effects. It can therefore better detect faint sources that emit very energetic ("hard") X-rays. X-ray and optical surveys in the Chandra Deep Field South. The one-million-second Chandra observations were completed in December 2000. In parallel, a group of astronomers based at institutes in Europe and the USA (the CDFS team [1]) has been collecting deep images and extensive spectroscopic data with the VLT during the past 2 years (cf. PR Photo 09a/01). Their aim was to 'identify' the Chandra X-ray sources, i.e., to unveil their nature and measure their distances. For the identification of these sources, the team has also made extensive use of the observations that were carried out as part of the comprehensive ESO Imaging Survey Project (EIS). More than 300 X-ray sources were detected in the CDFS by Chandra. A significant fraction of these objects shine so faintly in the optical and near-infrared wavebands that only long-exposure observations with the VLT have been able to detect them. During five observing nights with the FORS1 multi-mode instrument at the 8.2-m VLT ANTU telescope in October and November 2000, the CDFS team was able to identify and obtain spectra of more than one hundred of the X-ray sources registered by Chandra. 
Nature of the X-ray sources. The first results from this study have now confirmed that the 'hard' X-ray background is mainly due to Active Galactic Nuclei (AGN). The observations also reveal that a large fraction of them are of comparatively low brightness (referred to as 'low-luminosity AGN'), heavily enshrouded by dust and located at distances of 8,000-9,000 million light-years (corresponding to a redshift of about 1 and a look-back time of 57% of the age of the Universe [2]). It is generally believed that all these sources are powered by massive black holes at their centres. Previous X-ray surveys missed most of these objects because they were too faint to be observed by the telescopes then available, in particular at short X-ray wavelengths ('hard X-ray photons') where more radiation from the highly active centres is able to pass through the surrounding, heavily absorbing gas and dust clouds. Other types of well-known X-ray sources, e.g., QSOs ('quasars' = high-luminosity AGN) as well as clusters or groups of galaxies, were also detected during these observations. Studies of all classes of objects in the CDFS are also being carried out by several other European groups. This sky field, already a standard reference in the southern hemisphere, will be the subject of several multi-wavelength investigations for many years to come. A prime example will be the Great Observatories Origins Deep Survey (GOODS), which will be carried out by the NASA SIRTF infrared satellite in 2003. Discovery of a distant Type II Quasar. Caption: PR Photo 09b/01 displays the optical spectrum of the distant Type II Quasar CXOCDFS J033229.9-275106 in the Chandra Deep Field South (CDFS), obtained with the FORS1 multi-mode instrument at VLT ANTU. Strong, redshifted emission lines of Hydrogen and ionised Helium, Oxygen, Nitrogen and Carbon are marked. 
Technical information about this photo is available below. One particular X-ray source that was identified with the VLT during the present investigation has attracted much attention: the discovery of a dust-enshrouded quasar (QSO) at very high redshift (z = 3.7, corresponding to a distance of about 12,000 million light-years; [2]), cf. PR Photo 09a/01 and PR Photo 09b/01. It is the first very distant representative of this elusive class of objects (referred to as 'Type II Quasars'), which are believed to account for approximately 90% of the black-hole-powered quasars in the distant Universe. The 'sum' of the identified Chandra X-ray sources in the CDFS was found to match both the intensity and the spectral properties of the observed X-ray background. This important result is a significant step forward towards the definitive resolution of this long-standing cosmological problem. Naturally, ESO astronomer Piero Rosati and his colleagues are thrilled: "It is clearly the combination of the new and detailed Chandra X-ray observations and the enormous light-gathering power of the VLT that has been instrumental to this success." However, he says, "the identification of the remaining Chandra X-ray sources will be the next challenge for the VLT since they are extremely faint. This is because they are either heavily obscured by dust or because they are extremely distant." More Information. This Press Release is issued simultaneously with a NASA Press Release (see also the Harvard site). Some of the first results are described in a research paper ("First Results from the X-ray and Optical Survey of the Chandra Deep Field South", available on the web at astro-ph/0007240). More information about science results from the Chandra X-Ray Observatory may be found at: http://asc.harvard.edu/. The optical survey of the CDFS at ESO with the Wide-Field Imager is described in connection with PR Photos 46a-b/99 ('100,000 galaxies at a glance'). 
An image of the Chandra Deep Field South is available at the ESO website on the EIS Image Gallery webpage. Notes: [1] The Chandra Team is led by Riccardo Giacconi (Associated Universities Inc. [AUI], Washington, USA) and includes: Piero Rosati, Jacqueline Bergeron, Roberto Gilmozzi, Vincenzo Mainieri, Peter Shaver (European Southern Observatory [ESO]), Paolo Tozzi, Mario Nonino, Stefano Borgani (Osservatorio Astronomico, Trieste, Italy), Guenther Hasinger, Gyula Szokoly (Astrophysical Institute Potsdam [AIP], Germany), Colin Norman, Roberto Gilli, Lisa Kewley, Wei Zheng, Andrew Zirm, JungXian Wang (Johns Hopkins University [JHU], Baltimore, USA), Ken Kellerman (National Radio Astronomy Observatory [NRAO], Charlottesville, USA), Ethan Schreier, Anton Koekemoer and Norman Grogin (Space Telescope Science Institute [STScI], Baltimore, USA). [2] In astronomy, the redshift denotes the fraction by which the lines in the spectrum of an object are shifted towards longer wavelengths. The observed redshift of a distant galaxy or quasar gives a direct estimate of the apparent recession velocity as caused by the universal expansion. Since the expansion rate increases with the distance, the velocity is itself a function (the Hubble relation) of the distance to the object. Redshifts of 1 and 3.7 correspond to when the Universe was about 43% and 12% of its present age. The distances indicated in this Press Release depend on the cosmological model chosen and are based on an age of 19,000 million years. Technical information about the photos: PR Photo 09a/01 shows B-, R- and I-band images of a 20 x 20 arcsec² area within the CDFS, centred on the Type II Quasar CXOCDFS J033229.9-275106. They were obtained with the MPG/ESO 2.2-m telescope and the Wide-Field Imager (WFI) at La Silla (B-band; 8 hrs exposure time) and the 8.2-m VLT ANTU telescope with the FORS1 multi-mode instrument at Paranal (R- and I-bands; each 2 hrs exposure). 
The measured magnitudes are R=23.5 and I=22.7. The overlaid contours show the associated Chandra X-ray source (smoothed with a sigma = 1 arcsec gaussian profile). North is up and East is left. The spectrum shown in PR Photo 09b/01 was obtained on November 25, 2000, with VLT ANTU and FORS1 in the multislit mode (150-I grism, 1.2 arcsec slit). The exposure time was 3 hours.

  6. Sensitivity to grid resolution in the ability of a chemical transport model to simulate observed oxidant chemistry under high-isoprene conditions

    NASA Astrophysics Data System (ADS)

    Yu, Karen; Jacob, Daniel J.; Fisher, Jenny A.; Kim, Patrick S.; Marais, Eloise A.; Miller, Christopher C.; Travis, Katherine R.; Zhu, Lei; Yantosca, Robert M.; Sulprizio, Melissa P.; Cohen, Ron C.; Dibb, Jack E.; Fried, Alan; Mikoviny, Tomas; Ryerson, Thomas B.; Wennberg, Paul O.; Wisthaler, Armin

    2016-04-01

    Formation of ozone and organic aerosol in continental atmospheres depends on whether isoprene emitted by vegetation is oxidized by the high-NOx pathway (where peroxy radicals react with NO) or by low-NOx pathways (where peroxy radicals react by alternate channels, mostly with HO2). We used mixed layer observations from the SEAC4RS aircraft campaign over the Southeast US to test the ability of the GEOS-Chem chemical transport model at different grid resolutions (0.25° × 0.3125°, 2° × 2.5°, 4° × 5°) to simulate this chemistry under high-isoprene, variable-NOx conditions. Observations of isoprene and NOx over the Southeast US show a negative correlation, reflecting the spatial segregation of emissions; this negative correlation is captured in the model at 0.25° × 0.3125° resolution but not at coarser resolutions. As a result, less isoprene oxidation takes place by the high-NOx pathway in the model at 0.25° × 0.3125° resolution (54 %) than at coarser resolution (59 %). The cumulative probability distribution functions (CDFs) of NOx, isoprene, and ozone concentrations show little difference across model resolutions and good agreement with observations, while formaldehyde is overestimated at coarse resolution because excessive isoprene oxidation takes place by the high-NOx pathway with high formaldehyde yield. The good agreement of simulated and observed concentration variances implies that smaller-scale non-linearities (urban and power plant plumes) are not important on the regional scale. Correlations of simulated vs. observed concentrations do not improve with grid resolution because finer modes of variability are intrinsically more difficult to capture. Higher model resolution leads to decreased conversion of NOx to organic nitrates and increased conversion to nitric acid, with total reactive nitrogen oxides (NOy) changing little across model resolutions. Model concentrations in the lower free troposphere are also insensitive to grid resolution. 
The overall low sensitivity of modeled concentrations to grid resolution implies that coarse resolution is adequate when modeling continental boundary layer chemistry for global applications.
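The CDF comparison across model resolutions described above can be mimicked with empirical CDFs of any simulated quantity. The sketch below is a generic illustration in plain Python with synthetic placeholder values, not the GEOS-Chem analysis itself; the maximum vertical gap between two empirical CDFs is the familiar KS-style statistic.

```python
# Build empirical CDFs from two sets of samples (e.g. a tracer concentration
# simulated at fine and coarse grid resolution) and measure their largest
# vertical difference. Sample values are synthetic placeholders.
import bisect

def empirical_cdf(samples):
    xs = sorted(samples)
    n = len(xs)
    def cdf(x):
        return bisect.bisect_right(xs, x) / n  # fraction of samples <= x
    return cdf

def max_cdf_difference(a, b):
    """Largest vertical gap between the two empirical CDFs (KS-style)."""
    grid = sorted(set(a) | set(b))
    ca, cb = empirical_cdf(a), empirical_cdf(b)
    return max(abs(ca(x) - cb(x)) for x in grid)

fine = [0.2, 0.4, 0.6, 0.8]    # hypothetical fine-resolution values
coarse = [0.2, 0.4, 0.6, 0.8]  # hypothetical coarse-resolution values
d = max_cdf_difference(fine, coarse)  # 0.0 here: identical distributions
```

A small `d` between resolutions is the quantitative analogue of the abstract's statement that the concentration CDFs "show little difference across model resolutions".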

  7. Modelling of Sub-daily Hydrological Processes Using Daily Time-Step Models: A Distribution Function Approach to Temporal Scaling

    NASA Astrophysics Data System (ADS)

    Kandel, D. D.; Western, A. W.; Grayson, R. B.

    2004-12-01

    Mismatches in scale between the fundamental processes, the model and supporting data are a major limitation in hydrologic modelling. Surface runoff generation via infiltration excess and the process of soil erosion are fundamentally short time-scale phenomena and their average behaviour is mostly determined by the short time-scale peak intensities of rainfall. Ideally, these processes should be simulated using time-steps of the order of minutes to appropriately resolve the effect of rainfall intensity variations. However, sub-daily data support is often inadequate and the processes are usually simulated by calibrating daily (or even coarser) time-step models. Generally process descriptions are not modified but rather effective parameter values are used to account for the effect of temporal lumping, assuming that the effect of the scale mismatch can be counterbalanced by tuning the parameter values at the model time-step of interest. Often this results in parameter values that are difficult to interpret physically. A similar approach is often taken spatially. This is problematic as these processes generally operate or interact non-linearly. This indicates a need for better techniques to simulate sub-daily processes using daily time-step models while still using widely available daily information. A new method applicable to many rainfall-runoff-erosion models is presented. The method is based on temporal scaling using statistical distributions of rainfall intensity to represent sub-daily intensity variations in a daily time-step model. This allows the effect of short time-scale nonlinear processes to be captured while modelling at a daily time-step, which is often attractive due to the wide availability of daily forcing data. The approach relies on characterising the rainfall intensity variation within a day using a cumulative distribution function (cdf). 
This cdf is then modified by various linear and nonlinear processes typically represented in hydrological and erosion models. The statistical description of sub-daily variability is thus propagated through the model, allowing the effects of variability to be captured in the simulations. This results in cdfs of various fluxes, the integration of which over a day gives respective daily totals. Using 42-plot-years of surface runoff and soil erosion data from field studies in different environments from Australia and Nepal, simulation results from this cdf approach are compared with the sub-hourly (2-minute for Nepal and 6-minute for Australia) and daily models having similar process descriptions. Significant improvements in the simulation of surface runoff and erosion are achieved, compared with a daily model that uses average daily rainfall intensities. The cdf model compares well with a sub-hourly time-step model. This suggests that the approach captures the important effects of sub-daily variability while utilizing commonly available daily information. It is also found that the model parameters are more robustly defined using the cdf approach compared with the effective values obtained at the daily scale. This suggests that the cdf approach may offer improved model transferability spatially (to other areas) and temporally (to other periods).
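As a minimal illustration of why the sub-daily intensity distribution matters for the nonlinear fluxes discussed above, the sketch below applies the classic infiltration-excess rule max(i - f_c, 0) to synthetic 2-minute rainfall intensities and integrates to daily totals. All numbers (the intensities and the 10 mm/h capacity) are hypothetical; this is not the authors' model, only a demonstration of the nonlinearity their cdf approach is designed to preserve.

```python
# Daily rainfall and infiltration-excess runoff from sub-daily intensities.
# Runoff is nonlinear in intensity (max(i - capacity, 0)), so a short burst
# above capacity produces runoff that the daily-average intensity cannot.

DT_HOURS = 2.0 / 60.0  # hypothetical 2-minute time-step, in hours

def daily_totals(intensities_mm_per_h, capacity_mm_per_h):
    """Return (daily rainfall, daily infiltration-excess runoff) in mm."""
    rain = sum(i * DT_HOURS for i in intensities_mm_per_h)
    runoff = sum(max(i - capacity_mm_per_h, 0.0) * DT_HOURS
                 for i in intensities_mm_per_h)
    return rain, runoff

# One 2-minute burst at 30 mm/h, then drizzle at 2 mm/h; capacity 10 mm/h.
intensities = [30.0] + [2.0] * 9
rain, runoff = daily_totals(intensities, 10.0)  # runoff > 0 from the burst

# The daily-average intensity (4.8 mm/h) never exceeds capacity, so the
# temporally lumped calculation loses the runoff entirely:
avg = sum(intensities) / len(intensities)
_, runoff_avg = daily_totals([avg] * len(intensities), 10.0)  # 0.0
```

Propagating the full intensity distribution (here the raw samples; in the paper, its cdf) through the nonlinearity is what recovers the runoff that daily averaging destroys.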

  8. Heavily Obscured AGN: An Ideal Laboratory To Study The Early Co-Evolution of Galaxies And Black Holes

    NASA Astrophysics Data System (ADS)

    Circosta, Chiara; Vignali, C.; Gilli, R.; Feltre, A.; Vito, F.

    2016-10-01

    Obscured AGN are a crucial ingredient to understand the full growth history of supermassive black holes and their coevolution with their host galaxies, since they constitute the bulk of the BH accretion. In the distant Universe, many of them are hosted by submillimeter galaxies (SMGs), characterized by a high production of stars and a very fast consumption of gas. Therefore, the analysis of this class of objects is fundamental to investigate the role of the ISM in the early coevolution of galaxies and black holes. We collected a sample of six obscured X-ray selected AGN at z > 2.5 in the CDF-S, detected in the far-IR/submm bands. We performed a multiwavelength analysis in order to characterize their physical properties, as well as those of their host galaxies (e.g. column density, accretion luminosity, stellar mass, SFR, dust and gas mass). I will present the results of the X-ray spectral analysis of these sources based on the 7 Ms Chandra data (the deepest X-ray observation ever carried out on any field) along with their broad-band spectral energy distributions (SEDs), built up using the public UV to far-IR photometry from the CANDELS and Herschel catalogs. By comparing the column density associated with the ISM (estimated by measuring the size of the system) with that obtained from the X-ray data, it is possible to understand whether the ISM in the host galaxy may be able to produce a substantial part of the observed nuclear obscuration.

  9. Spectral Classification of Galaxies at 0.5 <= z <= 1 in the CDFS: The Artificial Neural Network Approach

    NASA Astrophysics Data System (ADS)

    Teimoorinia, H.

    2012-12-01

    The aim of this work is to combine spectral energy distribution (SED) fitting with artificial neural network techniques to assign spectral characteristics to a sample of galaxies at 0.5 < z < 1. The sample is selected from the spectroscopic campaign of the ESO/GOODS-South field, with 142 sources having photometric data from the GOODS-MUSIC catalog covering bands between ~0.4 and 24 μm in 10-13 filters. We use the CIGALE code to fit photometric data to Maraston's synthesis spectra to derive mass, specific star formation rate, and age, as well as the best SED of the galaxies. We use the spectral models presented by Kinney et al. as targets in the wavelength interval ~1200-7500 Å. Then a series of neural networks are trained, with average performance ~90%, to classify the best SED in a supervised manner. We consider the effects of the prominent features of the best SED on the performance of the trained networks and also test networks on the galaxy spectra of Coleman et al., which have a lower resolution than the target models. In this way, we conclude that the trained networks take into account all the features of the spectra simultaneously. Using the method, 105 out of 142 galaxies of the sample are classified with high significance. The locus of the classified galaxies in the three graphs of the physical parameters of mass, age, and specific star formation rate appears consistent with the morphological characteristics of the galaxies.

  10. SPECTRAL CLASSIFICATION OF GALAXIES AT 0.5 ≤ z ≤ 1 IN THE CDFS: THE ARTIFICIAL NEURAL NETWORK APPROACH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teimoorinia, H., E-mail: hteimoo@uvic.ca

    2012-12-01

    The aim of this work is to combine spectral energy distribution (SED) fitting with artificial neural network techniques to assign spectral characteristics to a sample of galaxies at 0.5 < z < 1. The sample is selected from the spectroscopic campaign of the ESO/GOODS-South field, with 142 sources having photometric data from the GOODS-MUSIC catalog covering bands between ~0.4 and 24 μm in 10-13 filters. We use the CIGALE code to fit photometric data to Maraston's synthesis spectra to derive mass, specific star formation rate, and age, as well as the best SED of the galaxies. We use the spectral models presented by Kinney et al. as targets in the wavelength interval ~1200-7500 Å. Then a series of neural networks are trained, with average performance ~90%, to classify the best SED in a supervised manner. We consider the effects of the prominent features of the best SED on the performance of the trained networks and also test networks on the galaxy spectra of Coleman et al., which have a lower resolution than the target models. In this way, we conclude that the trained networks take into account all the features of the spectra simultaneously. Using the method, 105 out of 142 galaxies of the sample are classified with high significance. The locus of the classified galaxies in the three graphs of the physical parameters of mass, age, and specific star formation rate appears consistent with the morphological characteristics of the galaxies.

  11. MicroRNA319-regulated TCPs interact with FBHs and PFT1 to activate CO transcription and control flowering time in Arabidopsis.

    PubMed

    Liu, Jie; Cheng, Xiliu; Liu, Pan; Li, Dayong; Chen, Tao; Gu, Xiaofeng; Sun, Jiaqiang

    2017-05-01

    The transcription factor CONSTANS (CO) is a central component that promotes Arabidopsis flowering under long-day conditions (LDs). Here, we show that the microRNA319-regulated TEOSINTE BRANCHED/CYCLOIDEA/PCF (TCP) transcription factors promote photoperiodic flowering through binding to the CO promoter and activating its transcription. Meanwhile, these TCPs directly interact with the flowering activators FLOWERING BHLH (FBHs), but not the flowering repressors CYCLING DOF FACTORs (CDFs), to additively activate CO expression. Furthermore, both the TCPs and FBHs physically interact with the flowering time regulator PHYTOCHROME AND FLOWERING TIME 1 (PFT1) to facilitate CO transcription. Our findings provide evidence that a set of transcriptional activators act directly and additively at the CO promoter to promote CO transcription, and establish a molecular mechanism underlying the regulation of photoperiodic flowering time in Arabidopsis.

  12. MicroRNA319-regulated TCPs interact with FBHs and PFT1 to activate CO transcription and control flowering time in Arabidopsis

    PubMed Central

    Liu, Pan; Li, Dayong; Chen, Tao; Gu, Xiaofeng; Sun, Jiaqiang

    2017-01-01

    The transcription factor CONSTANS (CO) is a central component that promotes Arabidopsis flowering under long-day conditions (LDs). Here, we show that the microRNA319-regulated TEOSINTE BRANCHED/CYCLOIDEA/PCF (TCP) transcription factors promote photoperiodic flowering through binding to the CO promoter and activating its transcription. Meanwhile, these TCPs directly interact with the flowering activators FLOWERING BHLH (FBHs), but not the flowering repressors CYCLING DOF FACTORs (CDFs), to additively activate CO expression. Furthermore, both the TCPs and FBHs physically interact with the flowering time regulator PHYTOCHROME AND FLOWERING TIME 1 (PFT1) to facilitate CO transcription. Our findings provide evidence that a set of transcriptional activators act directly and additively at the CO promoter to promote CO transcription, and establish a molecular mechanism underlying the regulation of photoperiodic flowering time in Arabidopsis. PMID:28558040

  13. Environmental applications for the analysis of chlorinated dibenzo-p-dioxins and dibenzofurans using mass spectrometry/mass spectrometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reiner, E.J.; Schellenberg, D.H.; Taguchi, V.Y.

    1991-01-01

    A mass spectrometry/mass spectrometry-multiple reaction monitoring (MS/MS-MRM) technique for the analysis of all tetra- through octachlorinated dibenzo-p-dioxins (Cl_xDD, x = 4-8) and dibenzofurans (Cl_xDF, x = 4-8) has been developed at the Ministry of the Environment (MOE) utilizing a triple quadrupole mass spectrometer. Optimization of instrumental parameters using the analyte of interest in a direct insertion probe (DIP) resulted in sensitivities approaching those obtainable by high-resolution mass spectrometric (HRMS) methods. All congeners of dioxins and furans were detected in the femtogram range. Results on selected samples indicated that for some matrices, fewer chemical interferences were observed by MS/MS than by HRMS. The technique used to optimize the instrument for chlorinated dibenzo-p-dioxins (CDDs) and chlorinated dibenzofurans (CDFs) analysis is adaptable to other analytes.

  14. Framework for reanalysis of publicly available Affymetrix® GeneChip® data sets based on functional regions of interest.

    PubMed

    Saka, Ernur; Harrison, Benjamin J; West, Kirk; Petruska, Jeffrey C; Rouchka, Eric C

    2017-12-06

    Since the introduction of microarrays in 1995, researchers world-wide have used both commercial and custom-designed microarrays for understanding differential expression of transcribed genes. Public databases such as ArrayExpress and the Gene Expression Omnibus (GEO) have made millions of samples readily available. One main drawback to microarray data analysis involves the selection of probes to represent a specific transcript of interest, particularly in light of the fact that transcript-specific knowledge (notably alternative splicing) is dynamic in nature. We therefore developed a framework for reannotating and reassigning probe groups for Affymetrix® GeneChip® technology based on functional regions of interest. This framework addresses three issues of Affymetrix® GeneChip® data analyses: removing nonspecific probes, updating probe target mapping based on the latest genome knowledge and grouping probes into gene, transcript and region-based (UTR, individual exon, CDS) probe sets. Updated gene and transcript probe sets provide more specific analysis results based on current genomic and transcriptomic knowledge. The framework selects unique probes, aligns them to gene annotations and generates a custom Chip Description File (CDF). The analysis reveals only 87% of the Affymetrix® GeneChip® HG-U133 Plus 2 probes uniquely align to the current hg38 human assembly without mismatches. We also tested new mappings on the publicly available data series using rat and human data from GSE48611 and GSE72551 obtained from GEO, and illustrate that functional grouping allows for the subtle detection of regions of interest likely to have phenotypical consequences. Through reanalysis of the publicly available data series GSE48611 and GSE72551, we profiled the contribution of UTR and CDS regions to the gene expression levels globally. 
Comparison between the region-based and gene-based results indicated that the expressed genes detected by the two CDFs are highly consistent, and that region-based results additionally allow detection of changes in transcript formation.
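The regrouping step described above, assigning uniquely aligned probes to gene/region probe sets, can be sketched in a few lines. The tuple schema below is hypothetical, not the framework's actual data structures:

```python
from collections import defaultdict

def group_probes_by_region(alignments):
    """Group uniquely aligned probes into region-based probe sets.

    `alignments` is a hypothetical list of (probe_id, gene, region)
    tuples produced by a prior unique-alignment step; region is e.g.
    "5UTR", "CDS", or "3UTR".
    """
    probe_sets = defaultdict(list)
    for probe_id, gene, region in alignments:
        probe_sets[(gene, region)].append(probe_id)
    return dict(probe_sets)

# toy example: four probes, two genes
alignments = [
    ("p1", "BDNF", "5UTR"),
    ("p2", "BDNF", "CDS"),
    ("p3", "BDNF", "CDS"),
    ("p4", "NGF", "3UTR"),
]
region_sets = group_probes_by_region(alignments)
```

Each (gene, region) key then maps to the probes whose signals are summarized together, which is what makes region-level expression calls possible.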

  15. Ground Clutter as a Monitor of Radar Stability at Kwajalein,RMI

    NASA Technical Reports Server (NTRS)

    Silberstein, David S.; Wolff, David B.; Marks, David A.; Atlas, David; Pippitt, Jason L.

    2007-01-01

There are many applications in which the absolute and day-to-day calibration of radar sensitivity is necessary. This is particularly so in the case of quantitative radar measurements of precipitation. While absolute calibrations can be done periodically using solar radiation, variations that occur between such absolute checks must be tracked to maintain the accuracy of the data. The authors have developed a method for this purpose using the radar on Kwajalein Atoll, which has been used to provide a baseline calibration for control of measurements of rainfall made by the Tropical Rainfall Measuring Mission (TRMM). The method uses echoes from a multiplicity of ground targets. The average clutter echoes at the lowest elevation scan have been found to be remarkably stable from hour to hour, day to day, and month to month, to within better than ±1 dB. They vary significantly only after deliberate system modifications, equipment failure, or unknown causes. A cumulative probability distribution of echo reflectivities (Ze in dBZ) is obtained on a daily basis. This CDF includes both the precipitation and clutter echoes. Because the precipitation echoes at Kwajalein rarely exceed 45 dBZ, selecting an upper percentile of the CDF associated with intense clutter reflectivities permits monitoring of radar stability. The reflectivity level at which the CDF attains 95% is our reference. Daily measurements of the CDFs have been made since August 1999 and have been used to correct the 7+ years of measurements and thus enhance the integrity of the global record of precipitation observed by TRMM. The method also has potential applicability to other ground radar sites.
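The percentile-based monitoring scheme is simple to reproduce. The sketch below uses synthetic data (not Kwajalein measurements) and computes the daily reference as the reflectivity at which the empirical CDF reaches 95%:

```python
import numpy as np

def stability_reference(reflectivities_dbz, percentile=95.0):
    """Reflectivity (dBZ) at which the day's empirical CDF reaches the
    given percentile. Since precipitation rarely exceeds ~45 dBZ, this
    upper percentile is dominated by stable ground clutter echoes and
    therefore tracks the radar calibration."""
    z = np.asarray(reflectivities_dbz, dtype=float)
    return float(np.percentile(z, percentile))

# synthetic day: 90% precipitation echoes below 45 dBZ, 10% strong clutter
rng = np.random.default_rng(0)
day = np.concatenate([rng.uniform(10, 45, 900), rng.uniform(55, 60, 100)])
ref_today = stability_reference(day)
```

Day-to-day drift of this reference beyond the ±1 dB clutter stability would flag a change in system calibration.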

  16. Matching radio catalogues with realistic geometry: application to SWIRE and ATLAS

    NASA Astrophysics Data System (ADS)

    Fan, Dongwei; Budavári, Tamás; Norris, Ray P.; Hopkins, Andrew M.

    2015-08-01

Cross-matching catalogues at different wavelengths is a difficult problem in astronomy, especially when the objects are not point-like. At radio wavelengths, an object can have several components corresponding, for example, to a core and lobes. Considering that not all radio detections correspond to visible or infrared sources, matching these catalogues can be challenging. Traditionally, this is done by eye for better quality, which does not scale to the large data volumes expected from the next generation of radio telescopes. We present a novel automated procedure, using Bayesian hypothesis testing, to achieve reliable associations by explicit modelling of a particular class of radio-source morphology. The new algorithm not only assesses the likelihood of an association between data at two different wavelengths, but also tries to assess whether different radio sources are physically associated, are double-lobed radio galaxies, or are just distinct nearby objects. Application to the Spitzer Wide-Area Infrared Extragalactic and Australia Telescope Large Area Survey CDF-S catalogues shows that this method performs well without human intervention.
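For the simplest two-catalogue case, the Bayesian weighing of an association can be illustrated with a flat-sky Gaussian Bayes factor. This is a simplified stand-in for the paper's full morphology-aware model; priors and sky normalization are omitted:

```python
import math

def bayes_factor(sep_arcsec, sigma1_arcsec, sigma2_arcsec):
    """Flat-sky Bayes factor for the hypothesis that two detections with
    circular Gaussian positional uncertainties sigma1, sigma2 (arcsec)
    are the same source, versus being unrelated."""
    s2 = sigma1_arcsec**2 + sigma2_arcsec**2
    return (2.0 / s2) * math.exp(-sep_arcsec**2 / (2.0 * s2))

b_close = bayes_factor(0.5, 1.0, 1.0)  # separation well within the errors
b_far = bayes_factor(5.0, 1.0, 1.0)    # separation far beyond the errors
```

A large Bayes factor favors association; the full method extends this by also modelling core-lobe geometries so that multi-component radio sources are handled consistently.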

  17. Cucumber metal transport protein MTP8 confers increased tolerance to manganese when expressed in yeast and Arabidopsis thaliana

    PubMed Central

    Migocka, Magdalena; Papierniak, Anna; Maciaszczyk-Dziubińska, Ewa; Poździk, Piotr; Posyniak, Ewelina; Garbiec, Arnold; Filleur, Sophie

    2014-01-01

Cation diffusion facilitator (CDF) proteins are ubiquitous divalent cation transporters that have been proved to be essential for metal homeostasis and tolerance in Archaebacteria, Bacteria, and Eukaryota. In plants, CDFs are designated as metal tolerance proteins (MTPs). Owing to the limited genomic resources available, studies on MTPs in other plants, including cultivated crops, are scarce. Here, the identification and organization of genes encoding members of the MTP family in cucumber are described. The first functional characterization of a cucumber gene encoding a member of the Mn-CDF subgroup of CDF proteins, designated as CsMTP8 based on the highest homology to plant MTP8, is also presented. The expression of CsMTP8 in Saccharomyces cerevisiae led to increased Mn accumulation in yeast cells and fully restored the growth of mutants hypersensitive to Mn under Mn excess. Similarly, the overexpression of CsMTP8 in Arabidopsis thaliana enhanced plant tolerance to high Mn in nutrient media as well as the accumulation of Mn in plant tissues. When fused to green fluorescent protein (GFP), CsMTP8 localized to the vacuolar membrane in both yeast cells and Arabidopsis protoplasts. In cucumber, CsMTP8 was expressed almost exclusively in roots, and the level of gene transcript was markedly up-regulated or reduced under elevated Mn or Mn deficiency, respectively. Taken together, the results suggest that CsMTP8 is an Mn transporter localized in the vacuolar membrane, which participates in the maintenance of Mn homeostasis in cucumber root cells. PMID:25039075

  18. First Spectroscopic Evidence for High Ionization State and Low Oxygen Abundance in Lyα Emitters

    NASA Astrophysics Data System (ADS)

    Nakajima, Kimihiko; Ouchi, Masami; Shimasaku, Kazuhiro; Hashimoto, Takuya; Ono, Yoshiaki; Lee, Janice C.

    2013-05-01

We present results from Keck/NIRSPEC and Magellan/MMIRS follow-up spectroscopy of Lyα emitters (LAEs) at z = 2.2 identified in our Subaru narrowband survey. We successfully detect Hα emission from seven LAEs, and perform a detailed analysis of six LAEs free from active galactic nucleus activity, two of which, CDFS-3865 and COSMOS-30679, have [O II] and [O III] line detections. They are the first [O II]-detected LAEs at high-z, and their [O III]/[O II] ratios and R23-indices provide the first simultaneous determinations of ionization parameter and oxygen abundance for LAEs. CDFS-3865 has a very high ionization parameter (q_ion = 2.5 (+1.7/-0.8) × 10^8 cm s^-1) and a low oxygen abundance (12 + log(O/H) = 7.84 (+0.24/-0.25)), in contrast with the moderate values of other high-z galaxies such as Lyman break galaxies (LBGs). COSMOS-30679 also possesses a relatively high ionization parameter (q_ion = 8 (+10/-4) × 10^7 cm s^-1) and a low oxygen abundance (12 + log(O/H) = 8.18 (+0.28/-0.28)). Both LAEs appear to fall below the mass-metallicity relation of z ~ 2 LBGs. Similarly, a low metallicity of 12 + log(O/H) < 8.4 is independently indicated for typical LAEs from a composite spectrum and the [N II]/Hα index. Such high ionization parameters and low oxygen abundances can be found in local star-forming galaxies, but this extreme local population occupies only ~0.06% of the Sloan Digital Sky Survey spectroscopic galaxy sample, with a number density ~100 times smaller than that of LAEs. With their high ionization parameters and low oxygen abundances, LAEs would represent an early stage of galaxy formation dominated by massive stars in compact star-forming regions. High-q_ion galaxies like LAEs would produce ionizing photons efficiently with a high escape fraction achieved by density-bounded H II regions, which would significantly contribute to cosmic reionization at z > 6. Some of the data presented herein were obtained at the W. M.
Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation. Based in part on data collected at Subaru Telescope, which is operated by the National Astronomical Observatory of Japan.
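The two diagnostics quoted above are simple flux ratios: R23 is a coarse oxygen-abundance indicator, while [O III]/[O II] tracks the ionization parameter. A minimal sketch, with purely illustrative fluxes (not the measured values for CDFS-3865):

```python
def line_diagnostics(f_oii, f_oiii_4959, f_oiii_5007, f_hbeta):
    """R23 index = ([O II]3727 + [O III]4959,5007) / Hbeta and the
    [O III]/[O II] ratio. Flux units cancel in both ratios."""
    r23 = (f_oii + f_oiii_4959 + f_oiii_5007) / f_hbeta
    o32 = (f_oiii_4959 + f_oiii_5007) / f_oii
    return r23, o32

# illustrative fluxes in arbitrary units
r23, o32 = line_diagnostics(1.0, 2.0, 6.0, 1.0)
```

A high [O III]/[O II] ratio at fixed R23 is what points to the high ionization parameters reported for these LAEs.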

  19. Deepest Wide-Field Colour Image in the Southern Sky

    NASA Astrophysics Data System (ADS)

    2003-01-01

LA SILLA CAMERA OBSERVES CHANDRA DEEP FIELD SOUTH. Caption: PR Photo 02a/03 shows a three-colour composite image of the Chandra Deep Field South (CDF-S), obtained with the Wide Field Imager (WFI) camera on the 2.2-m MPG/ESO telescope at the ESO La Silla Observatory (Chile). It was produced by the combination of about 450 images with a total exposure time of nearly 50 hours. The field measures 36 x 34 arcmin²; North is up and East is left. Technical information is available below. The combined efforts of three European teams of astronomers, targeting the same sky field in the southern constellation Fornax (The Furnace), have enabled them to construct a very deep, true-colour image, opening an exceptionally clear view towards the distant universe. The image (PR Photo 02a/03) covers an area somewhat larger than the full moon. It displays more than 100,000 galaxies, several thousand stars and hundreds of quasars. It is based on images with a total exposure time of nearly 50 hours, collected under good observing conditions with the Wide Field Imager (WFI) on the MPG/ESO 2.2m telescope at the ESO La Silla Observatory (Chile) - many of them extracted from the ESO Science Data Archive. The position of this southern sky field was chosen by Riccardo Giacconi (Nobel Laureate in Physics 2002) at a time when he was Director General of ESO, together with Piero Rosati (ESO). It was selected as a sky region towards which the NASA Chandra X-ray satellite observatory, launched in July 1999, would be pointed while carrying out a very long exposure (lasting a total of 1 million seconds, or 278 hours) in order to detect the faintest possible X-ray sources. The field is now known as the Chandra Deep Field South (CDF-S).
The new WFI photo of CDF-S does not reach quite as deep as the available images of the "Hubble Deep Fields" (HDF-N in the northern and HDF-S in the southern sky, cf. e.g. ESO PR Photo 35a/98), but the field-of-view is about 200 times larger. The present image displays about 50 times more galaxies than the HDF images, and therefore provides a more representative view of the universe. The WFI CDF-S image will now form a most useful basis for the very extensive and systematic census of the population of distant galaxies and quasars, allowing at once a detailed study of all evolutionary stages of the universe since it was about 2 billion years old. These investigations have started and are expected to provide information about the evolution of galaxies in unprecedented detail. They will offer insights into the history of star formation and how the internal structure of galaxies changes with time and, not least, throw light on how these two evolutionary aspects are interconnected. GALAXIES IN THE WFI IMAGE. Caption: PR Photo 02b/03 contains a collection of twelve subfields from the full WFI Chandra Deep Field South (WFI CDF-S), centred on (pairs or groups of) galaxies. Each of the subfields measures 2.5 x 2.5 arcmin² (635 x 658 pix²; 1 pixel = 0.238 arcsec). North is up and East is left. Technical information is available below. The WFI CDF-S colour image - of which the full field is shown in PR Photo 02a/03 - was constructed from all available observations in the optical B-, V- and R-bands obtained under good conditions with the Wide Field Imager (WFI) on the 2.2-m MPG/ESO telescope at the ESO La Silla Observatory (Chile), and now stored in the ESO Science Data Archive. It is the "deepest" image ever taken with this instrument.
It covers a sky field measuring 36 x 34 arcmin², i.e., an area somewhat larger than that of the full moon. The observations were collected during a period of nearly four years, beginning in January 1999 when the WFI instrument was first installed (cf. ESO PR 02/99) and ending in October 2002. Altogether, nearly 50 hours of exposure were collected in the three filters combined here, cf. the technical information below. Although it is possible to identify more than 100,000 galaxies in the image - some of which are shown in PR Photo 02b/03 - it is still remarkably "empty" by astronomical standards. Even the brightest stars in the field (of visual magnitude 9) can hardly be seen by human observers with binoculars. In fact, the area density of bright, nearby galaxies is only half of what it is in "normal" sky fields. Comparatively empty fields like this one provide an unusually clear view towards the distant regions of the universe and thus open a window towards the earliest cosmic times. Research projects in the Chandra Deep Field South. Caption: PR Photo 02c-d/03 shows two sky fields within the WFI image of CDF-S, reproduced at full (pixel) size to illustrate the exceptional information richness of these data. The subfields measure 6.8 x 7.8 arcmin² (1717 x 1975 pixels) and 10.1 x 10.5 arcmin² (2545 x 2635 pixels), respectively. North is up and East is left. Technical information is available below. Astronomers from different teams and disciplines have been quick to join forces in a world-wide co-ordinated effort around the Chandra Deep Field South.
Observations of this area are now being performed by some of the most powerful astronomical facilities and instruments. They include space-based X-ray and infrared observations by the ESA XMM-Newton, the NASA CHANDRA, Hubble Space Telescope (HST) and soon SIRTF (scheduled for launch in a few months), as well as imaging and spectroscopic observations in the infrared and optical part of the spectrum by telescopes at the ground-based observatories of ESO (La Silla and Paranal) and NOAO (Kitt Peak and Tololo). A huge database is currently being created that will help to analyse the evolution of galaxies in all currently feasible respects. All participating teams have agreed to make their data on this field publicly available, thus providing the world-wide astronomical community with a unique opportunity to perform competitive research, joining forces within this vast scientific project. Concerted observations: The optical true-colour WFI image presented here forms an important part of this broad, concerted approach. It combines observations of three scientific teams that have engaged in complementary scientific projects, thereby capitalizing on this very powerful combination of their individual observations. The following teams are involved in this work: * COMBO-17 (Classifying Objects by Medium-Band Observations in 17 filters): an international collaboration led by Christian Wolf and other scientists at the Max-Planck-Institut für Astronomie (MPIA, Heidelberg, Germany). This team used 51 hours of WFI observing time to obtain images through five broad-band and twelve medium-band optical filters in the visual spectral region in order to measure the distances (by means of "photometric redshifts") and star-formation rates of about 10,000 galaxies, thereby also revealing their evolutionary status. * EIS (ESO Imaging Survey): a team of visiting astronomers from the ESO community and beyond, led by Luiz da Costa (ESO).
They observed the CDF-S for 44 hours in six optical bands with the WFI camera on the MPG/ESO 2.2-m telescope and 28 hours in two near-infrared bands with the SOFI instrument at the ESO 3.5-m New Technology Telescope (NTT), both at La Silla. These observations form part of the Deep Public Imaging Survey that covers a total sky area of 3 square degrees. * GOODS (The Great Observatories Origins Deep Survey): another international team (on the ESO side, led by Catherine Cesarsky) that focusses on the coordination of deep space- and ground-based observations on a smaller, central area of the CDF-S in order to image the galaxies in many different spectral wavebands, from X-rays to radio. GOODS has contributed with 40 hours of WFI time for observations in three broad-band filters that were designed for the selection of targets to be spectroscopically observed with the ESO Very Large Telescope (VLT) at the Paranal Observatory (Chile), for which over 200 hours of observations are planned. About 10,000 galaxies will be spectroscopically observed in order to determine their redshift (distance), star formation rate, etc. Another important contribution to this large research undertaking will come from the GEMS project. This is a "HST treasury programme" (with Hans-Walter Rix from MPIA as Principal Investigator) which observes the 10,000 galaxies identified in COMBO-17 - and eventually the entire WFI-field with HST - to show the evolution of their shapes with time. Great questions: With the combination of data from many wavelength ranges now at hand, the astronomers are embarking upon studies of the many different processes in the universe. They expect to shed more light on several important cosmological questions, such as: * How and when was the first generation of stars born? * When exactly was the neutral hydrogen in the universe ionized for the first time by powerful radiation emitted from the first stars and active galactic nuclei?
* How did galaxies and groups of galaxies evolve during the past 13 billion years? * What is the true nature of those elusive objects that are only seen at infrared and submillimetre wavelengths (cf. ESO PR 23/02)? * Which fraction of galaxies had an "active" nucleus (probably with a black hole at the centre) in their past, and how long did this phase last? Moreover, since these extensive optical observations were obtained in the course of a dozen observing periods during several years, it is also possible to perform studies of certain variable phenomena: * How many variable sources are seen and what are their types and properties? * How many supernovae are detected per time interval, i.e. what is the supernova frequency at different cosmic epochs? * How do those processes depend on each other? This is just a short and very incomplete list of questions astronomers world-wide will address using all the complementary observations. No doubt the coming studies of the Chandra Deep Field South - with this and other data - will be most exciting and instructive! Other wide-field images: Other wide-field images from the WFI have been published in various ESO press releases during the past four years - they are also available at the WFI Photo Gallery. A collection of full-resolution files (TIFF-format) is available on a WFI CD-ROM. Technical Information: The very extensive data reduction and colour image processing needed to produce these images were performed by Mischa Schirmer and Thomas Erben at the "Wide Field Expertise Center" of the Institut für Astrophysik und Extraterrestrische Forschung der Universität Bonn (IAEF) in Germany. It was done by means of a software pipeline specialised for reduction of multiple CCD wide-field imaging camera data. This pipeline is mainly based on publicly available software modules and algorithms (EIS, FLIPS, LDAC, Terapix, Wifix).
The image was constructed from about 150 exposures in each of the following wavebands: B-band (centred at wavelength 456 nm; here rendered as blue, 15.8 hours total exposure time), V-band (540 nm; green, 15.6 hours) and R-band (652 nm; red, 17.8 hours). Only images taken under sufficiently good observing conditions (defined as seeing less than 1.1 arcsec) were included. In total, 450 images were assembled to produce this colour image, together with about as many calibration images (biases, darks and flats). More than 2 terabytes (TB) of temporary files were produced during the extensive data reduction. Parallel processing of all data sets took about two weeks on a four-processor Sun Enterprise 450 workstation and a 1.8 GHz dual-processor Linux PC. The final colour image was assembled in Adobe Photoshop. The observations were performed by ESO (GOODS, EIS) and the COMBO-17 collaboration in the period 1/1999-10/2002.

  20. VizieR Online Data Catalog: GOODS-S CANDELS multiwavelength catalog (Guo+, 2013)

    NASA Astrophysics Data System (ADS)

    Guo, Y.; Ferguson, H. C.; Giavalisco, M.; Barro, G.; Willner, S. P.; Ashby, M. L. N.; Dahlen, T.; Donley, J. L.; Faber, S. M.; Fontana, A.; Galametz, A.; Grazian, A.; Huang, K.-H.; Kocevski, D. D.; Koekemoer, A. M.; Koo, D. C.; McGrath, E. J.; Peth, M.; Salvato, M.; Wuyts, S.; Castellano, M.; Cooray, A. R.; Dickinson, M. E.; Dunlop, J. S.; Fazio, G. G.; Gardner, J. P.; Gawiser, E.; Grogin, N. A.; Hathi, N. P.; Hsu, L.-T.; Lee, K.-S.; Lucas, R. A.; Mobasher, B.; Nandra, K.; Newman, J. A.; van der Wel, A.

    2014-04-01

    The Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS; Grogin et al. 2011ApJS..197...35G; Koekemoer et al. 2011ApJS..197...36K) is designed to document galaxy formation and evolution over the redshift range of z=1.5-8. The core of CANDELS is to use the revolutionary near-infrared HST/WFC3 camera, installed on HST in 2009 May, to obtain deep imaging of faint and faraway objects. The GOODS-S field, centered at RAJ2000=03:32:30 and DEJ2000=-27:48:20 and located within the Chandra Deep Field South (CDFS; Giacconi et al. 2002, Cat. J/ApJS/139/369), is a sky region of about 170arcmin2 which has been targeted for some of the deepest observations ever taken by NASA's Great Observatories, HST, Spitzer, and Chandra as well as by other world-class telescopes. The field has been (among others) imaged in the optical wavelength with HST/ACS in F435W, F606W, F775W, and F850LP bands as part of the HST Treasury Program: the Great Observatories Origins Deep Survey (GOODS; Giavalisco et al. 2004, Cat. II/261); in the mid-IR (3.6-24um) wavelength with Spitzer as part of the GOODS Spitzer Legacy Program (PI: M. Dickinson). The CDF-S/GOODS field was observed by the MOSAIC II imager on the CTIO 4m Blanco telescope to obtain deep U-band observations in 2001 September. Another U-band survey in GOODS-S was carried out using the VIMOS instrument mounted at the Melipal Unit Telescope of the VLT at ESO's Cerro Paranal Observatory, Chile. This large program of ESO (168.A-0485; PI: C. Cesarsky) was obtained in service mode observations in UT3 between 2004 August and fall 2006. In the ground-based NIR, imaging observations of the CDFS were carried out in J, H, Ks bands using the ISAAC instrument mounted at the Antu Unit Telescope of the VLT. Data were obtained as part of the ESO Large Programme 168.A-0485 (PI: C. Cesarsky) as well as ESO Programmes 64.O-0643, 66.A-0572, and 68.A-0544 (PI: E. 
Giallongo) with a total allocation time of ~500 hr from 1999 October to 2007 January. The CANDELS/GOODS-S field was also observed in the NIR as part of the ongoing HAWK-I UDS and GOODS-S survey (HUGS; VLT large program ID 186.A-0898; PI: A. Fontana; A. Fontana et al., in preparation) using the High Acuity Wide field K-band Imager (HAWK-I) on VLT. (1 data file).

  1. The assessment of low probability containment failure modes using dynamic PRA

    NASA Astrophysics Data System (ADS)

    Brunett, Acacia Joann

Although low probability containment failure modes in nuclear power plants may lead to large releases of radioactive material, these modes are typically crudely modeled in system-level codes and have large associated uncertainties. Conventional risk assessment techniques (i.e. the fault-tree/event-tree methodology) are capable of accounting for these failure modes to some degree; however, they require the analyst to pre-specify the ordering of events, which can vary within the range of uncertainty of the phenomena. More recently, dynamic probabilistic risk assessment (DPRA) techniques have been developed which remove the dependency on the analyst. Through DPRA, it is now possible to perform a mechanistic and consistent analysis of low probability phenomena, with the timing of the possible events determined by the computational model simulating the reactor behavior. The purpose of this work is to utilize DPRA tools to assess low probability containment failure modes and the driving mechanisms. Particular focus is given to the risk-dominant containment failure modes considered in NUREG-1150, which has long been the standard for PRA techniques. More specifically, this work focuses on the low probability phenomena occurring during a station blackout (SBO) with late power recovery in the Zion Nuclear Power Plant, a Westinghouse pressurized water reactor (PWR). Subsequent to the major risk study performed in NUREG-1150, significant experimentation and modeling regarding the mechanisms driving containment failure modes have been performed. In light of this improved understanding, NUREG-1150 containment failure modes are reviewed in this work using the current state of knowledge. For some unresolved mechanisms, such as containment loading from high pressure melt ejection and combustion events, additional analyses are performed using the accident simulation tool MELCOR to explore the bounding containment loads for realistic scenarios.
A dynamic treatment in the characterization of combustible gas ignition is also presented in this work. In most risk studies, combustion is treated simplistically in that it is assumed an ignition occurs if the gas mixture achieves a concentration favorable for ignition under the premise that an adequate ignition source is available. However, the criteria affecting ignition (such as the magnitude, location and frequency of the ignition sources) are complicated. This work demonstrates a technique for characterizing the properties of an ignition source to determine a probability of ignition. The ignition model developed in this work and implemented within a dynamic framework is utilized to analyze the implications and risk significance of late combustion events. This work also explores the feasibility of using dynamic event trees (DETs) with a deterministic sampling approach to analyze low probability phenomena. The flexibility of this approach is demonstrated through the rediscretization of containment fragility curves used in construction of the DET to show convergence to a true solution. Such a rediscretization also reduces the computational burden introduced through extremely fine fragility curve discretization by subsequent refinement of fragility curve regions of interest. Another advantage of the approach is the ability to perform sensitivity studies on the cumulative distribution functions (CDFs) used to determine branching probabilities without the need for rerunning the simulation code. Through review of the NUREG-1150 containment failure modes using the current state of knowledge, it is found that some failure modes, such as Alpha and rocket, can be excluded from further studies; other failure modes, such as failure to isolate, bypass, high pressure melt ejection (HPME), combustion-induced failure and overpressurization are still concerns to varying degrees. 
As part of this analysis, scoping studies performed in MELCOR show that HPME and the resulting direct containment heating (DCH) do not impose a significant threat to containment integrity. Additional scoping studies regarding the effect of recovery actions on in-vessel hydrogen generation show that reflooding a partially degraded core does not significantly affect in-vessel hydrogen generation, and the NUREG-1150 assumption that insufficient hydrogen is generated in-vessel to produce an energetic deflagration is confirmed. The DET analyses performed in this work show that very late power recovery produces the potential for very energetic combustion events which are capable of failing containment with a non-negligible probability, and that containment cooling systems have a significant impact on core concrete attack, and therefore on combustible gas generation ex-vessel. Ultimately, the overall risk of combustion-induced containment failure is low, but its conditional likelihood can have a significant effect on accident mitigation strategies. It is also shown in this work that DETs are particularly well suited to examining low probability events because of their ability to rediscretize CDFs and observe solution convergence.
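The rediscretization idea can be illustrated with a toy fragility curve: DET branch probabilities are just the CDF mass between successive pressure thresholds, so refining the thresholds reuses the same curve without rerunning the simulation code. The lognormal fragility parameters below are hypothetical, purely for illustration:

```python
import math

def branch_probabilities(cdf, thresholds):
    """DET branch probabilities from a fragility curve: the probability
    mass between successive failure-pressure thresholds (plus the two
    open-ended tails)."""
    pts = [0.0] + [cdf(t) for t in sorted(thresholds)] + [1.0]
    return [hi - lo for lo, hi in zip(pts, pts[1:])]

def fragility(p_mpa, median=0.9, beta=0.2):
    """Hypothetical lognormal fragility curve: P(failure | pressure)."""
    return 0.5 * (1.0 + math.erf(math.log(p_mpa / median) / (beta * math.sqrt(2.0))))

coarse = branch_probabilities(fragility, [0.7, 0.9, 1.1])
fine = branch_probabilities(fragility, [0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2])
```

Both discretizations partition the same distribution, so sensitivity studies on the branching CDF amount to re-evaluating this function, not re-simulating the accident.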

  2. The Rest-Frame Optical Luminosity Functions of Galaxies at 2<=z<=3.5

    NASA Astrophysics Data System (ADS)

    Marchesini, D.; van Dokkum, P.; Quadri, R.; Rudnick, G.; Franx, M.; Lira, P.; Wuyts, S.; Gawiser, E.; Christlein, D.; Toft, S.

    2007-02-01

    We present the rest-frame optical (B, V, and R band) luminosity functions (LFs) of galaxies at 2<=z<=3.5, measured from a K-selected sample constructed from the deep NIR MUSYC, the ultradeep FIRES, and the GOODS-CDFS. This sample is unique for its combination of area and range of luminosities. The faint-end slopes of the LFs at z>2 are consistent with those at z~0. The characteristic magnitudes are significantly brighter than the local values (e.g., ~1.2 mag in the R band), while the measured values for Φ* are typically ~5 times smaller. The B-band luminosity density at z~2.3 is similar to the local value, and in the R band it is ~2 times smaller than the local value. We present the LF of distant red galaxies (DRGs), which we compare to that of non-DRGs. While DRGs and non-DRGs are characterized by similar LFs at the bright end, the faint-end slope of the non-DRG LF is much steeper than that of DRGs. The contribution of DRGs to the global densities down to the faintest probed luminosities is 14%-25% in number and 22%-33% in luminosity. From the derived rest-frame U-V colors and stellar population synthesis models, we estimate the mass-to-light ratios (M/L) of the different subsamples. The M/L ratios of DRGs are ~5 times higher (in the R and V bands) than those of non-DRGs. The global stellar mass density at 2<=z<=3.5 appears to be dominated by DRGs, whose contribution is of order ~60%-80% of the global value. Qualitatively similar results are obtained when the population is split by rest-frame U-V color instead of observed J-K color. Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS5-26555. Also based on observations collected at the European Southern Observatories on Paranal, Chile as part of the ESO program 164.O-0612.

  3. The Compton-thick AGN fraction from the deepest X-ray spectroscopy in the CDF-S

    NASA Astrophysics Data System (ADS)

    Corral, A.; Georgantopoulos, I.; Akylas, A.; Ranalli, P.

    2017-10-01

    Highly obscured AGN, especially Compton-thick (CT) AGN, likely play a key role in the galaxy-AGN co-evolution scenario. They would comprise the early stages of AGN activity, preceding the AGN-feedback/star-formation quenching phase, during which most of both the SMBH and galaxy growth takes place. However, the actual CT fraction among the AGN population is still largely unconstrained. The most reliable way of confirming the obscured nature of an AGN is X-ray spectroscopy, but very deep observations are needed to extend local analyses to larger distances. We will present the X-ray spectral analysis of the deepest X-ray data obtained to date, the almost 7 Ms observation of the Chandra Deep Field South. The unprecedented depth of this survey allows us to carry out reliable spectral analyses down to a flux limit of 10^{-16} erg cm^{-2} s^{-1} in the hard 2-8 keV band. Besides the new deeper X-ray data, our approach also includes the implementation of Bayesian inference in the determination of the CT fraction. Our results favor X-ray background synthesis models which postulate a moderate fraction (25%) of CT objects among the obscured AGN population.

  4. Feature-Based Statistical Analysis of Combustion Simulation Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, J; Krishnamoorthy, V; Liu, S

    2011-11-18

    We present a new framework for feature-based statistical analysis of large-scale scientific data and demonstrate its effectiveness by analyzing features from Direct Numerical Simulations (DNS) of turbulent combustion. Turbulent flows are ubiquitous and account for transport and mixing processes in combustion, astrophysics, fusion, and climate modeling, among other disciplines. They are also characterized by coherent structure or organized motion, i.e. nonlocal entities whose geometrical features can directly impact molecular mixing and reactive processes. While traditional multi-point statistics provide correlative information, they lack nonlocal structural information and hence fail to provide mechanistic causality information between organized fluid motion and mixing and reactive processes. Hence, it is of great interest to capture and track flow features and their statistics together with their correlation with relevant scalar quantities, e.g. temperature or species concentrations. In our approach we encode the set of all possible flow features by pre-computing merge trees augmented with attributes, such as statistical moments of various scalar fields, e.g. temperature, as well as length-scales computed via spectral analysis. The computation is performed in an efficient streaming manner in a pre-processing step and results in a collection of meta-data that is orders of magnitude smaller than the original simulation data. This meta-data is sufficient to support a fully flexible and interactive analysis of the features, allowing for arbitrary thresholds, providing per-feature statistics, and creating various global diagnostics such as cumulative distribution functions (CDFs), histograms, or time-series. We combine the analysis with a rendering of the features in a linked-view browser that enables scientists to interactively explore, visualize, and analyze the equivalent of one terabyte of simulation data. We highlight the utility of this new framework for combustion science; however, it is applicable to many other science domains.
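
The per-feature diagnostics described above (CDFs, histograms) reduce to computing an empirical distribution over a small set of pre-aggregated feature statistics rather than the raw simulation data. A minimal sketch, with hypothetical feature means standing in for the merge-tree attributes:

```python
import numpy as np

def empirical_cdf(values):
    """Return (sorted values, cumulative fractions) for a 1-D sample."""
    x = np.sort(np.asarray(values, dtype=float))
    y = np.arange(1, x.size + 1) / x.size
    return x, y

# Hypothetical per-feature statistics (e.g. mean temperature of each
# extracted flow feature); illustrative numbers only.
feature_means = [310.0, 295.5, 330.2, 305.1, 299.8, 340.0]

x, y = empirical_cdf(feature_means)
for xi, yi in zip(x, y):
    print(f"F({xi}) = {yi:.3f}")
```

Because the meta-data is stored per feature, such a CDF can be recomputed instantly for any new threshold choice, which is what makes the interactive exploration feasible.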

  5. The Nature of the Unresolved Extragalactic Cosmic Soft X-Ray Background

    NASA Technical Reports Server (NTRS)

    Cappelluti, N.; Ranalli, P.; Roncarelli, M.; Arevalo, P.; Zamorani, G.; Comastri, A.; Gilli, R.; Rovilos, E.; Vignali, C.; Allevato, V.; hide

    2013-01-01

    In this paper we investigate the power spectrum of the unresolved 0.5-2 keV cosmic X-ray background (CXB) with deep Chandra 4 Ms observations in the Chandra Deep Field South (CDFS). We measured a signal that, on scales >30 arcsec, is significantly higher than the shot noise and is increasing with angular scale. We interpreted this signal as the joint contribution of clustered undetected sources like active galactic nuclei (AGN), galaxies and the intergalactic medium (IGM). The power of unresolved cosmic source fluctuations accounts for approximately 12 per cent of the 0.5-2 keV extragalactic CXB. Overall, our modelling predicts that approximately 20 per cent of the unresolved CXB flux is produced by low-luminosity AGN, approximately 25 per cent by galaxies and approximately 55 per cent by the IGM. We do not find any direct evidence of the so-called 'warm hot intergalactic medium' (i.e. matter with 10^5 < T < 10^7 K and density contrast delta < 1000), but we estimated that it could produce about 1/7 of the unresolved CXB. We placed an upper limit on the space density of postulated X-ray-emitting early black holes at z > 7.5 and compared it with supermassive black hole evolution models.

  6. Linear dispersion properties of ring velocity distribution functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vandas, Marek, E-mail: marek.vandas@asu.cas.cz; Hellinger, Petr; Institute of Atmospheric Physics, AS CR, Bocni II/1401, CZ-14100 Prague

    2015-06-15

    Linear properties of ring velocity distribution functions are investigated. The dispersion tensor in a form similar to the case of a Maxwellian distribution function, but for a general distribution function separable in velocities, is presented. Analytical forms of the dispersion tensor are derived for two cases of ring velocity distribution functions: one obtained from physical arguments and one for the usual, ad hoc ring distribution. The analytical expressions involve generalized hypergeometric, Kampé de Fériet functions of two arguments. For a set of plasma parameters, the two ring distribution functions are compared. At parallel propagation with respect to the ambient magnetic field, the two ring distributions give the same results, identical to the corresponding bi-Maxwellian distribution. At oblique propagation, the two ring distributions give similar results only for strong instabilities, whereas for weak growth rates their predictions are significantly different; the two ring distributions have different marginal stability conditions.

  7. Renormalizability of quasiparton distribution functions

    DOE PAGES

    Ishikawa, Tomomi; Ma, Yan-Qing; Qiu, Jian-Wei; ...

    2017-11-21

    Quasi-parton distribution functions have received a lot of attention in both the perturbative QCD and lattice QCD communities in recent years because they not only carry good information on the parton distribution functions but can also be evaluated by lattice QCD simulations. However, unlike parton distribution functions, quasi-parton distribution functions have perturbative ultraviolet power divergences because they are not defined by twist-2 operators. In this article, we identify all sources of ultraviolet divergences for the quasi-parton distribution functions in coordinate space and demonstrate that the power divergences, as well as all logarithmic divergences, can be renormalized multiplicatively to all orders in QCD perturbation theory.

  8. Renormalizability of quasiparton distribution functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ishikawa, Tomomi; Ma, Yan-Qing; Qiu, Jian-Wei

    Quasi-parton distribution functions have received a lot of attention in both the perturbative QCD and lattice QCD communities in recent years because they not only carry good information on the parton distribution functions but can also be evaluated by lattice QCD simulations. However, unlike parton distribution functions, quasi-parton distribution functions have perturbative ultraviolet power divergences because they are not defined by twist-2 operators. In this article, we identify all sources of ultraviolet divergences for the quasi-parton distribution functions in coordinate space and demonstrate that the power divergences, as well as all logarithmic divergences, can be renormalized multiplicatively to all orders in QCD perturbation theory.

  9. Potential energy distribution function and its application to the problem of evaporation

    NASA Astrophysics Data System (ADS)

    Gerasimov, D. N.; Yurin, E. I.

    2017-10-01

    The distribution function over potential energy in a strongly correlated system can be calculated analytically. In an equilibrium system (for instance, in the bulk of a liquid), this distribution function depends only on the temperature and the mean potential energy, which can be found from the specific heat of vaporization. At the surface of the liquid the distribution function differs significantly, but its shape still satisfies an analytical relation. The distribution function over potential energy near the evaporation surface can be used in place of the work function of an atom of the liquid.

  10. Unifying distribution functions: some lesser known distributions.

    PubMed

    Moya-Cessa, J R; Moya-Cessa, H; Berriel-Valdos, L R; Aguilar-Loreto, O; Barberis-Blostein, P

    2008-08-01

    We show that there is a way to unify distribution functions that describe simultaneously a classical signal in space and (spatial) frequency and position and momentum for a quantum system. Probably the most well known of them is the Wigner distribution function. We show how to unify functions of the Cohen class, Rihaczek's complex energy function, and Husimi and Glauber-Sudarshan distribution functions. We do this by showing how they may be obtained from ordered forms of creation and annihilation operators and by obtaining them in terms of expectation values in different eigenbases.

  11. Inverse estimation of the spheroidal particle size distribution using Ant Colony Optimization algorithms in multispectral extinction technique

    NASA Astrophysics Data System (ADS)

    He, Zhenzong; Qi, Hong; Wang, Yuqing; Ruan, Liming

    2014-10-01

    Four improved Ant Colony Optimization (ACO) algorithms, i.e. the probability density function based ACO (PDF-ACO) algorithm, the Region ACO (RACO) algorithm, the Stochastic ACO (SACO) algorithm, and the Homogeneous ACO (HACO) algorithm, are employed to estimate the particle size distribution (PSD) of spheroidal particles. The direct problems are solved by the extended Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law. Three commonly used monomodal distribution functions, i.e. the Rosin-Rammler (R-R) distribution function, the normal (N-N) distribution function, and the logarithmic normal (L-N) distribution function, are estimated under the dependent model. The influence of random measurement errors on the inverse results is also investigated. All the results reveal that the PDF-ACO algorithm is more accurate than the other three ACO algorithms and can be used as an effective technique to investigate the PSD of spheroidal particles. Furthermore, the Johnson's SB (J-SB) function and the modified beta (M-β) function are employed as general distribution functions to retrieve the PSD of spheroidal particles using the PDF-ACO algorithm. The investigation shows reasonable agreement between the original distribution function and the general distribution function when only the length of the rotational semi-axis is varied.
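
The three monomodal forms named above are standard closed-form densities. A sketch of the R-R (Weibull-type) and L-N forms, assuming common parameterizations (the paper's exact conventions and symbols may differ):

```python
import numpy as np

def rosin_rammler_pdf(d, d_bar, k):
    """Rosin-Rammler (Weibull-type) size density; d_bar is the
    characteristic diameter, k the spread parameter (assumed form)."""
    return (k / d_bar) * (d / d_bar) ** (k - 1) * np.exp(-((d / d_bar) ** k))

def lognormal_pdf(d, mu, sigma):
    """Logarithmic-normal size density in the diameter d."""
    return np.exp(-((np.log(d) - mu) ** 2) / (2 * sigma ** 2)) / (
        d * sigma * np.sqrt(2.0 * np.pi))

def trapz(y, x):
    """Trapezoidal quadrature (avoids NumPy version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

d = np.linspace(0.1, 10.0, 2000)
# Both densities should integrate to ~1 over a wide enough range.
print(trapz(rosin_rammler_pdf(d, 2.0, 3.0), d))
print(trapz(lognormal_pdf(d, 0.5, 0.4), d))
```

In an inverse PSD problem these densities supply the forward model: a candidate parameter set generates spectral extinction data, which the ACO search compares against measurements.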

  12. Coupled double-distribution-function lattice Boltzmann method for the compressible Navier-Stokes equations.

    PubMed

    Li, Q; He, Y L; Wang, Y; Tao, W Q

    2007-11-01

    A coupled double-distribution-function lattice Boltzmann method is developed for the compressible Navier-Stokes equations. Different from existing thermal lattice Boltzmann methods, this method can recover the compressible Navier-Stokes equations with a flexible specific-heat ratio and Prandtl number. In the method, a density distribution function based on a multispeed lattice is used to recover the compressible continuity and momentum equations, while the compressible energy equation is recovered by an energy distribution function. The energy distribution function is then coupled to the density distribution function via the thermal equation of state. In order to obtain an adjustable specific-heat ratio, a constant related to the specific-heat ratio is introduced into the equilibrium energy distribution function. Two different coupled double-distribution-function lattice Boltzmann models are also proposed in the paper. Numerical simulations are performed for the Riemann problem, the double-Mach-reflection problem, and the Couette flow with a range of specific-heat ratios and Prandtl numbers. The numerical results are found to be in excellent agreement with analytical and/or other solutions.

  13. US EPA's National Dioxin Air Monitoring Network: Analytical ...

    EPA Pesticide Factsheets

    The U.S. EPA has established a National Dioxin Air Monitoring Network (NDAMN) to determine the temporal and geographical variability of atmospheric chlorinated dibenzo-p-dioxins (CDDs), furans (CDFs), and coplanar polychlorinated biphenyls (PCBs) at rural and non-impacted locations throughout the United States. Currently operating at 32 sampling stations, NDAMN has three primary purposes: (1) to determine the atmospheric levels and occurrences of dioxin-like compounds in rural and agricultural areas where livestock, poultry, and animal feed crops are grown; (2) to provide measurements of atmospheric levels in different geographic regions of the U.S.; and (3) to provide information regarding the long-range transport of dioxin-like compounds in air over the U.S. Designed in 1997, NDAMN has been implemented in phases, with the first phase consisting of 9 monitoring stations, and is achieving congener-specific detection limits of 0.1 fg/m3 for 2,3,7,8-TCDD and 10 fg/m3 for OCDD. With respect to coplanar PCBs, the detection limits are generally higher due to the presence of background levels in the air during the preparation and processing of the samples. Achieving these extremely low levels of detection presents a host of analytical issues. Among these issues are the methods used to establish ultra-trace detection limits, measures to ensure against and monitor for breakthrough of native analytes when sampling large volumes of air, and procedures for handling and e…

  14. Visualizing the kinetic power stroke that drives proton-coupled Zn(II) transport

    PubMed Central

    Gupta, Sayan; Chai, Jin; Cheng, Jie; D'Mello, Rhijuta; Chance, Mark R.; Fu, Dax

    2014-01-01

    The proton gradient is a principal energy source for respiration-dependent active transport, but the structural mechanisms of proton-coupled transport processes are poorly understood. YiiP is a proton-coupled zinc transporter found in the cytoplasmic membrane of E. coli, and the transport-site of YiiP receives protons from water molecules that gain access to its hydrophobic environment and transduces the energy of an inward proton gradient to drive Zn(II) efflux1,2. This membrane protein is a well characterized member3-7 of the protein family of cation diffusion facilitators (CDFs) that occurs at all phylogenetic levels8-10. X-ray mediated hydroxyl radical labeling of YiiP and mass spectrometric analysis showed that Zn(II) binding triggered a highly localized, all-or-none change of water accessibility to the transport-site and an adjacent hydrophobic gate. Millisecond time-resolved dynamics revealed a concerted and reciprocal pattern of accessibility changes along a transmembrane helix, suggesting a rigid-body helical reorientation linked to Zn(II) binding that triggers the closing of the hydrophobic gate. The gated water access to the transport-site enables a stationary proton gradient to facilitate the conversion of zinc binding energy to the kinetic power stroke of a vectorial zinc transport. The kinetic details provide energetic insights into a proton-coupled active transport reaction. PMID:25043033

  15. Journal Article: the National Dioxin Air Monitoring Network ...

    EPA Pesticide Factsheets

    The U.S. EPA has established a National Dioxin Air Monitoring Network (NDAMN) to determine the temporal and geographical variability of atmospheric CDDs, CDFs and coplanar PCBs at rural and nonimpacted locations throughout the United States. Currently operating at 32 sampling stations, NDAMN has three primary purposes: (1) to determine the atmospheric levels and occurrences of dioxin-like compounds in rural and agricultural areas where livestock, poultry and animal feed crops are grown; (2) to provide measurements of atmospheric levels of dioxin-like compounds in different geographic regions of the U.S.; and (3) to provide information regarding the long-range transport of dioxin-like compounds in air over the U.S. Designed in 1997, NDAMN has been implemented in phases, with the first phase consisting of 9 monitoring stations. Previously, EPA reported on the preliminary results of monitoring at 9 rural locations from June 1998 through June 1999. The one-year measurement at the 9 stations indicated an annual mean TEQ(DF-WHO98) air concentration of 12 fg/m3. Since this reporting, NDAMN has been extended to include additional stations. The following is intended as an update to this national monitoring effort. We report the air monitoring results of 22 NDAMN stations operational over 9 sampling periods from June 1998 to December 1999. Fifteen stations are in rural areas, and 6 are located in National Parks. One station is located in suburban Wa…

  16. Collisionless distribution function for the relativistic force-free Harris sheet

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stark, C. R.; Neukirch, T.

    A self-consistent collisionless distribution function for the relativistic analogue of the force-free Harris sheet is presented. This distribution function is the relativistic generalization of the distribution function for the non-relativistic collisionless force-free Harris sheet recently found by Harrison and Neukirch [Phys. Rev. Lett. 102, 135003 (2009)], as it has the same dependence on the particle energy and canonical momenta. We present a detailed calculation which shows that the proposed distribution function generates the required current density profile (and thus magnetic field profile) in a frame of reference in which the electric potential vanishes identically. The connection between the parameters of the distribution function and the macroscopic parameters such as the current sheet thickness is discussed.

  17. Double Wigner distribution function of a first-order optical system with a hard-edge aperture.

    PubMed

    Pan, Weiqing

    2008-01-01

    The effect of an apertured optical system on the Wigner distribution can be expressed as a superposition integral of the input Wigner distribution function and the double Wigner distribution function of the apertured optical system. By expanding the hard aperture function into a finite sum of complex Gaussian functions, the double Wigner distribution functions of a first-order optical system with a hard aperture outside and inside it are derived. As an example of application, analytical expressions of the Wigner distribution for a Gaussian beam passing through a spatial filtering optical system with an internal hard aperture are obtained. The analytical results are compared with numerical integration and found to be both accurate and computationally advantageous.
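
The Wigner distribution itself can also be evaluated numerically for a sampled field. A brute-force discrete sketch (one of several discretization conventions, not the Gaussian-expansion method of the paper):

```python
import numpy as np

def wigner_distribution(f):
    """Discrete Wigner distribution of a 1-D signal f (length N).
    Returns an (N, N) real array W[n, k]; this is one common
    discretization, not a unique definition."""
    f = np.asarray(f, dtype=complex)
    N = f.size
    W = np.zeros((N, N))
    for n in range(N):
        half = min(n, N - 1 - n)           # largest admissible lag
        m = np.arange(-half, half + 1)
        corr = np.zeros(N, dtype=complex)  # instantaneous autocorrelation
        corr[m % N] = f[n + m] * np.conj(f[n - m])
        W[n] = np.fft.fft(corr).real       # transform lag -> frequency
    return W

x = np.linspace(-4.0, 4.0, 64)
f = np.exp(-x ** 2)                        # Gaussian test signal
W = wigner_distribution(f)
# Frequency marginal recovers the intensity: sum_k W[n, k] = N |f[n]|^2
print(np.allclose(W.sum(axis=1), f.size * np.abs(f) ** 2))
```

The marginal identity in the last line is a useful sanity check for any discretization of the Wigner distribution.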

  18. Phase pupil functions for focal-depth enhancement derived from a Wigner distribution function.

    PubMed

    Zalvidea, D; Sicre, E E

    1998-06-10

    A method for obtaining phase-retardation functions, which give rise to an increase of the image focal depth, is proposed. To this end, the Wigner distribution function corresponding to a specific aperture that has an associated small depth of focus in image space is conveniently sheared in the phase-space domain to generate a new Wigner distribution function. From this new function a more uniform on-axis image irradiance can be accomplished. This approach is illustrated by comparison of the imaging performance of both the derived phase function and a previously reported logarithmic phase distribution.

  19. Direct connection between the different QCD orders for parton distribution and fragmentation functions

    NASA Astrophysics Data System (ADS)

    Shevchenko, O. Yu.

    2013-06-01

    The formulas directly connecting parton distribution functions and fragmentation functions at the next-to-leading-order QCD with the same quantities at the leading order are derived. These formulas are universal, i.e., have the same form for all kinds of parton distribution functions and fragmentation functions, differing only in the respective splitting functions entering there.

  20. Studies of the Intrinsic Complexities of Magnetotail Ion Distributions: Theory and Observations

    NASA Technical Reports Server (NTRS)

    Ashour-Abdalla, Maha

    1998-01-01

    This year we have studied the relationship between the structure seen in measured distribution functions and the detailed magnetospheric configuration. Results from our recent studies using time-dependent large-scale kinetic (LSK) calculations are used to infer the sources of the ions in the velocity distribution functions measured by a single spacecraft (Geotail). Our results strongly indicate that the different ion sources and acceleration mechanisms producing a measured distribution function can explain this structure. Moreover, individual structures within distribution functions were traced back to single sources. We also confirmed the fractal nature of ion distributions.

  1. Beyond Zipf's Law: The Lavalette Rank Function and Its Properties.

    PubMed

    Fontanelli, Oscar; Miramontes, Pedro; Yang, Yaning; Cocho, Germinal; Li, Wentian

    Although Zipf's law is widespread in natural and social data, one often encounters situations where one or both ends of the ranked data deviate from the power-law function. Previously we proposed the Beta rank function to improve the fitting of data which does not follow a perfect Zipf's law. Here we show that when the two parameters in the Beta rank function have the same value, yielding the Lavalette rank function, the probability density function can be derived analytically. We also show, both computationally and analytically, that the Lavalette distribution is approximately equal, though not identical, to the lognormal distribution. We illustrate the utility of the Lavalette rank function in several datasets. We also address three analysis issues: statistical testing of the Lavalette fitting function, comparison between Zipf's law and the lognormal distribution through the Lavalette function, and comparison between the lognormal and Lavalette distributions.
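
The reduction of the Beta rank function to the Lavalette form when its two exponents coincide is easy to verify numerically. A minimal sketch assuming the parameterization f(r) = A (N+1-r)^b / r^a, with the normalization constant absorbed into A:

```python
import numpy as np

def beta_rank(r, N, A, a, b):
    """Two-exponent Beta rank function f(r) = A (N + 1 - r)^b / r^a."""
    return A * (N + 1.0 - r) ** b / r ** a

def lavalette_rank(r, N, A, b):
    """Lavalette rank function: the Beta rank function with a = b."""
    return A * ((N + 1.0 - r) / r) ** b

N = 100
r = np.arange(1, N + 1, dtype=float)
# With equal exponents the two forms coincide exactly.
print(np.allclose(beta_rank(r, N, 1.0, 0.7, 0.7),
                  lavalette_rank(r, N, 1.0, 0.7)))
```

Unlike a pure power law, the factor (N+1-r)^b bends the curve down at the high-rank end, which is exactly the tail deviation from Zipf's law the abstract describes.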

  2. Energy distribution functions of kilovolt ions in a modified Penning discharge

    NASA Technical Reports Server (NTRS)

    Roth, J. R.

    1972-01-01

    The distribution function of ion energy parallel to the magnetic field of a Penning discharge was measured with a retarding potential energy analyzer. Simultaneous measurements of the ion energy distribution function perpendicular to the magnetic field were made with a charge-exchange neutral detector. The ion energy distribution functions are approximately Maxwellian, and their kinetic temperatures are equal within experimental error. This suggests that turbulent processes previously observed Maxwellianize the velocity distribution along a radius in velocity space, and result in an isotropic energy distribution. The kinetic temperatures are on the order of kilovolts, and the tails of the ion energy distribution functions are Maxwellian up to a factor of 7 e-folds in energy. When the distributions depart from Maxwellian, they are enhanced above the Maxwellian tail. Above densities of about 10 to the 10th power particles/cc, this enhancement appears to be the result of a second, higher temperature Maxwellian distribution. At these high particle energies, only the ions perpendicular to the magnetic field lines were investigated.

  3. An estimation of distribution method for infrared target detection based on Copulas

    NASA Astrophysics Data System (ADS)

    Wang, Shuo; Zhang, Yiqun

    2015-10-01

    Track-before-detect (TBD) based target detection involves a hypothesis test of merit functions which measure each track as a possible target track. Its accuracy depends on the precision of the distribution of the merit functions, which determines the threshold for the test. Generally, merit functions are regarded as Gaussian, and the distribution is estimated on this basis, which is valid for most methods such as multiple hypothesis tracking (MHT). However, merit functions for some other methods, such as the dynamic programming algorithm (DPA), are non-Gaussian and cross-correlated. Since existing methods cannot reasonably measure this correlation, the exact distribution can hardly be estimated. If merit functions are assumed Gaussian and independent, the error between the actual distribution and its approximation may occasionally exceed 30 percent, and it diverges under propagation. Hence, in this paper, we propose a novel estimation of distribution method based on Copulas, by which the distribution can be estimated precisely, with an error of less than 1 percent and no propagation. Moreover, the estimation depends only on the form of the merit functions and the structure of the tracking algorithm, and is invariant to measurements. Thus, the distribution can be estimated in advance, greatly reducing the demand for real-time calculation of distribution functions.
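
A copula separates the dependence structure from the (possibly non-Gaussian) marginals of the merit functions. A sketch using a Gaussian copula, one common family; the paper does not specify which family it uses, and the exponential marginals below are purely illustrative:

```python
import numpy as np
from math import erf, sqrt

def gaussian_copula_sample(rho, n, rng):
    """Draw n pairs (u1, u2) from a bivariate Gaussian copula with
    correlation rho; the marginals are uniform on (0, 1)."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal(np.zeros(2), cov, size=n)
    std_normal_cdf = np.vectorize(lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0))))
    return std_normal_cdf(z)

rng = np.random.default_rng(0)
u = gaussian_copula_sample(0.8, 5000, rng)
# Impose any (possibly non-Gaussian) marginals, e.g. Exponential(1) merits:
merits = -np.log(1.0 - u)
print(np.corrcoef(merits.T)[0, 1])  # dependence survives the transform
```

Because the copula is fixed by the algorithm's structure rather than by the data, it can indeed be estimated offline, which is the point the abstract makes about real-time cost.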

  4. Wigner Distribution Functions as a Tool for Studying Gas Phase Alkali Metal Plus Noble Gas Collisions

    DTIC Science & Technology

    2014-03-27

    [Front matter only; the extracted text contains no abstract. Recoverable acronym definitions: WDF, Wigner Distribution Function; PES, Potential Energy Surface; DPAL, Diode…]

  5. Modelling altered revenue function based on varying power consumption distribution and electricity tariff charge using data analytics framework

    NASA Astrophysics Data System (ADS)

    Zainudin, W. N. R. A.; Ramli, N. A.

    2017-09-01

    In 2010, the Energy Commission (EC) introduced Incentive Based Regulation (IBR) to ensure a sustainable Malaysian Electricity Supply Industry (MESI), promote transparent and fair returns, encourage maximum efficiency, and maintain a policy-driven end-user tariff. To support such a transformation, a sophisticated system for generating policy-driven electricity tariff structures is greatly needed. Hence, this study presents a data analytics framework that generates an altered revenue function from a varying power consumption distribution and tariff charge function. For the purpose of this study, the power consumption distribution is proxied by the proportion of household consumption and electricity consumed in kWh, and the tariff charge function is proxied by a three-tiered increasing block tariff (IBT). The altered revenue function is useful for indicating whether changes in the power consumption distribution and tariff charges will have a positive or negative impact on the economy. The methodology begins by defining revenue as a function of the power consumption distribution and the tariff charge function. Then, the proportion of household consumption and the tariff charge function are derived within certain intervals of electricity power. Any changes in those proportions are conjectured to contribute to changes in the revenue function. Thus, these changes can potentially indicate whether changes in the power consumption distribution and the tariff charge function have a positive or negative impact on TNB revenue. Based on the findings of this study, major changes in the tariff charge function appear to affect the altered revenue function more than the power consumption distribution. However, the paper concludes that both the power consumption distribution and the tariff charge function can influence TNB revenue to a great extent.
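
Revenue under a three-tiered IBT is a piecewise-linear function of consumption, aggregated over the consumption distribution. A minimal sketch with hypothetical tier bounds and prices (not the actual TNB tariff):

```python
def ibt_charge(kwh, bounds, rates):
    """Bill for one household under an increasing block tariff (IBT).
    bounds: upper kWh limits of all but the last tier; rates: price per
    kWh for each tier (len(rates) == len(bounds) + 1)."""
    charge, lower = 0.0, 0.0
    for bound, rate in zip(bounds, rates):
        if kwh <= bound:
            return charge + (kwh - lower) * rate
        charge += (bound - lower) * rate
        lower = bound
    return charge + (kwh - lower) * rates[-1]

def revenue(levels_kwh, proportions, n_households, bounds, rates):
    """Aggregate revenue when the consumption distribution is given as
    (kWh level, proportion of households) pairs."""
    return sum(p * n_households * ibt_charge(c, bounds, rates)
               for c, p in zip(levels_kwh, proportions))

# Hypothetical three-tier tariff: first 200 kWh, next 100 kWh, remainder.
bounds, rates = [200.0, 300.0], [0.218, 0.334, 0.516]
print(revenue([150.0, 250.0, 400.0], [0.5, 0.3, 0.2], 1000, bounds, rates))
```

Shifting either the proportions or the tier prices and re-evaluating `revenue` gives exactly the "altered revenue function" comparison the framework describes.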

  6. Energy distribution functions of kilovolt ions in a modified Penning discharge.

    NASA Technical Reports Server (NTRS)

    Roth, J. R.

    1973-01-01

    The distribution function of ion energy parallel to the magnetic field of a modified Penning discharge has been measured with a retarding potential energy analyzer. These ions escaped through one of the throats of the magnetic mirror geometry. Simultaneous measurements of the ion energy distribution function perpendicular to the magnetic field have been made with a charge-exchange neutral detector. The ion energy distribution functions are approximately Maxwellian, and the parallel and perpendicular kinetic temperatures are equal within experimental error. These results suggest that turbulent processes previously observed in this discharge Maxwellianize the velocity distribution along a radius in velocity space, and result in an isotropic energy distribution.

  7. Energy distribution functions of kilovolt ions in a modified Penning discharge.

    NASA Technical Reports Server (NTRS)

    Roth, J. R.

    1972-01-01

    The distribution function of ion energy parallel to the magnetic field of a modified Penning discharge has been measured with a retarding potential energy analyzer. These ions escaped through one of the throats of the magnetic mirror geometry. Simultaneous measurements of the ion energy distribution function perpendicular to the magnetic field have been made with a charge-exchange neutral detector. The ion energy distribution functions are approximately Maxwellian, and the parallel and perpendicular kinetic temperatures are equal within experimental error. These results suggest that turbulent processes previously observed in this discharge Maxwellianize the velocity distribution along a radius in velocity space, and result in an isotropic energy distribution.

  8. A new family of distribution functions for spherical galaxies

    NASA Astrophysics Data System (ADS)

    Gerhard, Ortwin E.

    1991-06-01

    The present study describes a new family of anisotropic distribution functions for stellar systems designed to keep control of the orbit distribution at fixed energy. These are quasi-separable functions of energy and angular momentum, and they are specified in terms of a circularity function h(x) which fixes the distribution of orbits on the potential's energy surfaces outside some anisotropy radius. Detailed results are presented for a particular set of radially anisotropic circularity functions h-alpha(x). In the scale-free logarithmic potential, exact analytic solutions are shown to exist for all scale-free circularity functions. Intrinsic and projected velocity dispersions are calculated and the expected properties are presented in extensive tables and graphs. Several applications of the quasi-separable distribution functions are discussed. They include the effects of anisotropy or a dark halo on line-broadening functions, the radial orbit instability in anisotropic spherical systems, and violent relaxation in spherical collapse.

  9. On Interpreting and Extracting Information from the Cumulative Distribution Function Curve: A New Perspective with Applications

    ERIC Educational Resources Information Center

    Balasooriya, Uditha; Li, Jackie; Low, Chan Kee

    2012-01-01

    For any density function (or probability function), there always corresponds a "cumulative distribution function" (cdf). It is a well-known mathematical fact that the cdf is more general than the density function, in the sense that for a given distribution the former may exist without the existence of the latter. Nevertheless, while the…
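
The cdf's extra generality is concrete: a distribution with a point mass has a cdf but no density, yet quantiles can still be read off via the generalized inverse F^{-1}(q) = inf{x : F(x) >= q}. A sketch:

```python
import numpy as np

def quantile_from_cdf(cdf, q, lo, hi, tol=1e-8):
    """Generalized inverse F^{-1}(q) = inf{x : F(x) >= q} by bisection.
    Needs only the cdf, so it works even when no density exists
    (jumps, flat stretches)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if cdf(mid) >= q:
            hi = mid
        else:
            lo = mid
    return hi

# A mixed distribution: point mass 0.3 at x = 0, otherwise Exponential(1).
# It has a perfectly good cdf but no density at the origin.
F = lambda x: 0.0 if x < 0 else 0.3 + 0.7 * (1.0 - np.exp(-x))

print(quantile_from_cdf(F, 0.5, -1.0, 20.0))   # median, = ln(1.4)
print(quantile_from_cdf(F, 0.2, -1.0, 20.0))   # inside the jump, -> 0
```

Every quantile below 0.3 maps to the atom at zero, which is exactly the behavior no density-based description could express.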

  10. dftools: Distribution function fitting

    NASA Astrophysics Data System (ADS)

    Obreschkow, Danail

    2018-05-01

    dftools, written in R, finds the most likely P parameters of a D-dimensional distribution function (DF) generating N objects, where each object is specified by D observables with measurement uncertainties. For instance, if the objects are galaxies, it can fit a mass function (D=1), a mass-size distribution (D=2) or the mass-spin-morphology distribution (D=3). Unlike most common fitting approaches, this method accurately accounts for measurement uncertainties and complex selection functions.
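dftools itself is an R package, but the core idea — that the likelihood must convolve the model DF with each object's measurement uncertainty rather than fit the observed values directly — can be sketched in a few lines of Python. All names and numbers below are illustrative, not part of dftools:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)

# Toy 1-D "distribution function": true values drawn from N(mu, sigma),
# each observed with its own known Gaussian measurement error e_i.
mu_true, sigma_true, n = 10.0, 0.5, 2000
true_vals = rng.normal(mu_true, sigma_true, n)
errs = rng.uniform(0.1, 0.6, n)                 # per-object uncertainties
obs = true_vals + rng.normal(0.0, errs)

def neg_log_like(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    # Convolving N(mu, sigma) with N(0, e_i) yields N(mu, sqrt(sigma^2 + e_i^2)):
    s2 = sigma**2 + errs**2
    return 0.5 * np.sum((obs - mu) ** 2 / s2 + np.log(s2))

res = minimize(neg_log_like, x0=[np.mean(obs), 0.0], method="Nelder-Mead")
mu_fit, sigma_fit = res.x[0], np.exp(res.x[1])
print(mu_fit, sigma_fit)   # recovers ~(10.0, 0.5) despite the noisy observations
```

A naive fit to the observed values alone would return an intrinsic width inflated by the measurement scatter; dftools generalizes this deconvolution idea to D dimensions and to complex selection functions.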

  11. The correlation function for density perturbations in an expanding universe. III The three-point and predictions of the four-point and higher order correlation functions

    NASA Technical Reports Server (NTRS)

    Mcclelland, J.; Silk, J.

    1978-01-01

    Higher-order correlation functions for the large-scale distribution of galaxies in space are investigated. It is demonstrated that the three-point correlation function observed by Peebles and Groth (1975) is not consistent with a distribution of perturbations that at present are randomly distributed in space. The two-point correlation function is shown to be independent of how the perturbations are distributed spatially, and a model of clustered perturbations is developed which incorporates a nonuniform perturbation distribution and which explains the three-point correlation function. A model with hierarchical perturbations incorporating the same nonuniform distribution is also constructed; it is found that this model also explains the three-point correlation function, but predicts different results for the four-point and higher-order correlation functions than does the model with clustered perturbations. It is suggested that the model of hierarchical perturbations might be explained by the single assumption of having density fluctuations or discrete objects all of the same mass randomly placed at some initial epoch.

  12. Expansion moments for the local field distribution that involve the three-particle distribution function

    NASA Astrophysics Data System (ADS)

    Attard, Phil

    The second moment of the Lennard-Jones local field distribution in a hard-sphere fluid is evaluated using the PY3 three-particle distribution function. An approximation due to Lado that avoids the explicit calculation of the latter is shown to be accurate. Partial results are also given for certain cavity-hard-sphere radial distribution functions that occur in a closest particle expansion for the local field.

  13. A decentralized mechanism for improving the functional robustness of distribution networks.

    PubMed

    Shi, Benyun; Liu, Jiming

    2012-10-01

    Most real-world distribution systems can be modeled as distribution networks, where a commodity can flow from source nodes to sink nodes through junction nodes. One of the fundamental characteristics of distribution networks is the functional robustness, which reflects the ability of maintaining its function in the face of internal or external disruptions. In view of the fact that most distribution networks do not have any centralized control mechanisms, we consider the problem of how to improve the functional robustness in a decentralized way. To achieve this goal, we study two important problems: 1) how to formally measure the functional robustness, and 2) how to improve the functional robustness of a network based on the local interaction of its nodes. First, we derive a utility function in terms of network entropy to characterize the functional robustness of a distribution network. Second, we propose a decentralized network pricing mechanism, where each node need only communicate with its distribution neighbors by sending a "price" signal to its upstream neighbors and receiving "price" signals from its downstream neighbors. By doing so, each node can determine its outflows by maximizing its own payoff function. Our mathematical analysis shows that the decentralized pricing mechanism can produce results equivalent to those of an ideal centralized maximization with complete information. Finally, to demonstrate the properties of our mechanism, we carry out a case study on the U.S. natural gas distribution network. The results validate the convergence and effectiveness of our mechanism when comparing it with an existing algorithm.

  14. Simulation study of entropy production in the one-dimensional Vlasov system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Zongliang, E-mail: liangliang1223@gmail.com; Wang, Shaojie

    2016-07-15

    The coarse-grain averaged distribution function of the one-dimensional Vlasov system is obtained by numerical simulation. The entropy productions in cases of the random field, the linear Landau damping, and the bump-on-tail instability are computed with the coarse-grain averaged distribution function. The computed entropy production converges with increasing length of the coarse-grain average. When the distribution function differs slightly from a Maxwellian distribution, the converged value agrees with the result computed by using the definition of thermodynamic entropy. The choice of the averaging length used to compute the coarse-grain averaged distribution function is also discussed.
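A minimal numerical illustration of the coarse-grain averaging idea (using a hypothetical filamented distribution, not the paper's simulation data): once the averaging blocks are longer than the filamentation scale, the entropy converges to that of the smooth Maxwellian.

```python
import numpy as np

v = np.linspace(-8, 8, 160000)
dv = v[1] - v[0]
maxwell = np.exp(-v**2 / 2) / np.sqrt(2 * np.pi)

# Fine-scale filamentation mimicking phase mixing in a Vlasov system:
f = maxwell * (1 + 0.5 * np.cos(200 * v))

def coarse_grain_entropy(f, dv, m):
    """Average f over blocks of m grid points, then S = -sum fbar ln(fbar) dv."""
    n = (len(f) // m) * m
    fbar = f[:n].reshape(-1, m).mean(axis=1)
    fbar = np.where(fbar > 0, fbar, 1e-300)
    return -np.sum(fbar * np.log(fbar)) * dv * m

for m in (1, 10, 100, 1000):
    print(m, coarse_grain_entropy(f, dv, m))
print("Maxwellian entropy:", 0.5 * np.log(2 * np.pi * np.e))
```

The fine-grained entropy is depressed by the filaments; block-averaging over a length exceeding their wavelength recovers the thermodynamic value, mirroring the convergence the paper reports.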

  15. Energy distribution functions of kilovolt ions parallel and perpendicular to the magnetic field of a modified Penning discharge

    NASA Technical Reports Server (NTRS)

    Roth, R. J.

    1973-01-01

    The distribution function of ion energy parallel to the magnetic field of a modified Penning discharge has been measured with a retarding potential energy analyzer. These ions escaped through one of the throats of the magnetic mirror geometry. Simultaneous measurements of the ion energy distribution function perpendicular to the magnetic field have been made with a charge exchange neutral detector. The ion energy distribution functions are approximately Maxwellian, and the parallel and perpendicular kinetic temperatures are equal within experimental error. These results suggest that turbulent processes previously observed in this discharge Maxwellianize the velocity distribution along a radius in velocity space and cause an isotropic energy distribution. When the distributions depart from Maxwellian, they are enhanced above the Maxwellian tail.

  16. Computer routines for probability distributions, random numbers, and related functions

    USGS Publications Warehouse

    Kirby, W.

    1983-01-01

    Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor F. Other mathematical functions include the Bessel function I0, gamma and log-gamma functions, error functions, and the exponential integral. Auxiliary services include sorting and printer-plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)
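The same distributions and special functions are available today in the scientific Python stack; the sketch below shows SciPy counterparts of several routines listed above (illustrative calls, not a wrapper for the USGS library):

```python
import numpy as np
from scipy import stats, special

# Normal, gamma, and Pearson Type III CDFs:
print(stats.norm.cdf(1.96))                  # ~0.975
print(stats.gamma.cdf(2.0, a=3.0))
print(stats.pearson3.cdf(1.0, skew=0.5))

# Weibull and chi-square:
print(stats.weibull_min.cdf(1.0, c=1.5))     # 1 - exp(-1)
print(stats.chi2.ppf(0.95, df=10))           # ~18.31

# Bessel I0, gamma/log-gamma, error function, exponential integral:
print(special.i0(1.0), special.gamma(5.0), special.gammaln(5.0))
print(special.erf(1.0), special.exp1(1.0))
```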

  17. Computer routines for probability distributions, random numbers, and related functions

    USGS Publications Warehouse

    Kirby, W.H.

    1980-01-01

    Use of previously codes and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chisquare, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov 's and Smirnov 's D, Student 's t, noncentral t (approximate), and Snedecor F tests. Other mathematical functions include the Bessel function I (subzero), gamma and log-gamma functions, error functions and exponential integral. Auxiliary services include sorting and printer plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)

  18. Unstable density distribution associated with equatorial plasma bubble

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kherani, E. A., E-mail: esfhan.kherani@inpe.br; Meneses, F. Carlos de; Bharuthram, R.

    2016-04-15

    In this work, we present a simulation study of an equatorial plasma bubble (EPB) in the evening-time ionosphere. The fluid simulation is performed with a high grid resolution, enabling us to probe the steepened updrafting density structures inside the EPB. Inside the density depletion that eventually evolves into the EPB, both density and updraft are functions of space, from which the density as an implicit function of updraft velocity, i.e., the density distribution function, is constructed. In the present study, this distribution function and the corresponding probability distribution function are found to evolve from Maxwellian to non-Maxwellian as the initial small depletion grows into the EPB. This non-Maxwellian distribution is of a gentle-bump type, in agreement with the recently reported distribution within EPBs from space-borne measurements, which offers favorable conditions for small-scale kinetic instabilities.

  19. Species, functional groups, and thresholds in ecological resilience

    USGS Publications Warehouse

    Sundstrom, Shana M.; Allen, Craig R.; Barichievy, Chris

    2012-01-01

    The cross-scale resilience model states that ecological resilience is generated in part from the distribution of functions within and across scales in a system. Resilience is a measure of a system's ability to remain organized around a particular set of mutually reinforcing processes and structures, known as a regime. We define scale as the geographic extent over which a process operates and the frequency with which a process occurs. Species can be categorized into functional groups that are a link between ecosystem processes and structures and ecological resilience. We applied the cross-scale resilience model to avian species in a grassland ecosystem. A species’ morphology is shaped in part by its interaction with ecological structure and pattern, so animal body mass reflects the spatial and temporal distribution of resources. We used the log-transformed rank-ordered body masses of breeding birds associated with grasslands to identify aggregations and discontinuities in the distribution of those body masses. We assessed cross-scale resilience on the basis of 3 metrics: overall number of functional groups, number of functional groups within an aggregation, and the redundancy of functional groups across aggregations. We assessed how the loss of threatened species would affect cross-scale resilience by removing threatened species from the data set and recalculating values of the 3 metrics. We also determined whether more function was retained than expected after the loss of threatened species by comparing observed loss with simulated random loss in a Monte Carlo process. The observed distribution of function compared with the random simulated loss of function indicated that more functionality in the observed data set was retained than expected. 
On the basis of our results, we believe an ecosystem with a full complement of species can sustain considerable species losses without affecting the distribution of functions within and across aggregations, although ecological resilience is reduced. We propose that the mechanisms responsible for shaping discontinuous distributions of body mass and the nonrandom distribution of functions may also shape species losses such that local extinctions will be nonrandom with respect to the retention and distribution of functions and that the distribution of function within and across aggregations will be conserved despite extinctions.
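The detection of aggregations and discontinuities in log-transformed, rank-ordered body masses can be sketched with a simple gap rule (hypothetical masses and threshold, far cruder than the methods used in the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical body masses (g) for a bird assemblage with three built-in clumps:
masses = np.concatenate([rng.uniform(10, 15, 8),
                         rng.uniform(60, 80, 10),
                         rng.uniform(300, 400, 6)])
log_m = np.sort(np.log10(masses))

# Declare a discontinuity wherever the gap between successive
# rank-ordered log-masses exceeds a chosen threshold:
gaps = np.diff(log_m)
threshold = 0.3
breaks = np.where(gaps > threshold)[0]
aggregations = np.split(log_m, breaks + 1)
print("aggregations found:", len(aggregations))
```

In the study proper, functional-group membership is then tallied within and across the detected aggregations to score the three cross-scale resilience metrics.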

  20. Extractions of polarized and unpolarized parton distribution functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jimenez-Delgado, Pedro

    2014-01-01

    An overview of our ongoing extractions of parton distribution functions of the nucleon is given. First JAM results on the determination of spin-dependent parton distribution functions from world data on polarized deep-inelastic scattering are presented, followed by a short report on the status of the JR unpolarized parton distributions. Different aspects of PDF analysis are briefly discussed, including effects of the nuclear structure of targets, target-mass corrections, and higher-twist contributions to the structure functions.

  1. Generalized plasma dispersion function: One-solve-all treatment, visualizations, and application to Landau damping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Hua-Sheng

    2013-09-15

    A unified, fast, and effective approach is developed for numerical calculation of the well-known plasma dispersion function with extensions from the Maxwellian distribution to almost arbitrary distribution functions, such as the δ, flat-top, triangular, κ or Lorentzian, slowing-down, and incomplete Maxwellian distributions. The singularity and analytic continuation problems are also solved generally. Given that the usual conclusion γ ∝ ∂f₀/∂v is only a rough approximation when discussing the distribution function effects on Landau damping, this approach provides a useful tool for rigorous calculations of the linear wave and instability properties of plasma for general distribution functions. The results are also verified via a linear initial-value simulation approach. Intuitive visualizations of the generalized plasma dispersion function are also provided.
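For the Maxwellian case the plasma dispersion function is readily computed from the Faddeeva function w(ζ) as Z(ζ) = i√π·w(ζ); the paper's generalization to arbitrary distributions is not reproduced here. A minimal sketch:

```python
import numpy as np
from scipy.special import wofz

def Z(zeta):
    """Maxwellian plasma dispersion function, Z(zeta) = i*sqrt(pi)*w(zeta),
    analytically continued across the whole complex plane by wofz."""
    return 1j * np.sqrt(np.pi) * wofz(zeta)

def Zprime(zeta):
    """Standard identity Z'(zeta) = -2*(1 + zeta*Z(zeta))."""
    return -2.0 * (1.0 + zeta * Z(zeta))

print(Z(0.0))            # i*sqrt(pi) ~ 1.7724539j
print(Z(1.0 + 0.5j))     # a value off the real axis
```

The identity for Z' provides a convenient internal consistency check against a numerical derivative.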

  2. Ubiquity of Benford's law and emergence of the reciprocal distribution

    DOE PAGES

    Friar, James Lewis; Goldman, Terrance J.; Pérez-Mercader, J.

    2016-04-07

    In this paper, we apply the Law of Total Probability to the construction of scale-invariant probability distribution functions (pdf's), and require that probability measures be dimensionless and unitless under a continuous change of scales. If the scale-change distribution function is scale invariant, then the constructed distribution will also be scale invariant. Repeated application of this construction on an arbitrary set of (normalizable) pdf's results again in scale-invariant distributions. The invariant function of this procedure is given uniquely by the reciprocal distribution, suggesting a kind of universality. Finally, we separately demonstrate that the reciprocal distribution results uniquely from requiring maximum entropy for size-class distributions with uniform bin sizes.
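The emergence of the reciprocal distribution under repeated scale changes is easy to see numerically: multiplying many independent positive factors spreads the logarithm over several decades, after which the mantissa is log-uniform and the leading digits follow Benford's law, log10(1 + 1/d). A quick Monte Carlo sketch (factor range and sample sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)

# Product of many independent positive scale factors:
n_samples, n_factors = 50000, 100
x = np.prod(rng.uniform(0.5, 2.0, size=(n_samples, n_factors)), axis=1)

# Leading decimal digit of each product:
digits = (x / 10.0 ** np.floor(np.log10(x))).astype(int)
freq = np.bincount(digits, minlength=11)[1:10] / n_samples
benford = np.log10(1.0 + 1.0 / np.arange(1, 10))
print(np.round(freq, 3))
print(np.round(benford, 3))
```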

  3. Plasma Dispersion Function for the Kappa Distribution

    NASA Technical Reports Server (NTRS)

    Podesta, John J.

    2004-01-01

    The plasma dispersion function is computed for a homogeneous isotropic plasma in which the particle velocities are distributed according to a Kappa distribution. An ordinary differential equation is derived for the plasma dispersion function and it is shown that the solution can be written in terms of Gauss' hypergeometric function. Using the extensive theory of the hypergeometric function, various mathematical properties of the plasma dispersion function are derived including symmetry relations, series expansions, integral representations, and closed form expressions for integer and half-integer values of κ.

  5. Self-Organizing Maps and Parton Distribution Functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    K. Holcomb, Simonetta Liuti, D. Z. Perry

    2011-05-01

    We present a new method to extract parton distribution functions from high energy experimental data based on a specific type of neural networks, the Self-Organizing Maps. We illustrate the features of our new procedure that are particularly useful for an analysis directed at extracting generalized parton distributions from data. We show quantitative results of our initial analysis of the parton distribution functions from inclusive deep inelastic scattering.

  6. Valid approximation of spatially distributed grain size distributions - A priori information encoded to a feedforward network

    NASA Astrophysics Data System (ADS)

    Berthold, T.; Milbradt, P.; Berkhahn, V.

    2018-04-01

    This paper presents a model for the approximation of multiple, spatially distributed grain size distributions based on a feedforward neural network. Since a classical feedforward network does not guarantee to produce valid cumulative distribution functions, a priori information is incorporated into the model by applying weight and architecture constraints. The model is derived in two steps. First, a model is presented that is able to produce a valid distribution function for a single sediment sample. Although initially developed for sediment samples, the model is not limited in its application; it can also be used to approximate any other multimodal continuous distribution function. In the second part, the network is extended in order to capture the spatial variation of the sediment samples that have been obtained from 48 locations in the investigation area. Results show that the model provides an adequate approximation of grain size distributions, satisfying the requirements of a cumulative distribution function.
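The constraint idea can be sketched independently of the paper's exact architecture: a convex mixture of sigmoids with positive slopes is automatically a valid cdf (non-decreasing, tending to 0 and 1). All parameters below are random stand-ins for trained network weights:

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Unconstrained parameters, as a training procedure might produce them:
raw_slopes = rng.normal(size=5)
raw_mix = rng.normal(size=5)
biases = rng.normal(scale=2.0, size=5)

slopes = 1.0 + np.exp(raw_slopes)                 # strictly positive slopes
mix = np.exp(raw_mix) / np.exp(raw_mix).sum()     # convex mixture weights

def network_cdf(x):
    x = np.atleast_1d(np.asarray(x, dtype=float))
    return (mix * sigmoid(slopes * x[:, None] + biases)).sum(axis=1)

xs = np.linspace(-20.0, 20.0, 2001)
F = network_cdf(xs)
print(F[0], F[-1])               # ~0 and ~1
print(np.all(np.diff(F) >= 0))   # monotone by construction
```

Because validity is enforced by the parameterization rather than by the loss function, any setting of the unconstrained parameters yields a legitimate cdf.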

  7. A Pool of Distant Galaxies

    NASA Astrophysics Data System (ADS)

    2008-11-01

    Anyone who has wondered what it might be like to dive into a pool of millions of distant galaxies of different shapes and colours will enjoy the latest image released by ESO. Obtained in part with the Very Large Telescope, the image is the deepest ground-based U-band image of the Universe ever obtained. It contains more than 27 million pixels and is the result of 55 hours of observations with the VIMOS instrument. A Sea of Galaxies (ESO PR Photo 39/08: A Pool of Distant Galaxies). This uniquely beautiful patchwork image, with its myriad of brightly coloured galaxies, shows the Chandra Deep Field South (CDF-S), arguably the most observed and best studied region in the entire sky. The CDF-S is one of the two regions selected as part of the Great Observatories Origins Deep Survey (GOODS), an effort of the worldwide astronomical community that unites the deepest observations from ground- and space-based facilities at all wavelengths from X-ray to radio. Its primary purpose is to provide astronomers with the most sensitive census of the distant Universe to assist in their study of the formation and evolution of galaxies. The new image released by ESO combines data obtained with the VIMOS instrument in the U- and R-bands, as well as data obtained in the B-band with the Wide-Field Imager (WFI) attached to the 2.2 m MPG/ESO telescope at La Silla, in the framework of the GABODS survey. The newly released U-band image - the result of 40 hours of staring at the same region of the sky and just made ready by the GOODS team - is the deepest image ever taken from the ground in this wavelength domain. At these depths, the sky is almost completely covered by galaxies, each one, like our own galaxy, the Milky Way, home of hundreds of billions of stars. Galaxies were detected that are a billion times fainter than the unaided eye can see and over a range of colours not directly observable by the eye. 
    This deep image has been essential to the discovery of a large number of new galaxies that are so far away that they are seen as they were when the Universe was only 2 billion years old. In this sea of galaxies - or island universes as they are sometimes called - only a very few stars belonging to the Milky Way are seen. One of them is so close that it moves very fast on the sky. This "high proper motion star" is visible to the left of the second brightest star in the image. It appears as a funny elongated rainbow because the star moved while the data were being taken in the different filters over several years. Notes: Because the Universe looks the same in all directions, the number, types and distribution of galaxies are the same everywhere. Consequently, very deep observations of the Universe can be performed in any direction. A series of fields were selected where no foreground object could affect the deep space observations (such as a bright star in our galaxy, or the dust from our Solar System). These fields have been observed using a number of telescopes and satellites, so as to collect information at all possible wavelengths, and characterise the full spectrum of the objects in the field. The data acquired from these deep fields are normally made public to the whole community of astronomers, constituting the basis for large collaborations. Observations in the U-band, that is, at the boundary between visible light and ultraviolet, are challenging: the Earth's atmosphere becomes more and more opaque out towards the ultraviolet, a useful property that protects people's skin, but a limitation for ground-based telescopes. At shorter wavelengths, observations can only be done from space, using, for example, the Hubble Space Telescope. On the ground, only the very best sites, such as ESO's Paranal Observatory in the Atacama Desert, can perform useful observations in the U-band. 
Even with the best atmospheric conditions, instruments are at their limit at these wavelengths: the glass of normal lenses transmits less UV light, and detectors are less sensitive, so only instruments designed for UV observations, such as VIMOS on ESO's Very Large Telescope, can get enough light. The VIMOS U-band image, which was obtained as part of the ESO/GOODS public programme, is based on 40 hours of observations with the VLT. The VIMOS R-band image was obtained co-adding a large number of archival images totaling 15 hours of exposure. The WFI B-band image is part of the GABODS survey.

  8. Application of a truncated normal failure distribution in reliability testing

    NASA Technical Reports Server (NTRS)

    Groves, C., Jr.

    1968-01-01

    Statistical truncated normal distribution function is applied as a time-to-failure distribution function in equipment reliability estimations. Age-dependent characteristics of the truncated function provide a basis for formulating a system of high-reliability testing that effectively merges statistical, engineering, and cost considerations.
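With modern libraries the truncated normal time-to-failure model is nearly a one-liner; the sketch below (illustrative numbers, not values from the report) shows the reliability function R(t) and the age-dependent, increasing hazard rate that motivates its use in high-reliability testing:

```python
import numpy as np
from scipy.stats import truncnorm

# Time-to-failure modeled as a normal distribution truncated to t >= 0
# (hypothetical parameters in hours):
mu, sigma = 1000.0, 400.0
a, b = (0.0 - mu) / sigma, np.inf   # truncation bounds in standard units
ttf = truncnorm(a, b, loc=mu, scale=sigma)

t = np.array([200.0, 500.0, 1000.0, 1500.0])
reliability = ttf.sf(t)             # R(t) = P(T > t)
hazard = ttf.pdf(t) / ttf.sf(t)     # increasing with age: wear-out behavior
print(np.round(reliability, 4))
print(np.round(hazard, 6))
```

Note that SciPy's `truncnorm` takes its bounds in standard units relative to `loc` and `scale`, a common source of error.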

  9. Equilibrium Distribution Functions: Another Look.

    ERIC Educational Resources Information Center

    Waite, Boyd A.

    1986-01-01

    Discusses equilibrium distribution functions and provides an alternative "derivation" that allows the student, with the help of a computer, to gain intuitive insight as to the nature of distributions in general and the precise nature of the dominance of the Boltzmann distribution. (JN)
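The kind of computer experiment the article advocates is easy to reproduce: let N oscillators exchange Q indistinguishable energy quanta at random, and the occupation numbers relax toward the geometric (Boltzmann-like) distribution without it ever being imposed by hand. A minimal sketch (all parameters arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)

# Einstein-solid toy model: N oscillators share Q energy quanta.
# Each step, a randomly chosen oscillator (if it has any quanta)
# donates one quantum to another randomly chosen oscillator.
N, Q, steps = 500, 500, 500000
n = np.full(N, Q // N)            # start with one quantum per oscillator

donors = rng.integers(N, size=steps)
targets = rng.integers(N, size=steps)
for d, t in zip(donors, targets):
    if n[d] > 0:
        n[d] -= 1
        n[t] += 1

counts = np.bincount(n, minlength=4)
print(counts[:4])   # occupation histogram: decays roughly geometrically
print(n.sum())      # total energy is conserved
```

With Q/N = 1 the equilibrium occupation probabilities approach (1/2)^(k+1), so roughly half the oscillators end up with zero quanta, a quarter with one, and so on.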

  10. Journal Article: Atmospheric Measurements of CDDs, CDFs ...

    EPA Pesticide Factsheets

    The U.S. EPA established a National Dioxin Air Monitoring Network (NDAMN) to determine background air concentrations of PCDDs, PCDFs, and cp-PCBs in rural and remote areas of the United States. Background is defined as average ambient air concentrations inferred from long-term and multi-year atmospheric measurements at the same locations using identical monitoring and analytical procedures. The rural sites were chosen in order to obtain air concentrations in areas where crops and livestock are grown, and that encompassed a range of geographic locations in terms of latitudinal and longitudinal positions. Remote sites were selected on the basis that they were relatively free of human habitation and >100 km away from human dioxin sources. The locations of sampling sites covered a wide range of climate conditions from tropical sub-humid to sub-Arctic climates. The idea behind the sampling configuration was to provide reasonable geographic coverage of the United States limited only by budgetary constraints. Funding was sufficient for the establishment and maintenance of 34 NDAMN stations over a period of 6 years. Results were reported as the toxic equivalent (TEQ) of the mix of PCDDs/PCDFs (TEQ-DF) and the mix of coplanar PCBs (TEQ-PCB). At the studied rural sites the mean annual TEQ-DF for each of the NDAMN sampling years was 10.43, 11.39, 10.40, and 10.47 femtograms per cubic meter (fg/cu. m) for 1999, 2000, 2001, and 2002, respectively. There was no statistical

  11. Journal Article: the National Dioxin Air Monitoring Network ...

    EPA Pesticide Factsheets

    In June, 1998, the U.S. EPA established the National Dioxin Air Monitoring Network (NDAMN). The primary goal of NDAMN is to determine the temporal and geographical variability of atmospheric CDDs, CDFs, and coplanar PCBs at rural and nonimpacted locations throughout the United States. Currently operating at 32 sampling stations, NDAMN has three primary purposes: (1) to determine the atmospheric levels and occurrences of dioxin-like compounds in rural and agricultural areas where livestock, poultry and animal feed crops are grown; (2) to provide measurements of atmospheric levels of dioxin-like compounds in different geographic regions of the U.S.; and (3) to provide information regarding the long-range transport of dioxin-like compounds in air over the U.S. At Dioxin 2000, we reported on the preliminary results of monitoring at 9 rural locations from June 1998 through June 1999. By the end of 1999, NDAMN had expanded to 21 sampling stations. Then, at Dioxin 2001, we reported the results of the first 18 months of operation of NDAMN at 15 rural and 6 National Park stations in the United States. The following is intended to be an update to this national monitoring effort. We are reporting the air monitoring results of 17 rural and 8 National Park NDAMN stations operational over 4 sampling moments during calendar year 2000. Two stations located in suburban Washington DC and San Francisco, CA are more urban in character and serve as an indicator of CDD/F and cop

  12. Chord-length and free-path distribution functions for many-body systems

    NASA Astrophysics Data System (ADS)

    Lu, Binglin; Torquato, S.

    1993-04-01

    We study fundamental morphological descriptors of disordered media (e.g., heterogeneous materials, liquids, and amorphous solids): the chord-length distribution function p(z) and the free-path distribution function p(z,a). For concreteness, we will speak in the language of heterogeneous materials composed of two different materials or "phases." The probability density function p(z) describes the distribution of chord lengths in the sample and is of great interest in stereology. For example, the first moment of p(z) is the "mean intercept length" or "mean chord length." The chord-length distribution function is of importance in transport phenomena and problems involving "discrete free paths" of point particles (e.g., Knudsen diffusion and radiative transport). The free-path distribution function p(z,a) takes into account the finite size of a simple particle of radius a undergoing discrete free-path motion in the heterogeneous material and we show that it is actually the chord-length distribution function for the system in which the "pore space" is the space available to a finite-sized particle of radius a. Thus it is shown that p(z)=p(z,0). We demonstrate that the functions p(z) and p(z,a) are related to another fundamentally important morphological descriptor of disordered media, namely, the so-called lineal-path function L(z) studied by us in previous work [Phys. Rev. A 45, 922 (1992)]. The lineal path function gives the probability of finding a line segment of length z wholly in one of the "phases" when randomly thrown into the sample. We derive exact series representations of the chord-length and free-path distribution functions for systems of spheres with a polydispersivity in size in arbitrary dimension D. For the special case of spatially uncorrelated spheres (i.e., fully penetrable spheres) we evaluate exactly the aforementioned functions, the mean chord length, and the mean free path. 
We also obtain corresponding analytical formulas for the case of mutually impenetrable (i.e., spatially correlated) polydispersed spheres.
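For the spatially uncorrelated (fully penetrable) case, the exponential character of the pore-phase chord-length distribution can be checked with a one-dimensional Boolean-model sketch (arbitrary parameters; a 1D analogue, not the paper's D-dimensional result):

```python
import numpy as np

rng = np.random.default_rng(5)

# 1-D Boolean model: rod centers form a Poisson process of rate rho;
# each rod covers [c - a, c + a].  By memorylessness, the uncovered
# ("pore-phase") chord lengths are exponential with mean 1/rho.
rho, a, n = 1.0, 0.25, 200000
centers = np.cumsum(rng.exponential(1.0 / rho, n))
gaps = np.diff(centers) - 2 * a          # uncovered stretch between rods
chords = gaps[gaps > 0]                  # negative gaps mean overlapping rods

print("mean chord length:", chords.mean())        # ~1/rho = 1.0
print("P(chord > 1):", (chords > 1.0).mean())     # ~exp(-1) ~ 0.368
```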

  13. A modified weighted function method for parameter estimation of Pearson type three distribution

    NASA Astrophysics Data System (ADS)

    Liang, Zhongmin; Hu, Yiming; Li, Binquan; Yu, Zhongbo

    2014-04-01

    In this paper, an unconventional method called the Modified Weighted Function (MWF) is presented for the conventional moment estimation of a probability distribution function. The aim of MWF is to estimate the coefficient of variation (CV) and coefficient of skewness (CS) by reducing the original higher-order moment computations to first-order moment calculations. The estimators for CV and CS of the Pearson type three distribution function (PE3) were derived by weighting the moments of the distribution with two weight functions, which were constructed by combining two negative exponential-type functions. The selection of these weight functions was based on two considerations: (1) to relate the weight functions to sample size in order to reflect the relationship between the quantity of sample information and the role of the weight function and (2) to allocate more weight to data close to medium-tail positions in a sample series ranked in ascending order. A Monte Carlo experiment was conducted to simulate a large number of samples upon which the statistical properties of MWF were investigated. For the PE3 parent distribution, results of MWF were compared to those of the original Weighted Function (WF) and Linear Moments (L-M). The results indicate that MWF was superior to WF and slightly better than L-M, in terms of statistical unbiasedness and effectiveness. In addition, the robustness of MWF, WF, and L-M was compared by designing Monte Carlo experiments in which samples are drawn from the log-Pearson type three distribution (LPE3), the three-parameter log-normal distribution (LN3), and the generalized extreme value distribution (GEV), respectively, but are all treated as samples from the PE3 distribution. The results show that, in terms of statistical unbiasedness, no one method possesses an absolutely overwhelming advantage among MWF, WF, and L-M, while in terms of statistical effectiveness, MWF is superior to WF and L-M.
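The small-sample difficulty that motivates weighted-function methods is easy to exhibit: the conventional moment estimator of CS is biased downward for skewed Pearson type III samples. A quick Monte Carlo check with SciPy (sample size and skew value are arbitrary choices, not the paper's experiment):

```python
import numpy as np
from scipy.stats import pearson3, skew

rng = np.random.default_rng(11)

true_cs, n, reps = 1.5, 50, 2000
est = np.empty(reps)
for i in range(reps):
    x = pearson3.rvs(true_cs, size=n, random_state=rng)
    est[i] = skew(x, bias=False)   # conventional (bias-corrected) moment estimator

print("true CS:", true_cs, " mean estimated CS:", round(est.mean(), 3))
```

Even with the standard small-sample correction, the average estimate falls noticeably short of the true skew at n = 50, which is the gap that MWF-style weighting schemes try to close.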

  14. Dispersion relations for a general anisotropic distribution function represented as a sum over Legendre polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaisultanov, Rashid; Eichler, David

    2011-03-15

    The dielectric tensor is obtained for a general anisotropic distribution function that is represented as a sum over Legendre polynomials. The result is valid over all of k-space. We obtain growth rates for the Weibel instability for some basic examples of distribution functions.

  15. Rocket measurement of auroral partial parallel distribution functions

    NASA Astrophysics Data System (ADS)

    Lin, C.-A.

    1980-01-01

The auroral partial parallel distribution functions are obtained using the observed energy spectra of electrons. The experiment package was launched by a Nike-Tomahawk rocket from Poker Flat, Alaska over a bright auroral band and covered an altitude range of up to 180 km. Calculated partial distribution functions are presented with emphasis on their slopes. The implications of the slopes are discussed. It should be pointed out that the slope of the partial parallel distribution function obtained from one energy spectrum will be changed by superposing another energy spectrum on it.

  16. Origin of generalized entropies and generalized statistical mechanics for superstatistical multifractal systems

    NASA Astrophysics Data System (ADS)

    Gadjiev, Bahruz; Progulova, Tatiana

    2015-01-01

We consider a multifractal structure as a mixture of fractal substructures and introduce a distribution function f(α), where α is a fractal dimension. We can then introduce g(p) ∼ ∫_{−ln p}^{μ} e^{−y} f(y) dy and show that distribution functions of the form f(α) = δ(α − 1), f(α) = δ(α − θ), f(α) = 1/(α − 1), and f(y) = y^{α−1} lead to the Boltzmann-Gibbs, Shafee, Tsallis, and Anteneodo-Plastino entropies, respectively. Here δ(x) is the Dirac delta function. Therefore the Shafee entropy corresponds to a fractal structure, the Tsallis entropy describes a multifractal structure with a homogeneous distribution of fractal substructures, and the Anteneodo-Plastino entropy appears in the case of a power-law distribution f(y). We consider the Fokker-Planck equation for a fractal substructure and determine its stationary solution. To determine the distribution function of a multifractal structure, we solve the two-dimensional Fokker-Planck equation and obtain its stationary solution. Then, applying the Bayes theorem, we obtain a distribution function for the entire system in the form of a q-exponential function. We compare the distribution functions obtained via the superstatistical approach with those obtained from the maximum entropy principle.

  17. Distribution functions of probabilistic automata

    NASA Technical Reports Server (NTRS)

    Vatan, F.

    2001-01-01

Each probabilistic automaton M over an alphabet A defines a probability measure Prob_M on the set of all finite and infinite words over A. We can identify a k-letter alphabet A with the set {0, 1,..., k-1}, and, hence, we can consider every finite or infinite word w over A as a radix-k expansion of a real number X(w) in the interval [0, 1]. This makes X(w) a random variable, and the distribution function of M is defined as usual: F(x) := Prob_M{ w: X(w) < x }. Utilizing the fixed-point semantics (denotational semantics), extended to probabilistic computations, we investigate the distribution functions of probabilistic automata in detail. Automata with continuous distribution functions are characterized. By a new and much easier method, it is shown that the distribution function F(x) is an analytic function if it is a polynomial. Finally, answering a question posed by D. Knuth and A. Yao, we show that a polynomial distribution function F(x) on [0, 1] can be generated by a probabilistic automaton iff all the roots of F'(x) = 0 in this interval, if any, are rational numbers. For this, we define two dynamical systems on the set of polynomial distributions and study attracting fixed points of random compositions of these two systems.
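The construction X(w) and the distribution function F can be illustrated with a small simulation. The two-state automaton, its emission and transition probabilities, and the word length below are all hypothetical, chosen only to show how generated words map to radix-2 expansions and an empirical CDF:

```python
import bisect
import random

# Toy 2-state probabilistic automaton over {0, 1} (all probabilities are
# hypothetical).  In state s it emits 0 with probability P[s], then switches
# state with probability Q[s].
P = [0.7, 0.3]
Q = [0.5, 0.5]

def sample_word(length, rng):
    s, word = 0, []
    for _ in range(length):
        word.append(0 if rng.random() < P[s] else 1)
        if rng.random() < Q[s]:
            s = 1 - s
    return word

def to_real(word):
    # Radix-2 expansion: X(w) = sum_i w_i * 2**-(i + 1), so X(w) lies in [0, 1].
    return sum(b * 2.0 ** -(i + 1) for i, b in enumerate(word))

rng = random.Random(42)
xs = sorted(to_real(sample_word(40, rng)) for _ in range(5000))

def F(x):
    # Empirical version of the distribution function F(x) = Prob_M{ w: X(w) < x }.
    return bisect.bisect_left(xs, x) / len(xs)
```

Truncating to 40-letter words approximates the measure on infinite words to within 2^-40, which is far below the Monte Carlo resolution here.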

  18. Quantification of the Water-Energy Nexus in Beijing City Based on Copula Analysis

    NASA Astrophysics Data System (ADS)

    Cai, J.; Cai, Y.

    2017-12-01

Water resources and energy resources are intimately interwoven, a relationship called the "water-energy nexus," which poses challenges for their sustainable management. In this research, the Copula analysis method, a favorable tool for exploring the dependence among random variables, is applied for the first time to the water-energy nexus to clarify the internal relationship between water resources and energy resources. Beijing City, the capital of China, is chosen as a case study. The marginal distribution functions of water resources and energy resources are analyzed first. The bivariate Copula function is then employed to construct the joint distribution function of the water-energy nexus and quantify the inherent relationship between the two resources. The results show that the Lognormal distribution is the most appropriate for the marginal distribution function of water resources, while the Weibull distribution best describes the marginal distribution function of energy resources. Furthermore, the bivariate Normal Copula function is the most suitable for constructing the joint distribution function of the water-energy nexus in Beijing City. These findings help to identify and quantify the water-energy nexus and can provide reasonable policy recommendations for the sustainable management of water and energy resources to promote regional coordinated development.
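The mechanics of the approach, fitting Lognormal and Weibull marginals and joining them with a bivariate Normal (Gaussian) copula, can be sketched as below. The synthetic series and all parameter values are assumptions for illustration (the margins are even generated independently, so the fitted copula correlation will be near zero); this is not the Beijing analysis itself.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical annual water-use and energy-use series (illustrative only).
water = stats.lognorm(s=0.4, scale=30.0).rvs(200, random_state=rng)
energy = stats.weibull_min(c=2.5, scale=60.0).rvs(200, random_state=rng)

# Fit the marginals suggested by the abstract.
ln_shape, ln_loc, ln_scale = stats.lognorm.fit(water, floc=0)
wb_shape, wb_loc, wb_scale = stats.weibull_min.fit(energy, floc=0)

# Transform each margin to uniform scores, then to normal scores, and
# estimate the Gaussian-copula correlation from the normal scores.
u = stats.lognorm.cdf(water, ln_shape, ln_loc, ln_scale)
v = stats.weibull_min.cdf(energy, wb_shape, wb_loc, wb_scale)
z = stats.norm.ppf(np.column_stack([u, v]).clip(1e-9, 1 - 1e-9))
rho = np.corrcoef(z.T)[0, 1]

def joint_cdf(w, e):
    """Joint CDF H(w, e) under the bivariate Normal (Gaussian) copula."""
    u1 = np.clip(stats.lognorm.cdf(w, ln_shape, ln_loc, ln_scale), 1e-12, 1 - 1e-12)
    u2 = np.clip(stats.weibull_min.cdf(e, wb_shape, wb_loc, wb_scale), 1e-12, 1 - 1e-12)
    z1, z2 = stats.norm.ppf(u1), stats.norm.ppf(u2)
    return stats.multivariate_normal.cdf([z1, z2], mean=[0.0, 0.0],
                                         cov=[[1.0, rho], [rho, 1.0]])
```

With real data one would also compare candidate copula families by a goodness-of-fit criterion before settling on the Normal copula.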

  19. Structural frequency functions for an impulsive, distributed forcing function

    NASA Technical Reports Server (NTRS)

    Bateman, Vesta I.

    1987-01-01

The response of a penetrator structure to a spatially distributed mechanical impulse with a magnitude approaching field test force levels (1-2 Mlb) was measured. The frequency response function calculated from the response to this unique forcing function is compared to frequency response functions calculated from responses to point forces of about 2000 pounds. The results show that the strain gages installed on the penetrator case respond similarly to a point axial force and to a spatially distributed axial force. This result suggests that the distributed axial force generated in a penetration event may be reconstructed as a point axial force when the penetrator behaves in a linear manner.

  20. The joint fit of the BHMF and ERDF for the BAT AGN Sample

    NASA Astrophysics Data System (ADS)

    Weigel, Anna K.; Koss, Michael; Ricci, Claudio; Trakhtenbrot, Benny; Oh, Kyuseok; Schawinski, Kevin; Lamperti, Isabella

    2018-01-01

    A natural product of an AGN survey is the AGN luminosity function. This statistical measure describes the distribution of directly measurable AGN luminosities. Intrinsically, the shape of the luminosity function depends on the distribution of black hole masses and Eddington ratios. To constrain these fundamental AGN properties, the luminosity function thus has to be disentangled into the black hole mass and Eddington ratio distribution function. The BASS survey is unique as it allows such a joint fit for a large number of local AGN, is unbiased in terms of obscuration in the X-rays and provides black hole masses for type-1 and type-2 AGN. The black hole mass function at z ~ 0 represents an essential baseline for simulations and black hole growth models. The normalization of the Eddington ratio distribution function directly constrains the AGN fraction. Together, the BASS AGN luminosity, black hole mass and Eddington ratio distribution functions thus provide a complete picture of the local black hole population.

  1. Spherical Harmonic Analysis of Particle Velocity Distribution Function: Comparison of Moments and Anisotropies using Cluster Data

    NASA Technical Reports Server (NTRS)

    Gurgiolo, Chris; Vinas, Adolfo F.

    2009-01-01

    This paper presents a spherical harmonic analysis of the plasma velocity distribution function using high-angular, energy, and time resolution Cluster data obtained from the PEACE spectrometer instrument to demonstrate how this analysis models the particle distribution function and its moments and anisotropies. The results show that spherical harmonic analysis produced a robust physical representation model of the velocity distribution function, resolving the main features of the measured distributions. From the spherical harmonic analysis, a minimum set of nine spectral coefficients was obtained from which the moment (up to the heat flux), anisotropy, and asymmetry calculations of the velocity distribution function were obtained. The spherical harmonic method provides a potentially effective "compression" technique that can be easily carried out onboard a spacecraft to determine the moments and anisotropies of the particle velocity distribution function for any species. These calculations were implemented using three different approaches, namely, the standard traditional integration, the spherical harmonic (SPH) spectral coefficients integration, and the singular value decomposition (SVD) on the spherical harmonic methods. A comparison among the various methods shows that both SPH and SVD approaches provide remarkable agreement with the standard moment integration method.
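The core step of such an analysis, projecting an angular distribution onto spherical harmonics to obtain a small set of spectral coefficients, can be sketched by direct quadrature. The grid sizes and the test distribution (an isotropic part plus a dipole anisotropy) are illustrative assumptions, not the PEACE data or the paper's nine-coefficient scheme.

```python
import numpy as np
from scipy.special import sph_harm

# Midpoint grid over the sphere: azimuth theta in [0, 2*pi), polar phi in [0, pi).
n_th, n_ph = 200, 100
theta = (np.arange(n_th) + 0.5) * 2 * np.pi / n_th
phi = (np.arange(n_ph) + 0.5) * np.pi / n_ph
TH, PH = np.meshgrid(theta, phi, indexing="ij")
dOmega = np.sin(PH) * (2 * np.pi / n_th) * (np.pi / n_ph)

# Illustrative angular distribution: isotropic part plus a dipole anisotropy.
f = 1.0 + np.cos(PH)

def coeff(l, m):
    """Spectral coefficient a_lm = integral of f * conj(Y_lm) over the sphere."""
    Y = sph_harm(m, l, TH, PH)   # scipy convention: sph_harm(m, n, azimuth, polar)
    return np.sum(f * np.conj(Y) * dOmega)

a00 = coeff(0, 0)   # isotropic (density-like) moment, here sqrt(4*pi)
a10 = coeff(1, 0)   # dipole (flux-like) moment, here sqrt(4*pi/3)
```

Velocity moments then follow from a handful of low-order a_lm: for example, only a00 contributes to the density integral, and the l = 1 coefficients carry the bulk-flow and heat-flux anisotropy.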

  2. Global Change and the Function and Distribution of Wetlands

    USGS Publications Warehouse

    Middleton, Beth A.

    2012-01-01

    The Global Change Ecology and Wetlands book series will highlight the latest research from the world leaders in the field of climate change in wetlands. Global Change and the Function and Distribution of Wetlands highlights information of importance to wetland ecologists.  The chapters include syntheses of international studies on the effects of drought on function and regeneration in wetlands, sea level rise and the distribution of mangrove swamps, former distributions of swamp species and future lessons from paleoecology, and shifts in atmospheric emissions across geographical regions in wetlands.  Overall, the book will contribute to a better understanding of the potential effects of climate change on world wetland distribution and function.

  3. A test of the cross-scale resilience model: Functional richness in Mediterranean-climate ecosystems

    USGS Publications Warehouse

    Wardwell, D.A.; Allen, Craig R.; Peterson, G.D.; Tyre, A.J.

    2008-01-01

Ecological resilience has been proposed to be generated, in part, in the discontinuous structure of complex systems. Environmental discontinuities are reflected in discontinuous, aggregated animal body mass distributions. Diversity of functional groups within body mass aggregations (scales) and redundancy of functional groups across body mass aggregations (scales) has been proposed to increase resilience. We evaluate that proposition by analyzing mammalian and avian communities of Mediterranean-climate ecosystems. We first determined that body mass distributions for each animal community were discontinuous. We then calculated the variance in richness of function across aggregations in each community, and compared observed values with distributions created by 1000 simulations using a null of random distribution of function, with the same n, number of discontinuities, and number of functional groups as the observed data. Variance in the richness of functional groups across scales was significantly lower in real communities than in simulations in eight of nine sites. The distribution of function across body mass aggregations in the animal communities we analyzed was non-random, and supports the contentions of the cross-scale resilience model. © 2007 Elsevier B.V. All rights reserved.
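The null-model test described above, comparing the observed variance of functional richness across aggregations against 1000 random reassignments of functional groups, can be sketched as follows. The community data here are randomly generated placeholders, not the study's mammalian or avian datasets.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical community: each of 40 species has a body-mass aggregation
# (0..3) and a functional group (0..4); values are illustrative only.
aggregation = rng.integers(0, 4, size=40)
function = rng.integers(0, 5, size=40)

def richness_variance(agg, fun, n_agg=4):
    # Variance across aggregations of the number of distinct functional groups.
    rich = [len(set(fun[agg == a])) for a in range(n_agg)]
    return np.var(rich)

observed = richness_variance(aggregation, function)

# Null model: shuffle functional-group labels 1000 times, keeping the same
# n, aggregations, and number of functional groups, as in the abstract.
null = np.array([
    richness_variance(aggregation, rng.permutation(function))
    for _ in range(1000)
])
p_value = np.mean(null <= observed)  # small p => function spread evenly across scales
```

A significantly low observed variance relative to the null distribution is the signature the cross-scale resilience model predicts.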

  4. EFFECTS OF LASER RADIATION ON MATTER: Distribution function of microinclusions in polymethylmethacrylate and its evolution under the influence of a series of laser pulses

    NASA Astrophysics Data System (ADS)

    Glauberman, G. Ya; Savanin, S. Yu; Shkunov, V. V.; Shumov, D. E.

    1990-08-01

    A new method is proposed for the derivation of the distribution function of the experimentally determined breakdown thresholds of absorbing microinclusions in a transparent insulator. Expressions are obtained for describing the evolution of this function in the course of irradiation of the insulator with laser pulses of constant energy density. The method is applied to calculate the distribution function of microinclusions in polymethylmethacrylate and the evolution of this function.

  5. Observations of the directional distribution of the wind energy input function over swell waves

    NASA Astrophysics Data System (ADS)

    Shabani, Behnam; Babanin, Alex V.; Baldock, Tom E.

    2016-02-01

Field measurements of wind stress over shallow-water swell traveling in different directions relative to the wind are presented. The directional distribution of the measured stresses is used to confirm the previously proposed but unverified directional distribution of the wind energy input function. The observed wind energy input function is found to follow a much narrower distribution (β ∝ cos^3.6 θ) than the Plant (1982) cosine distribution. The observation of negative stress angles at large wind-wave angles, however, indicates that the onset of negative wind shearing occurs at about θ ≈ 50°, and supports the use of the Snyder et al. (1981) directional distribution. Taking into account the reverse momentum transfer from swell to the wind, Snyder's proposed parameterization is found to perform exceptionally well in explaining the observed narrow directional distribution of the wind energy input function and in predicting the wind drag coefficients. The empirical coefficient (ɛ) in Snyder's parameterization is hypothesised to be a function of the wave shape parameter, with the ɛ value increasing as the wave shape changes between sinusoidal, sawtooth, and sharp-crested shoaling waves.

  6. Finite Element Simulation and Experimental Verification of Internal Stress of Quenched AISI 4140 Cylinders

    NASA Astrophysics Data System (ADS)

    Liu, Yu; Qin, Shengwei; Hao, Qingguo; Chen, Nailu; Zuo, Xunwei; Rong, Yonghua

    2017-03-01

The study of internal stress in quenched AISI 4140 medium carbon steel is of importance in engineering. In this work, finite element simulation (FES) was employed to predict the distribution of internal stress in quenched AISI 4140 cylinders of two diameters based on an exponent-modified (Ex-Modified) normalized function. The results indicate that FES based on the proposed Ex-Modified normalized function is more consistent with X-ray diffraction measurements of the stress distribution than FES based on the normalized functions proposed by Abrassart, Desalos, and Leblond, which is attributed to the Ex-Modified normalized function better describing transformation plasticity. The effect of the temperature distribution on phase formation, the origin of the residual stress distribution, and the effect of the transformation plasticity function on the residual stress distribution are further discussed.

  7. Ring-averaged ion velocity distribution function probe for laboratory magnetized plasma experiment

    NASA Astrophysics Data System (ADS)

    Kawamori, Eiichirou; Chen, Jinting; Lin, Chiahsuan; Lee, Zongmau

    2017-10-01

The ring-averaged velocity distribution function of ions at a fixed guiding center position is a fundamental quantity in gyrokinetic plasma physics. We have developed a diagnostic tool for the ring-averaged velocity distribution function of ions for laboratory plasma experiments, named the ring-averaged ion distribution function probe (RIDFP). The RIDFP is a set of ion collectors for different velocities. It is designed to be immersed in magnetized plasmas and achieves momentum selection of incoming ions by selection of the ion Larmor radii. To nullify the influence of the sheath potential surrounding the RIDFP on the orbits of incoming ions, the electrostatic potential of the RIDFP body is automatically adjusted to coincide with the space potential of the target plasma using an emissive probe and a voltage follower. The developed RIDFP successfully measured the equilibrium ring-averaged velocity distribution function of a laboratory magnetized plasma, which was in accordance with a Maxwellian distribution having an ion temperature of 0.2 eV.

  8. Statistics of primordial density perturbations from discrete seed masses

    NASA Technical Reports Server (NTRS)

    Scherrer, Robert J.; Bertschinger, Edmund

    1991-01-01

    The statistics of density perturbations for general distributions of seed masses with arbitrary matter accretion is examined. Formal expressions for the power spectrum, the N-point correlation functions, and the density distribution function are derived. These results are applied to the case of uncorrelated seed masses, and power spectra are derived for accretion of both hot and cold dark matter plus baryons. The reduced moments (cumulants) of the density distribution are computed and used to obtain a series expansion for the density distribution function. Analytic results are obtained for the density distribution function in the case of a distribution of seed masses with a spherical top-hat accretion pattern. More generally, the formalism makes it possible to give a complete characterization of the statistical properties of any random field generated from a discrete linear superposition of kernels. In particular, the results can be applied to density fields derived by smoothing a discrete set of points with a window function.
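The last point, that the statistics of a field built from a discrete linear superposition of kernels are fully characterized by the kernel and the seed statistics, can be illustrated in one dimension: for uncorrelated (Poisson) seeds the expected power spectrum is simply the number of seeds times the squared kernel transform. The grid size, seed count, and Gaussian kernel (a stand-in for a top-hat accretion pattern) are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 4096, 4096.0      # grid points and (periodic) box length, arbitrary units
n_seeds = 200
sigma = 8.0              # kernel width

# Discrete linear superposition of kernels: place seeds at random positions
# and accrete matter around each with a Gaussian kernel.
x = np.arange(N)
delta = np.zeros(N)
for pos in rng.uniform(0, L, n_seeds):
    d = np.minimum(np.abs(x - pos), L - np.abs(x - pos))  # periodic distance
    delta += np.exp(-0.5 * (d / sigma) ** 2)
delta -= delta.mean()

# For uncorrelated seeds, E[P(k)] = n_seeds * |W(k)|^2 with W the kernel FFT.
P_measured = np.abs(np.fft.rfft(delta)) ** 2 / N
kernel = np.exp(-0.5 * (np.minimum(x, N - x) / sigma) ** 2)
P_expected = n_seeds * np.abs(np.fft.rfft(kernel)) ** 2 / N
```

Per-mode scatter around the expectation is of order 100% (each mode is exponentially distributed), so agreement is best checked after averaging over bins of k.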

  9. Ring-averaged ion velocity distribution function probe for laboratory magnetized plasma experiment.

    PubMed

    Kawamori, Eiichirou; Chen, Jinting; Lin, Chiahsuan; Lee, Zongmau

    2017-10-01

The ring-averaged velocity distribution function of ions at a fixed guiding center position is a fundamental quantity in gyrokinetic plasma physics. We have developed a diagnostic tool for the ring-averaged velocity distribution function of ions for laboratory plasma experiments, named the ring-averaged ion distribution function probe (RIDFP). The RIDFP is a set of ion collectors for different velocities. It is designed to be immersed in magnetized plasmas and achieves momentum selection of incoming ions by selection of the ion Larmor radii. To nullify the influence of the sheath potential surrounding the RIDFP on the orbits of incoming ions, the electrostatic potential of the RIDFP body is automatically adjusted to coincide with the space potential of the target plasma using an emissive probe and a voltage follower. The developed RIDFP successfully measured the equilibrium ring-averaged velocity distribution function of a laboratory magnetized plasma, which was in accordance with a Maxwellian distribution having an ion temperature of 0.2 eV.

  10. Bivariate extreme value distributions

    NASA Technical Reports Server (NTRS)

    Elshamy, M.

    1992-01-01

In certain engineering applications, such as those occurring in the analyses of ascent structural loads for the Space Transportation System (STS), some of the load variables have a lower bound of zero. Thus, the need for practical models of bivariate extreme value probability distribution functions with lower limits was identified. We discuss the Gumbel models and present practical forms of bivariate extreme probability distributions of Weibull and Frechet types with two parameters. Bivariate extreme value probability distribution functions can be expressed in terms of the marginal extremal distributions and a 'dependence' function subject to certain analytical conditions. Properties of such bivariate extreme distributions, sums and differences of paired extremals, as well as the corresponding forms of conditional distributions, are discussed. Practical estimation techniques are also given.
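As a concrete instance of expressing a bivariate extreme value CDF through its margins and a dependence function, the sketch below uses the logistic (Gumbel-Hougaard) dependence model with Weibull-type margins bounded below by zero. The parameter values are illustrative assumptions, not the STS load models of the report.

```python
import numpy as np

def weibull_cdf(x, shape, scale):
    """Weibull-type margin with lower bound zero."""
    return np.where(x > 0, 1 - np.exp(-(x / scale) ** shape), 0.0)

def bivariate_ev_cdf(x, y, shape_x, scale_x, shape_y, scale_y, alpha):
    """Logistic (Gumbel-Hougaard) bivariate extreme value CDF for x, y > 0:
    H(x, y) = exp(-[(-ln F1)^(1/alpha) + (-ln F2)^(1/alpha)]^alpha),
    with 0 < alpha <= 1; alpha = 1 recovers independence H = F1 * F2."""
    f1 = weibull_cdf(x, shape_x, scale_x)
    f2 = weibull_cdf(y, shape_y, scale_y)
    u = (-np.log(f1)) ** (1 / alpha) + (-np.log(f2)) ** (1 / alpha)
    return np.exp(-(u ** alpha))
```

Smaller alpha means stronger positive dependence between the paired extremes, so H lies above the independent product F1 * F2 for alpha < 1.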

  11. The Magnetron Method for the Determination of e/m for Electrons: Revisited

    ERIC Educational Resources Information Center

    Azooz, A. A.

    2007-01-01

    Additional information concerning the energy distribution function of electrons in a magnetron diode valve can be extracted. This distribution function is a manifestation of the effect of space charge at the anode. The electron energy distribution function in the magnetron is obtained from studying the variation of the anode current with the…

  12. Distribution of neurons in functional areas of the mouse cerebral cortex reveals quantitatively different cortical zones

    PubMed Central

    Herculano-Houzel, Suzana; Watson, Charles; Paxinos, George

    2013-01-01

    How are neurons distributed along the cortical surface and across functional areas? Here we use the isotropic fractionator (Herculano-Houzel and Lent, 2005) to analyze the distribution of neurons across the entire isocortex of the mouse, divided into 18 functional areas defined anatomically. We find that the number of neurons underneath a surface area (the N/A ratio) varies 4.5-fold across functional areas and neuronal density varies 3.2-fold. The face area of S1 contains the most neurons, followed by motor cortex and the primary visual cortex. Remarkably, while the distribution of neurons across functional areas does not accompany the distribution of surface area, it mirrors closely the distribution of cortical volumes—with the exception of the visual areas, which hold more neurons than expected for their volume. Across the non-visual cortex, the volume of individual functional areas is a shared linear function of their number of neurons, while in the visual areas, neuronal densities are much higher than in all other areas. In contrast, the 18 functional areas cluster into three different zones according to the relationship between the N/A ratio and cortical thickness and neuronal density: these three clusters can be called visual, sensory, and, possibly, associative. These findings are remarkably similar to those in the human cerebral cortex (Ribeiro et al., 2013) and suggest that, like the human cerebral cortex, the mouse cerebral cortex comprises two zones that differ in how neurons form the cortical volume, and three zones that differ in how neurons are distributed underneath the cortical surface, possibly in relation to local differences in connectivity through the white matter. 
Our results suggest that beyond the developmental divide into visual and non-visual cortex, functional areas initially share a common distribution of neurons along the parenchyma that become delimited into functional areas according to the pattern of connectivity established later. PMID:24155697

  13. Longitudinal Distribution of the Functional Feeding Groups of Aquatic Insects in Streams of the Brazilian Cerrado Savanna.

    PubMed

    Brasil, L S; Juen, L; Batista, J D; Pavan, M G; Cabette, H S R

    2014-10-01

    We demonstrate that the distribution of the functional feeding groups of aquatic insects is related to hierarchical patch dynamics. Patches are sites with unique environmental and functional characteristics that are discontinuously distributed in time and space within a lotic system. This distribution predicts that the occurrence of species will be based predominantly on their environmental requirements. We sampled three streams within the same drainage basin in the Brazilian Cerrado savanna, focusing on waterfalls and associated habitats (upstream, downstream), representing different functional zones. We collected 2,636 specimens representing six functional feeding groups (FFGs): brushers, collector-gatherers, collector-filterers, shredders, predators, and scrapers. The frequency of occurrence of these groups varied significantly among environments. This variation appeared to be related to the distinct characteristics of the different habitat patches, which led us to infer that the hierarchical patch dynamics model can best explain the distribution of functional feeding groups in minor lotic environments, such as waterfalls.

  14. Survival Bayesian Estimation of Exponential-Gamma Under Linex Loss Function

    NASA Astrophysics Data System (ADS)

    Rizki, S. W.; Mara, M. N.; Sulistianingsih, E.

    2017-06-01

This paper presents an analysis of censored survival data from cancer patients after treatment, using Bayesian estimation under the Linex loss function for a survival model assumed to follow an exponential distribution. Combining a Gamma prior with the likelihood function produces a Gamma posterior distribution. The posterior distribution is used to find the estimator λ̂_BL via the Linex approximation. From λ̂_BL, the estimators of the hazard function ĥ_BL and the survival function Ŝ_BL can be found. Finally, we compare the results of Maximum Likelihood Estimation (MLE) and the Linex approximation by their MSE to find the better method for this observation. The MSEs of the hazard and survival functions under MLE are 2.91728E-07 and 0.000309004, while under Bayesian Linex they are 2.8727E-07 and 0.000304131, respectively. We conclude that Bayesian Linex is better than MLE.
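For the exponential-Gamma conjugate pair, the Linex Bayes estimator has a closed form, which the sketch below verifies against direct numerical integration. The prior hyperparameters (a, b), the loss parameter c, and the lifetime data are illustrative assumptions, not the paper's values.

```python
import numpy as np
from scipy import integrate, stats

# Exponential lifetimes with rate lam; a Gamma(a, b) prior on lam (rate
# parametrization) gives a Gamma(a + n, b + sum(t)) posterior.
# Under Linex loss L(d) = exp(c*d) - c*d - 1 with d = estimate - lam, the
# Bayes estimator is  lam_BL = -(1/c) * ln E[exp(-c*lam) | data],
# which for a Gamma(alpha, beta) posterior equals (alpha/c) * ln(1 + c/beta).
a, b, c = 2.0, 1.0, 0.5
t = np.array([1.2, 0.7, 3.4, 2.1, 0.9, 1.5])   # illustrative lifetimes

alpha, beta = a + len(t), b + t.sum()
lam_BL = (alpha / c) * np.log(1.0 + c / beta)

# Cross-check against a direct numerical evaluation of E[exp(-c*lam)].
post = stats.gamma(alpha, scale=1.0 / beta)
mgf, _ = integrate.quad(lambda x: np.exp(-c * x) * post.pdf(x), 0, np.inf)
lam_check = -np.log(mgf) / c
```

The survival and hazard estimators then follow by plugging λ̂_BL into S(t) = exp(-λt) and h(t) = λ for the exponential model.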

  15. fixedTimeEvents: An R package for the distribution of distances between discrete events in fixed time

    NASA Astrophysics Data System (ADS)

    Liland, Kristian Hovde; Snipen, Lars

    When a series of Bernoulli trials occur within a fixed time frame or limited space, it is often interesting to assess if the successful outcomes have occurred completely at random, or if they tend to group together. One example, in genetics, is detecting grouping of genes within a genome. Approximations of the distribution of successes are possible, but they become inaccurate for small sample sizes. In this article, we describe the exact distribution of time between random, non-overlapping successes in discrete time of fixed length. A complete description of the probability mass function, the cumulative distribution function, mean, variance and recurrence relation is included. We propose an associated test for the over-representation of short distances and illustrate the methodology through relevant examples. The theory is implemented in an R package including probability mass, cumulative distribution, quantile function, random number generator, simulation functions, and functions for testing.
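The associated test for over-representation of short distances can be mimicked with a simple Monte Carlo version (the package itself uses the exact distribution). The statistic, sample sizes, and positions below are illustrative assumptions, not the package's API.

```python
import random

def min_gap(positions):
    """Smallest distance between consecutive successes."""
    p = sorted(positions)
    return min(b - a for a, b in zip(p, p[1:]))

def short_distance_test(n, positions, n_sim=2000, rng=None):
    """Monte Carlo p-value for 'successes are closer together than random':
    the chance that m uniformly random success positions among n trials
    produce a minimum gap at least as small as the observed one."""
    rng = rng or random.Random(0)
    m = len(positions)
    observed = min_gap(positions)
    null = [min_gap(rng.sample(range(n), m)) for _ in range(n_sim)]
    return sum(g <= observed for g in null) / n_sim
```

With small samples the exact discrete distribution implemented in the package replaces the simulation step, which is precisely where the approximations mentioned above become inaccurate.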

  16. Bayesian extraction of the parton distribution amplitude from the Bethe-Salpeter wave function

    NASA Astrophysics Data System (ADS)

    Gao, Fei; Chang, Lei; Liu, Yu-xin

    2017-07-01

    We propose a new numerical method to compute the parton distribution amplitude (PDA) from the Euclidean Bethe-Salpeter wave function. The essential step is to extract the weight function in the Nakanishi representation of the Bethe-Salpeter wave function in Euclidean space, which is an ill-posed inversion problem, via the maximum entropy method (MEM). The Nakanishi weight function as well as the corresponding light-front parton distribution amplitude (PDA) can be well determined. We confirm prior work on PDA computations, which was based on different methods.

  17. Design of Distributed Engine Control Systems for Stability Under Communication Packet Dropouts

    DTIC Science & Technology

    2009-08-01

remarks. II. Distributed Engine Control Systems A. FADEC based on Distributed Engine Control Architecture (DEC) In Distributed Engine...Control, the functions of Full Authority Digital Engine Control (FADEC) are distributed at the component level. Each sensor/actuator is to be replaced...diagnostics and health management functionality. A dual-channel digital serial communication network is used to connect these smart modules with the FADEC.

  18. Cerebral palsy in Victoria: motor types, topography and gross motor function.

    PubMed

    Howard, Jason; Soo, Brendan; Graham, H Kerr; Boyd, Roslyn N; Reid, Sue; Lanigan, Anna; Wolfe, Rory; Reddihough, Dinah S

    2005-01-01

To study the relationships between motor type, topographical distribution and gross motor function in a large, population-based cohort of children with cerebral palsy (CP) from the State of Victoria, and compare this cohort to similar cohorts from other countries. An inception cohort was generated from the Victorian Cerebral Palsy Register (VCPR) for the birth years 1990-1992. Demographic information, motor types and topographical distribution were obtained from the register and supplemented by grading gross motor function according to the Gross Motor Function Classification System (GMFCS). Complete data were obtained on 323 (86%) of 374 children in the cohort. Gross motor function varied from GMFCS level I (35%) to GMFCS level V (18%) and was similar in distribution to a contemporaneous Swedish cohort. There was a fairly even distribution across the topographical distributions of hemiplegia (35%), diplegia (28%) and quadriplegia (37%), with a large majority of young people having the spastic motor type (86%). The VCPR is ideal for population-based studies of gross motor function in children with CP. Gross motor function is similar in populations of children with CP in developed countries, but the comparison of motor types and topographical distribution is difficult because of a lack of consensus on classification systems. Use of the GMFCS provides a valid and reproducible method for clinicians to describe gross motor function in children with CP using a universal language.

  19. Distributing Leadership to Make Schools Smarter: Taking the Ego out of the System

    ERIC Educational Resources Information Center

    Leithwood, Kenneth; Mascall, Blair; Strauss, Tiiu; Sacks, Robin; Memon, Nadeem; Yashkina, Anna

    2007-01-01

    In this study, we inquired about patterns of leadership distribution, as well as which leadership functions were performed by whom, the characteristics of nonadministrative leaders, and the factors promoting and inhibiting the distribution of leadership functions. We consider our account of distributed leadership in this district to be a probable…

  20. The concept of temperature in space plasmas

    NASA Astrophysics Data System (ADS)

    Livadiotis, G.

    2017-12-01

    Independently of the initial distribution function, once the system is thermalized, its particles are stabilized into a specific distribution function parametrized by a temperature. Classical particle systems in thermal equilibrium have their phase-space distribution stabilized into a Maxwell-Boltzmann function. In contrast, space plasmas are particle systems frequently described by stationary states out of thermal equilibrium, namely, their distribution is stabilized into a function that is typically described by kappa distributions. The temperature is well-defined for systems at thermal equilibrium or stationary states described by kappa distributions. This is based on the equivalence of the two fundamental definitions of temperature, that is (i) the kinetic definition of Maxwell (1866) and (ii) the thermodynamic definition of Clausius (1862). This equivalence holds either for Maxwellians or kappa distributions, leading also to the equipartition theorem. The temperature and kappa index (together with density) are globally independent parameters characterizing the kappa distribution. While there is no equation of state or any universal relation connecting these parameters, various local relations may exist along the streamlines of space plasmas. Observations revealed several types of such local relations among plasma thermal parameters.
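The relation between kappa distributions and the Maxwellian limit can be made concrete with a one-dimensional sketch: the kappa density has a power-law tail for finite κ and tends to the Maxwell-Boltzmann form as κ → ∞. The 1-D form and its parameters below are a standard textbook normalization, used here only for illustration.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def kappa_pdf(v, theta, kappa):
    """1-D kappa velocity distribution with thermal speed theta;
    tends to the Maxwellian (1/(theta*sqrt(pi))) * exp(-v**2/theta**2)
    as kappa -> infinity."""
    norm = gamma(kappa + 1.0) / (gamma(kappa + 0.5) * np.sqrt(np.pi * kappa) * theta)
    return norm * (1.0 + v ** 2 / (kappa * theta ** 2)) ** (-(kappa + 1.0))

# The density is normalized for any finite kappa...
area, _ = quad(lambda v: kappa_pdf(v, 1.0, 3.0), -np.inf, np.inf)

# ...and for large kappa its peak approaches the Maxwellian value at v = 0.
maxwell_peak = 1.0 / np.sqrt(np.pi)
```

The suprathermal tail (the power-law decay for finite κ) is exactly the feature that distinguishes space-plasma stationary states from classical thermal equilibrium in the abstract above.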

  1. Analyzing Distributed Functions in an Integrated Hazard Analysis

    NASA Technical Reports Server (NTRS)

    Morris, A. Terry; Massie, Michael J.

    2010-01-01

    Large scale integration of today's aerospace systems is achievable through the use of distributed systems. Validating the safety of distributed systems is significantly more difficult as compared to centralized systems because of the complexity of the interactions between simultaneously active components. Integrated hazard analysis (IHA), a process used to identify unacceptable risks and to provide a means of controlling them, can be applied to either centralized or distributed systems. IHA, though, must be tailored to fit the particular system being analyzed. Distributed systems, for instance, must be analyzed for hazards in terms of the functions that rely on them. This paper will describe systems-oriented IHA techniques (as opposed to traditional failure-event or reliability techniques) that should be employed for distributed systems in aerospace environments. Special considerations will be addressed when dealing with specific distributed systems such as active thermal control, electrical power, command and data handling, and software systems (including the interaction with fault management systems). Because of the significance of second-order effects in large scale distributed systems, the paper will also describe how to analyze the effects of secondary functions on other secondary functions through the use of channelization.

  2. Continuous-Time Finance and the Waiting Time Distribution: Multiple Characteristic Times

    NASA Astrophysics Data System (ADS)

    Fa, Kwok Sau

    2012-09-01

    In this paper, we model the tick-by-tick dynamics of markets by using the continuous-time random walk (CTRW) model. We employ a sum of products of power law and stretched exponential functions for the waiting time probability distribution function; this function can fit well the waiting time distribution for BUND futures traded at LIFFE in 1997.
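
The fitted functional form, a sum of products of a power law and a stretched exponential, can be sketched as follows. The weights, exponents, and scales below are illustrative placeholders, not the parameters fitted to the LIFFE data:

```python
import numpy as np

def waiting_time_pdf(t, terms):
    """Sum of products of a power law and a stretched exponential.
    `terms` is a list of (weight a, power mu, scale tau, stretching beta)."""
    f = np.zeros_like(t)
    for a, mu, tau, beta in terms:
        f += a * t**(-mu) * np.exp(-(t / tau) ** beta)
    return f

t = np.linspace(0.01, 100.0, 100_000)
dt = t[1] - t[0]
terms = [(1.0, 0.5, 5.0, 0.7), (0.2, 0.2, 20.0, 1.0)]  # illustrative values
f = waiting_time_pdf(t, terms)
f /= (f * dt).sum()  # normalize numerically on the grid
```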

  3. Cumulative distribution functions of attenuation due to rain on a 9.5 km link at 17.8 GHz

    NASA Technical Reports Server (NTRS)

    Fedi, F.; Migliorini, P.

    1981-01-01

    Measurement results of attenuation due to rain are reported. Cumulative distribution functions of the attenuation found on three links are described. Differences among the distribution functions for different polarizations and frequencies are demonstrated. The possibility of establishing a relationship between the statistics of annual attenuation and worst-month attenuation is explored.

  4. Differences in forest plant functional trait distributions across land-use and productivity gradients

    Treesearch

    Margaret M. Mayfield; John M. Dwyer; Loic Chalmandrier; Jessie A. Wells; Stephen P. Bonser; Carla P. Catterall; Fabrice DeClerck; Yi Ding; Jennifer M. Fraterrigo; Daniel J. Metcalfe; Cibele Queiroz; Peter A. Vesk; John W. Morgan

    2013-01-01

    • Premise of study: Plant functional traits are commonly used as proxies for plant responses to environmental challenges, yet few studies have explored how functional trait distributions differ across gradients of land-use change. By comparing trait distributions in intact forests with those across land-use change gradients, we can improve our understanding of the ways...

  5. Study on probability distribution of prices in electricity market: A case study of Zhejiang Province, China

    NASA Astrophysics Data System (ADS)

    Zhou, H.; Chen, B.; Han, Z. X.; Zhang, F. Q.

    2009-05-01

    The study of the probability density function and distribution function of electricity prices helps power suppliers and purchasers estimate their positions accurately, and helps the regulator monitor periods that deviate from the normal distribution. Based on the assumption of normally distributed load and the nonlinear characteristic of the aggregate supply curve, this paper derives the distribution of electricity prices as a function of the random load variable. The conclusion has been validated with electricity price data from the Zhejiang market. The results show that electricity prices obey a normal distribution approximately only when the supply-demand relationship is loose, whereas otherwise the prices deviate from the normal distribution and present a strong right-skewness characteristic. Finally, real electricity markets also display a narrow-peak characteristic when undersupply occurs.
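
The mechanism described, a normally distributed load pushed through a convex supply curve, can be illustrated with a small Monte Carlo sketch. The load statistics and the supply-curve shape below are invented for illustration and are not the Zhejiang data or the paper's derived formula:

```python
import numpy as np

rng = np.random.default_rng(0)

# Normally distributed load (illustrative mean/std)
load = rng.normal(loc=30.0, scale=4.0, size=200_000)

# A convex aggregate supply curve: price rises steeply as load nears capacity
capacity = 50.0
price = 10.0 + 200.0 / (capacity - np.clip(load, None, capacity - 1.0))

# Skewness of the simulated price distribution
skew = np.mean((price - price.mean()) ** 3) / price.std() ** 3
```

Even though the load is symmetric, the convexity of the supply curve produces the right-skewed price distribution the abstract reports for tight supply-demand conditions.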

  6. Statistical tests for whether a given set of independent, identically distributed draws comes from a specified probability density.

    PubMed

    Tygert, Mark

    2010-09-21

    We discuss several tests for determining whether a given set of independent and identically distributed (i.i.d.) draws does not come from a specified probability density function. The most commonly used are Kolmogorov-Smirnov tests, particularly Kuiper's variant, which focus on discrepancies between the cumulative distribution function for the specified probability density and the empirical cumulative distribution function for the given set of i.i.d. draws. Unfortunately, variations in the probability density function often get smoothed over in the cumulative distribution function, making it difficult to detect discrepancies in regions where the probability density is small in comparison with its values in surrounding regions. We discuss tests without this deficiency, complementing the classical methods. The tests of the present paper are based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability).
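
The idea that "it is unlikely to draw a random number whose probability is small" can be sketched as a Monte Carlo tail probability; this is a simplified illustration of the principle, not the authors' exact statistic:

```python
import numpy as np

rng = np.random.default_rng(1)

def low_density_pvalue(x, pdf, sampler, n=100_000):
    """Estimate P(f(X) <= f(x)) under the specified density f by Monte Carlo.
    A small value means x landed where the specified density is unusually low."""
    fx = pdf(x)
    draws = sampler(n)
    return np.mean(pdf(draws) <= fx)

pdf = lambda x: np.exp(-x**2 / 2.0) / np.sqrt(2.0 * np.pi)  # standard normal
sampler = lambda n: rng.normal(size=n)

p_typical = low_density_pvalue(0.5, pdf, sampler)  # a typical draw
p_extreme = low_density_pvalue(4.0, pdf, sampler)  # far in the tail
```

A draw at x = 4 sits where the normal density is tiny, so the estimated probability of seeing a density that low is itself tiny, exactly the signal the proposed tests exploit.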

  7. Impact of geometrical properties on permeability and fluid phase distribution in porous media

    NASA Astrophysics Data System (ADS)

    Lehmann, P.; Berchtold, M.; Ahrenholz, B.; Tölke, J.; Kaestner, A.; Krafczyk, M.; Flühler, H.; Künsch, H. R.

    2008-09-01

    To predict fluid phase distribution in porous media, the effect of geometric properties on flow processes must be understood. In this study, we analyze the effect of volume, surface, curvature and connectivity (the four Minkowski functionals) on the hydraulic conductivity and the water retention curve. For that purpose, we generated 12 artificial structures with 800^3 voxels (the units of a 3D image) and compared them with a scanned sand sample of the same size. The structures were generated with a Boolean model based on a random distribution of overlapping ellipsoids whose size and shape were chosen to fulfill the criteria of the measured functionals. The pore structure of the sand material was mapped with X-rays from synchrotrons. To analyze the effect of geometry on water flow and fluid distribution, we carried out three types of analysis: Firstly, we computed geometrical properties like chord length, distance from the solids, pore size distribution and the Minkowski functionals as a function of pore size. Secondly, the fluid phase distribution as a function of the applied pressure was calculated with a morphological pore network model. Thirdly, the permeability was determined using a state-of-the-art lattice-Boltzmann method. For the simulated structure with the true Minkowski functionals the pores were larger and the computed air-entry value of the artificial medium was reduced to 85% of the value obtained from the scanned sample. The computed permeability for the geometry with the four fitted Minkowski functionals was equal to the permeability of the scanned image. The permeability was much more sensitive to the volume and surface than to the curvature and connectivity of the medium. We conclude that the Minkowski functionals are not sufficient to characterize the geometrical properties of a porous structure that are relevant for the distribution of two fluid phases. Depending on the procedure used to generate artificial structures with predefined Minkowski functionals, structures differing in pore size distribution can be obtained.

  8. Parallel Measurements of Light Scattering and Characterization of Marine Particles in Water: An Evaluation of Methodology

    DTIC Science & Technology

    2008-01-01

    A second objective is to characterize variability in the volume scattering function (VSF) and the particle size distribution (PSD) for various optical water types, through analysis of in situ optical measurements and particle size distributions. Approved for public release; distribution unlimited.

  9. Standard services for the capture, processing, and distribution of packetized telemetry data

    NASA Technical Reports Server (NTRS)

    Stallings, William H.

    1989-01-01

    Standard functional services for the capture, processing, and distribution of packetized data are discussed with particular reference to the future implementation of packet processing systems, such as those for the Space Station Freedom. The major functions are listed under the following major categories: input processing, packet processing, and output processing. A functional block diagram of a packet data processing facility is presented, showing the distribution of the various processing functions as well as the primary data flow through the facility.

  10. Turbulent Equilibria for Charged Particles in Space

    NASA Astrophysics Data System (ADS)

    Yoon, Peter

    2017-04-01

    The solar wind electron distribution function is apparently composed of several components, including a non-thermal tail population. The electron distribution that contains the energetic tail feature is well fitted with the kappa distribution function. The solar wind protons also possess a quasi power-law tail distribution that is well fitted with an inverse power-law model. The present paper discusses the latest theoretical development regarding the dynamical steady-state solution for electrons and Langmuir turbulence that are in turbulent equilibrium. According to this theory, the Maxwellian and kappa distribution functions for the electrons emerge as the only two possible solutions that satisfy the steady-state weak turbulence plasma kinetic equation. For the proton inverse power-law tail problem, a similar turbulent equilibrium solution can be conceived of, but instead of high-frequency Langmuir fluctuations, the theory involves low-frequency kinetic Alfvenic turbulence. The steady-state solution of the self-consistent proton kinetic equation and the wave kinetic equation for Alfvenic waves can be found in order to obtain a self-consistent solution for the inverse power-law tail distribution function.

  11. Comparable Analysis of the Distribution Functions of Runup Heights of the 1896, 1933 and 2011 Japanese Tsunamis in the Sanriku Area

    NASA Astrophysics Data System (ADS)

    Choi, B. H.; Min, B. I.; Yoshinobu, T.; Kim, K. O.; Pelinovsky, E.

    2012-04-01

    Data from a field survey of the 2011 tsunami in the Sanriku area of Japan is presented and used to plot the distribution function of runup heights along the coast. It is shown that the distribution function can be approximated using a theoretical log-normal curve [Choi et al, 2002]. The characteristics of the distribution functions derived from the runup-heights data obtained during the 2011 event are compared with data from two previous gigantic tsunamis (1896 and 1933) that occurred in almost the same region. The number of observations during the last tsunami is very large (more than 5,247), which provides an opportunity to revise the conception of the distribution of tsunami wave heights and the relationship between statistical characteristics and number of observations suggested by Kajiura [1983]. The distribution function of the 2011 event demonstrates the sensitivity to the number of observation points (many of them cannot be considered independent measurements) and can be used to determine the characteristic scale of the coast, which corresponds to the statistical independence of observed wave heights.
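
Fitting a theoretical log-normal curve to runup heights can be sketched with synthetic data standing in for the survey measurements; the sample size matches the abstract's observation count, but the log-normal parameters below are illustrative, not the fitted Sanriku values:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for surveyed runup heights (metres); mu and sigma
# are illustrative choices, not estimates from the 2011 survey
heights = rng.lognormal(mean=2.0, sigma=0.5, size=5247)

# Log-normal fit by the method of moments in log space:
# if H is log-normal, log(H) is normal with parameters (mu, sigma)
log_h = np.log(heights)
mu_hat, sigma_hat = log_h.mean(), log_h.std()
```

With thousands of observations the two log-moments pin down the curve tightly, which is why the number of (independent) observation points matters so much in the comparison the abstract describes.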

  12. Functional linear models for zero-inflated count data with application to modeling hospitalizations in patients on dialysis.

    PubMed

    Sentürk, Damla; Dalrymple, Lorien S; Nguyen, Danh V

    2014-11-30

    We propose functional linear models for zero-inflated count data with a focus on the functional hurdle and functional zero-inflated Poisson (ZIP) models. While the hurdle model assumes the counts come from a mixture of a degenerate distribution at zero and a zero-truncated Poisson distribution, the ZIP model considers a mixture of a degenerate distribution at zero and a standard Poisson distribution. We extend the generalized functional linear model framework with a functional predictor and multiple cross-sectional predictors to model counts generated by a mixture distribution. We propose an estimation procedure for functional hurdle and ZIP models, called penalized reconstruction, geared towards error-prone and sparsely observed longitudinal functional predictors. The approach relies on dimension reduction and pooling of information across subjects involving basis expansions and penalized maximum likelihood techniques. The developed functional hurdle model is applied to modeling hospitalizations within the first 2 years from initiation of dialysis, with a high percentage of zeros, in the Comprehensive Dialysis Study participants. Hospitalization counts are modeled as a function of sparse longitudinal measurements of serum albumin concentrations, patient demographics, and comorbidities. Simulation studies are used to study finite sample properties of the proposed method and include comparisons with an adaptation of standard principal components regression. Copyright © 2014 John Wiley & Sons, Ltd.
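
The zero-inflation mixture described above is easy to simulate; a minimal sketch with illustrative parameters (pi0 and lam are invented, not estimates from the dialysis study):

```python
import numpy as np

rng = np.random.default_rng(8)

# Zero-inflated Poisson: a degenerate mass at zero mixed with a standard Poisson
pi0, lam, n = 0.4, 3.0, 200_000
structural_zero = rng.random(n) < pi0
counts = np.where(structural_zero, 0, rng.poisson(lam, size=n))

# The observed zero fraction far exceeds the plain Poisson value exp(-lam)
zero_frac = np.mean(counts == 0)
```

The excess of zeros over exp(-lam) is exactly the feature that motivates ZIP and hurdle models over a plain Poisson regression.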

  13. A Robust Function to Return the Cumulative Density of Non-Central F Distributions in Microsoft Office Excel

    ERIC Educational Resources Information Center

    Nelson, James Byron

    2016-01-01

    The manuscript presents a Visual Basic® for Applications function that operates within Microsoft Office Excel® to return the area below the curve for a given F within a specified non-central F distribution. The function will be of use to Excel users without programming experience wherever a non-central F distribution is…
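
A Monte Carlo cross-check of such a non-central F CDF value is straightforward outside Excel; this is not the VBA function itself, just an independent sketch of the quantity it returns, with arbitrary example parameters:

```python
import numpy as np

rng = np.random.default_rng(3)

def ncf_cdf_mc(f, dfn, dfd, nonc, n=500_000):
    """Monte Carlo estimate of the non-central F CDF: the fraction of draws
    from the non-central F(dfn, dfd, nonc) distribution that fall at or below f."""
    draws = rng.noncentral_f(dfn, dfd, nonc, size=n)
    return np.mean(draws <= f)

p_central = ncf_cdf_mc(3.0, 2, 40, 0.0)     # nonc = 0 reduces to the central F CDF
p_noncentral = ncf_cdf_mc(3.0, 2, 40, 5.0)  # noncentrality shifts mass rightward
```

For dfn = 2 the central F CDF has the closed form 1 - (1 + 2f/dfd)^(-dfd/2), which gives a convenient sanity check on the estimator.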

  14. Covariant extension of the GPD overlap representation at low Fock states

    DOE PAGES

    Chouika, N.; Mezrag, C.; Moutarde, H.; ...

    2017-12-26

    Here, we present a novel approach to compute generalized parton distributions within the lightfront wave function overlap framework. We show how to systematically extend generalized parton distributions computed within the DGLAP region to the ERBL one, fulfilling at the same time both the polynomiality and positivity conditions. We exemplify our method using pion lightfront wave functions inspired by recent results of non-perturbative continuum techniques and algebraic nucleon lightfront wave functions. We also test the robustness of our algorithm on reggeized phenomenological parameterizations. This approach paves the way to a better understanding of the nucleon structure from non-perturbative techniques and to a unification of generalized parton distributions and transverse momentum dependent parton distribution functions phenomenology through lightfront wave functions.

  15. The eigenvalue problem in phase space.

    PubMed

    Cohen, Leon

    2018-06-30

    We formulate the standard quantum mechanical eigenvalue problem in quantum phase space. The equation obtained involves the c-function that corresponds to the quantum operator. We use the Wigner distribution for the phase space function. We argue that the phase space eigenvalue equation obtained has, in addition to the proper solutions, improper solutions: that is, solutions for which no wave function exists which could generate the distribution. We discuss the conditions for ascertaining whether a position-momentum function is a proper phase space distribution. We call these conditions psi-representability conditions, and show that if these conditions are imposed, one extracts the correct phase space eigenfunctions. We also derive the phase space eigenvalue equation for arbitrary phase space distribution functions. © 2017 Wiley Periodicals, Inc.

  16. Exact probability distribution function for the volatility of cumulative production

    NASA Astrophysics Data System (ADS)

    Zadourian, Rubina; Klümper, Andreas

    2018-04-01

    In this paper we study the volatility and its probability distribution function for the cumulative production based on the experience curve hypothesis. This work presents a generalization of the study of volatility in Lafond et al. (2017), which addressed the effects of normally distributed noise in the production process. Due to its wide applicability in industrial and technological activities, we present here the mathematical foundation for an arbitrary distribution function of the process, which we expect will pave the way for future research on forecasting of the production process.

  17. Solution of QCD⊗QED coupled DGLAP equations at NLO

    NASA Astrophysics Data System (ADS)

    Zarrin, S.; Boroun, G. R.

    2017-09-01

    In this work, we present an analytical solution for QCD⊗QED coupled Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) evolution equations at the leading order (LO) accuracy in QED and next-to-leading order (NLO) accuracy in perturbative QCD using a double Laplace transform. This technique is applied to obtain the singlet, gluon and photon distribution functions and also the proton structure function. We also obtain the contribution of the photon in the proton at LO and NLO at high energy and successfully compare the proton structure function with HERA data [1] and APFEL results [2]. Comparisons have also been made for the singlet and gluon distribution functions with the MSTW results [3]. In addition, the contribution of the photon distribution function inside the proton has been compared with the results of MRST [4] and with the contribution of the sea quark distribution functions obtained by MSTW [3] and CTEQ6M [5].

  18. The two-point correlation function for groups of galaxies in the Center for Astrophysics redshift survey

    NASA Technical Reports Server (NTRS)

    Ramella, Massimo; Geller, Margaret J.; Huchra, John P.

    1990-01-01

    The large-scale distribution of groups of galaxies selected from complete slices of the CfA redshift survey extension is examined. The survey is used to reexamine the contribution of group members to the galaxy correlation function. The relationship between the correlation function for groups and those calculated for rich clusters is discussed, and the results for groups are examined as an extension of the relation between correlation function amplitude and richness. The group correlation function indicates that groups and individual galaxies are equivalent tracers of the large-scale matter distribution. The distribution of group centers is equivalent to random sampling of the galaxy distribution. The amplitude of the correlation function for groups is consistent with an extrapolation of the amplitude-richness relation for clusters. The amplitude scaled by the mean intersystem separation is also consistent with results for richer clusters.
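
A two-point correlation estimator of the kind used here can be sketched minimally with the simple "natural" estimator DD/RR - 1 on toy catalogues (this is an illustrative estimator choice, not necessarily the one used in the CfA analysis):

```python
import numpy as np

rng = np.random.default_rng(7)

def pair_counts(points, r_edges):
    """Histogram of unique pair separations over the given radial bins."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    iu = np.triu_indices(len(points), k=1)  # each pair counted once
    return np.histogram(d[iu], bins=r_edges)[0]

# A uniform toy "catalogue" compared against a uniform random catalogue:
# the estimator should scatter around zero; a clustered catalogue would
# give positive values on small scales
data = rng.uniform(0.0, 1.0, size=(500, 3))
rand = rng.uniform(0.0, 1.0, size=(500, 3))
r_edges = np.linspace(0.05, 0.30, 6)
xi = pair_counts(data, r_edges) / pair_counts(rand, r_edges) - 1.0
```

Statements like "groups and galaxies are equivalent tracers" amount to such xi estimates agreeing, bin by bin, between the two samples.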

  19. Low-energy ion distribution functions on a magnetically quiet day at geostationary altitude /L = 7/

    NASA Technical Reports Server (NTRS)

    Singh, N.; Raitt, W. J.; Yasuhara, F.

    1982-01-01

    Ion energy and pitch angle distribution functions are examined for a magnetically quiet day using averaged data from ATS 6. For both field-aligned and perpendicular fluxes, the populations have a mixture of characteristic energies, and the distribution functions can be fairly well approximated by Maxwellian distributions over three different energy bands in the range 3-600 eV. Pitch angle distributions varying with local time, together with the energy distributions, are used to compute the total ion density. Pitch angle scattering mechanisms responsible for the observed transformation of the pitch angle distribution are examined, and it is found that magnetic noise of a certain power spectral density, belonging to the electromagnetic ion cyclotron mode near the ion cyclotron frequency, can be effective in trapping the field-aligned fluxes by pitch angle scattering.

  20. Thermodynamic and redox properties of graphene oxides for lithium-ion battery applications: a first principles density functional theory modeling approach.

    PubMed

    Kim, Sunghee; Kim, Ki Chul; Lee, Seung Woo; Jang, Seung Soon

    2016-07-27

    Understanding the thermodynamic stability and redox properties of oxygen functional groups on graphene is critical to systematically design stable graphene-based positive electrode materials with high potential for lithium-ion battery applications. In this work, we study the thermodynamic and redox properties of graphene functionalized with carbonyl and hydroxyl groups, and the evolution of these properties with the number, types and distribution of functional groups, by employing the density functional theory method. It is found that the redox potential of the functionalized graphene is sensitive to the types, number, and distribution of oxygen functional groups. First, the carbonyl group induces a higher redox potential than the hydroxyl group. Second, more carbonyl groups result in a higher redox potential. Lastly, a locally concentrated distribution of carbonyl groups is more beneficial for achieving a high redox potential than a uniformly dispersed distribution. In contrast, the distribution of the hydroxyl group does not affect the redox potential significantly. Thermodynamic investigation demonstrates that the incorporation of carbonyl groups at the edge of graphene is a promising strategy for designing thermodynamically stable positive electrode materials with high redox potentials.

  1. Statistical distribution sampling

    NASA Technical Reports Server (NTRS)

    Johnson, E. S.

    1975-01-01

    Determining the distribution of statistics by sampling was investigated. Characteristic functions, the quadratic regression problem, and the differential equations for the characteristic functions are analyzed.

  2. Study of dust particle charging in weakly ionized inert gases taking into account the nonlocality of the electron energy distribution function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Filippov, A. V., E-mail: fav@triniti.ru; Dyatko, N. A.; Kostenko, A. S.

    2014-11-15

    The charging of dust particles in weakly ionized inert gases at atmospheric pressure has been investigated. The conditions under which the gas is ionized by an external source, a beam of fast electrons, are considered. The electron energy distribution function in argon, krypton, and xenon has been calculated for three rates of gas ionization by fast electrons: 10^13, 10^14, and 10^15 cm^-1. A model of dust particle charging with allowance for the nonlocal formation of the electron energy distribution function in the region of strong plasma quasi-neutrality violation around the dust particle is described. The nonlocality is taken into account in an approximation where the distribution function is a function of only the total electron energy. Comparative calculations of the dust particle charge with and without allowance for the nonlocality of the electron energy distribution function have been performed. Allowance for the nonlocality is shown to lead to a noticeable increase in the dust particle charge due to the influence of the group of hot electrons from the tail of the distribution function. It has been established that the screening constant virtually coincides with the smallest screening constant determined according to the asymptotic theory of screening with the electron transport and recombination coefficients in an unperturbed plasma.

  3. Discriminating topology in galaxy distributions using network analysis

    NASA Astrophysics Data System (ADS)

    Hong, Sungryong; Coutinho, Bruno C.; Dey, Arjun; Barabási, Albert-L.; Vogelsberger, Mark; Hernquist, Lars; Gebhardt, Karl

    2016-07-01

    The large-scale distribution of galaxies is generally analysed using the two-point correlation function. However, this statistic does not capture the topology of the distribution, and it is necessary to resort to higher order correlations to break degeneracies. We demonstrate that an alternate approach using network analysis can discriminate between topologically different distributions that have similar two-point correlations. We investigate two galaxy point distributions, one produced by a cosmological simulation and the other by a Lévy walk. For the cosmological simulation, we adopt the redshift z = 0.58 slice from Illustris and select galaxies with stellar masses greater than 10^8 M⊙. The two-point correlation function of these simulated galaxies follows a single power law, ξ(r) ~ r^-1.5. Then, we generate Lévy walks matching the correlation function and abundance with the simulated galaxies. We find that, while the two simulated galaxy point distributions have the same abundance and two-point correlation function, their spatial distributions are very different; most prominently, the filamentary structures present in the simulation are absent in the Lévy fractals. To quantify these missing topologies, we adopt network analysis tools and measure diameter, giant component, and transitivity from networks built by a conventional friends-of-friends recipe with various linking lengths. Unlike the abundance and two-point correlation function, these network quantities reveal a clear separation between the two simulated distributions; therefore, the galaxy distribution simulated by Illustris is quantitatively not a Lévy fractal. We find that the described network quantities offer an efficient tool for discriminating topologies and for comparing observed and theoretical distributions.
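
The friends-of-friends plus transitivity recipe can be sketched on toy point sets; the clustered and uniform distributions below are small synthetic stand-ins, not the Illustris or Lévy-walk catalogues, and the linking length is an arbitrary illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(6)

def transitivity(points, linking_length):
    """Build a friends-of-friends graph at the given linking length and return
    its transitivity: 3 * (number of triangles) / (number of connected triples)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    a = ((d < linking_length) & (d > 0)).astype(float)
    triangles = np.trace(a @ a @ a) / 6.0       # each triangle gives 6 closed walks
    deg = a.sum(axis=1)
    triples = (deg * (deg - 1.0)).sum() / 2.0   # paths of length two
    return 3.0 * triangles / triples if triples > 0 else 0.0

# Toy 2D point sets: a blob-clustered distribution versus a uniform one
centers = rng.uniform(0.0, 10.0, size=(20, 2))
clustered = centers[rng.integers(0, 20, size=400)] + rng.normal(scale=0.2, size=(400, 2))
uniform = rng.uniform(0.0, 10.0, size=(400, 2))

t_clustered = transitivity(clustered, 0.5)
t_uniform = transitivity(uniform, 0.5)
```

Dense blobs produce near-complete subgraphs and hence higher transitivity than a uniform scatter, which is the kind of topological separation the abstract exploits.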

  4. The correlation function for density perturbations in an expanding universe. IV - The evolution of the correlation function. [galaxy distribution

    NASA Technical Reports Server (NTRS)

    Mcclelland, J.; Silk, J.

    1979-01-01

    The evolution of the two-point correlation function for the large-scale distribution of galaxies in an expanding universe is studied on the assumption that the perturbation densities lie in a Gaussian distribution centered on any given mass scale. The perturbations are evolved according to the Friedmann equation, and the correlation function for the resulting distribution of perturbations at the present epoch is calculated. It is found that: (1) the computed correlation function gives a satisfactory fit to the observed function in cosmological models with a density parameter (Omega) of approximately unity, provided that a certain free parameter is suitably adjusted; (2) the power-law slope in the nonlinear regime reflects the initial fluctuation spectrum, provided that the density profile of individual perturbations declines more rapidly than the -2.4 power of distance; and (3) both positive and negative contributions to the correlation function are predicted for cosmological models with Omega less than unity.

  5. Naima: a Python package for inference of particle distribution properties from nonthermal spectra

    NASA Astrophysics Data System (ADS)

    Zabalza, V.

    2015-07-01

    The ultimate goal of the observation of nonthermal emission from astrophysical sources is to understand the underlying particle acceleration and evolution processes, and few tools are publicly available to infer the particle distribution properties from the observed photon spectra from X-ray to VHE gamma rays. Here I present naima, an open source Python package that provides models for nonthermal radiative emission from homogeneous distributions of relativistic electrons and protons. Contributions from synchrotron, inverse Compton, nonthermal bremsstrahlung, and neutral-pion decay can be computed for a series of functional shapes of the particle energy distributions, with the possibility of using user-defined particle distribution functions. In addition, naima provides a set of functions that allow these models to be used to fit observed nonthermal spectra through an MCMC procedure, obtaining probability distribution functions for the particle distribution parameters. Here I present the models and methods available in naima and an example of their application to the understanding of a galactic nonthermal source. naima's documentation, including how to install the package, is available at http://naima.readthedocs.org.

  6. Evaluation of the Three Parameter Weibull Distribution Function for Predicting Fracture Probability in Composite Materials

    DTIC Science & Technology

    1978-03-01

    for the risk of rupture for a unidirectionally laminated composite subjected to pure bending. This equation can be simplified further by use of... EVALUATION OF THE THREE PARAMETER WEIBULL DISTRIBUTION FUNCTION FOR PREDICTING FRACTURE PROBABILITY IN COMPOSITE MATERIALS. Thesis, AFIT/GAE.

  7. Use of the Weibull function to predict future diameter distributions from current plot data

    Treesearch

    Quang V. Cao

    2012-01-01

    The Weibull function has been widely used to characterize diameter distributions in forest stands. The future diameter distribution of a forest stand can be predicted by use of a Weibull probability density function from current inventory data for that stand. The parameter recovery approach has been used to “recover” the Weibull parameters from diameter moments or...
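
The parameter-recovery idea, solving for Weibull parameters from sample moments, can be sketched as follows. This is a simple moment-based variant (shape from the coefficient of variation by bisection, then scale from the mean), not necessarily the exact recovery scheme of the paper, and the synthetic "diameters" are illustrative:

```python
import math
import numpy as np

rng = np.random.default_rng(4)

def recover_weibull(mean, std):
    """Recover two-parameter Weibull (shape k, scale lam) from the first two
    moments: CV^2 = Gamma(1+2/k)/Gamma(1+1/k)^2 - 1 is monotone decreasing in k."""
    target = (std / mean) ** 2
    def cv2(k):
        return math.gamma(1 + 2 / k) / math.gamma(1 + 1 / k) ** 2 - 1.0
    lo, hi = 0.1, 50.0
    for _ in range(100):           # bisection on the shape parameter
        mid = 0.5 * (lo + hi)
        if cv2(mid) > target:
            lo = mid
        else:
            hi = mid
    k = 0.5 * (lo + hi)
    lam = mean / math.gamma(1 + 1 / k)
    return k, lam

# Synthetic "diameters" from a known Weibull, then recover its parameters
d = rng.weibull(2.5, size=200_000) * 20.0   # shape 2.5, scale 20 (illustrative)
k_hat, lam_hat = recover_weibull(d.mean(), d.std())
```

Because the CV depends only on the shape, the two moments decouple cleanly, which is what makes moment-based recovery attractive for stand-table projection.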

  8. Probability distribution functions for intermittent scrape-off layer plasma fluctuations

    NASA Astrophysics Data System (ADS)

    Theodorsen, A.; Garcia, O. E.

    2018-03-01

    A stochastic model for intermittent fluctuations in the scrape-off layer of magnetically confined plasmas has been constructed based on a superposition of uncorrelated pulses arriving according to a Poisson process. In the most common applications of the model, the pulse amplitudes are assumed exponentially distributed, supported by conditional averaging of large-amplitude fluctuations in experimental measurement data. This basic assumption has two potential limitations. First, statistical analysis of measurement data using conditional averaging only reveals the tail of the amplitude distribution to be exponentially distributed. Second, exponentially distributed amplitudes lead to a positive definite signal which cannot capture fluctuations in, for example, electric potential and radial velocity. Assuming pulse amplitudes which are not positive definite often makes finding a closed form for the probability density function (PDF) difficult, even if the characteristic function remains relatively simple. Thus estimating model parameters requires an approach based on the characteristic function, not the PDF. In this contribution, the effect of changing the amplitude distribution on the moments, PDF and characteristic function of the process is investigated, and a parameter estimation method using the empirical characteristic function is presented and tested on synthetically generated data. This proves valuable for describing intermittent fluctuations of all plasma parameters in the boundary region of magnetized plasmas.
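
Parameter estimation from the empirical characteristic function can be sketched with a toy example: Laplace-distributed values, which are not positive definite yet have the simple characteristic function 1/(1 + b²u²). The model and parameters are illustrative, not the scrape-off-layer pulse model itself:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic signed signal values from a Laplace distribution
b_true = 2.0
x = rng.laplace(scale=b_true, size=100_000)

# Real part of the empirical characteristic function on a grid of frequencies
u = np.linspace(0.05, 2.0, 40)
ecf = np.array([np.mean(np.cos(ui * x)) for ui in u])

# One-parameter least-squares fit of the model CF over a grid of b values
b_grid = np.linspace(0.5, 4.0, 3501)
sse = [np.sum((ecf - 1.0 / (1.0 + b**2 * u**2)) ** 2) for b in b_grid]
b_hat = b_grid[int(np.argmin(sse))]
```

The fit goes through without ever writing down the PDF, mirroring the contribution's point that the characteristic function remains tractable when a closed-form PDF is not.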

  9. Characteristics of ion distribution functions in dipolarizing flux bundles: Event studies

    NASA Astrophysics Data System (ADS)

    Runov, A.; Angelopoulos, V.; Artemyev, A.; Birn, J.; Pritchett, P. L.; Zhou, X.-Z.

    2017-06-01

    Taking advantage of multipoint observations from a repeating configuration of the five Time History of Events and Macroscale Interactions during Substorms (THEMIS) probes separated by 1 to 2 Earth radii (RE) along X, Y, and Z in the geocentric solar magnetospheric system (GSM), we study ion distribution functions collected by the probes during three dipolarizing flux bundle (DFB) events observed at geocentric distances 9 < R < 14 RE. By comparing these probes' observations, we characterize changes in the ion distribution functions with respect to probe separation along the X and Y GSM directions and |Bx| levels, which characterize the distance from the neutral sheet. We found that the characteristics of the ion distribution functions strongly depended on the |Bx| level, whereas changes with respect to X and Y were minor. In all three events, ion distribution functions f(v) observed inside DFBs were organized by magnetic and electric fields. The probes near the magnetic equator observed perpendicular anisotropy of the phase space density in the range between thermal energy and twice the thermal energy, although the distribution in the ambient plasma sheet was isotropic. The anisotropic ion distribution in DFBs injected toward the inner magnetosphere may provide the free energy for waves and instabilities, which are important elements of particle energization.

  10. Grid Integrated Distributed PV (GridPV) Version 2.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reno, Matthew J.; Coogan, Kyle

    2014-12-01

    This manual provides the documentation of the MATLAB toolbox of functions for using OpenDSS to simulate the impact of solar energy on the distribution system. The majority of the functions are useful for interfacing OpenDSS and MATLAB, and they are of generic use for commanding OpenDSS from MATLAB and retrieving information from simulations. A set of functions is also included for modeling PV plant output and setting up the PV plant in the OpenDSS simulation. The toolbox contains functions for modeling the OpenDSS distribution feeder on satellite images with GPS coordinates. Finally, example simulation functions are included to show potential uses of the toolbox functions. Each function in the toolbox is documented with the function use syntax, full description, function input list, function output list, example use, and example output.

  11. The VANDELS ESO public spectroscopic survey

    NASA Astrophysics Data System (ADS)

    McLure, R. J.; Pentericci, L.; Cimatti, A.; Dunlop, J. S.; Elbaz, D.; Fontana, A.; Nandra, K.; Amorin, R.; Bolzonella, M.; Bongiorno, A.; Carnall, A. C.; Castellano, M.; Cirasuolo, M.; Cucciati, O.; Cullen, F.; De Barros, S.; Finkelstein, S. L.; Fontanot, F.; Franzetti, P.; Fumana, M.; Gargiulo, A.; Garilli, B.; Guaita, L.; Hartley, W. G.; Iovino, A.; Jarvis, M. J.; Juneau, S.; Karman, W.; Maccagni, D.; Marchi, F.; Mármol-Queraltó, E.; Pompei, E.; Pozzetti, L.; Scodeggio, M.; Sommariva, V.; Talia, M.; Almaini, O.; Balestra, I.; Bardelli, S.; Bell, E. F.; Bourne, N.; Bowler, R. A. A.; Brusa, M.; Buitrago, F.; Caputi, K. I.; Cassata, P.; Charlot, S.; Citro, A.; Cresci, G.; Cristiani, S.; Curtis-Lake, E.; Dickinson, M.; Fazio, G. G.; Ferguson, H. C.; Fiore, F.; Franco, M.; Fynbo, J. P. U.; Galametz, A.; Georgakakis, A.; Giavalisco, M.; Grazian, A.; Hathi, N. P.; Jung, I.; Kim, S.; Koekemoer, A. M.; Khusanova, Y.; Fèvre, O. Le; Lotz, J. M.; Mannucci, F.; Maltby, D. T.; Matsuoka, K.; McLeod, D. J.; Mendez-Hernandez, H.; Mendez-Abreu, J.; Mignoli, M.; Moresco, M.; Mortlock, A.; Nonino, M.; Pannella, M.; Papovich, C.; Popesso, P.; Rosario, D. P.; Salvato, M.; Santini, P.; Schaerer, D.; Schreiber, C.; Stark, D. P.; Tasca, L. A. M.; Thomas, R.; Treu, T.; Vanzella, E.; Wild, V.; Williams, C. C.; Zamorani, G.; Zucca, E.

    2018-05-01

    VANDELS is a uniquely-deep spectroscopic survey of high-redshift galaxies with the VIMOS spectrograph on ESO's Very Large Telescope (VLT). The survey has obtained ultra-deep optical (0.48 < λ < 1.0 μm) spectroscopy of ≃2100 galaxies within the redshift interval 1.0 ≤ z ≤ 7.0, over a total area of ≃ 0.2 deg2 centred on the CANDELS UDS and CDFS fields. Based on accurate photometric redshift pre-selection, 85% of the galaxies targeted by VANDELS were selected to be at z ≥ 3. Exploiting the red sensitivity of the refurbished VIMOS spectrograph, the fundamental aim of the survey is to provide the high signal-to-noise ratio spectra necessary to measure key physical properties such as stellar population ages, masses, metallicities and outflow velocities from detailed absorption-line studies. Using integration times calculated to produce an approximately constant signal-to-noise ratio (20 < tint < 80 hours), the VANDELS survey targeted: a) bright star-forming galaxies at 2.4 ≤ z ≤ 5.5, b) massive quiescent galaxies at 1.0 ≤ z ≤ 2.5, c) fainter star-forming galaxies at 3.0 ≤ z ≤ 7.0 and d) X-ray/Spitzer-selected active galactic nuclei and Herschel-detected galaxies. By targeting two extragalactic survey fields with superb multi-wavelength imaging data, VANDELS will produce a unique legacy data set for exploring the physics underpinning high-redshift galaxy evolution. In this paper we provide an overview of the VANDELS survey designed to support the science exploitation of the first ESO public data release, focusing on the scientific motivation, survey design and target selection.

  12. High expression in leaves of the zinc hyperaccumulator Arabidopsis halleri of AhMHX, a homolog of an Arabidopsis thaliana vacuolar metal/proton exchanger.

    PubMed

    Elbaz, Benayahu; Shoshani-Knaani, Noa; David-Assael, Ora; Mizrachy-Dagri, Talya; Mizrahi, Keren; Saul, Helen; Brook, Emil; Berezin, Irina; Shaul, Orit

    2006-06-01

    Zn hyperaccumulator plants sequester Zn into their shoot vacuoles. To date, the only transporters implicated in Zn sequestration into the vacuoles of hyperaccumulator plants are cation diffusion facilitators (CDFs). We investigated the expression in Arabidopsis halleri of a homolog of AtMHX, an A. thaliana tonoplast transporter that exchanges protons with Mg, Zn and Fe ions. A. halleri has a single copy of a homologous gene, encoding a protein that shares 98% sequence identity with AtMHX. Western blot analysis with vacuolar-enriched membrane fractions suggests localization of AhMHX in the tonoplast. The levels of MHX proteins are much higher in leaves of A. halleri than in leaves of the non-accumulator plant A. thaliana. At the same time, the levels of MHX transcripts are similar in leaves of the two species. This suggests that the difference in MHX levels is regulated at the post-transcriptional level. In vitro translation studies indicated that the difference between AhMHX and AtMHX expression is not likely to result from the variations in the sequence of their 5' untranslated regions (5'UTRs). The high expression of AhMHX in A. halleri leaves is constitutive and not significantly affected by the metal status of the plants. In both species, MHX transcript levels are higher in leaves than in roots, but the difference is higher in A. halleri. Metal sequestration into root vacuoles was suggested to inhibit hyperaccumulation in the shoot. Our data implicate AhMHX as a candidate gene in metal accumulation or tolerance in A. halleri.

  13. VizieR Online Data Catalog: VANDELS High-Redshift Galaxy Evolution (McLure+, 2017)

    NASA Astrophysics Data System (ADS)

    McLure, R.; Pentericci, L.; Vandels Team

    2017-11-01

    This is the first data release (DR1) of the VANDELS survey, an ESO public spectroscopy survey targeting the high-redshift Universe. The VANDELS survey uses the VIMOS spectrograph on ESO's VLT to obtain ultra-deep, medium resolution, optical spectra of galaxies within the UKIDSS Ultra Deep Survey (UDS) and Chandra Deep Field South (CDFS) survey fields (0.2 sq. degree total area). Using robust photometric redshift pre-selection, VANDELS is targeting ~2100 galaxies in the redshift interval 1.0 < z < 7.0, the vast majority of which were selected to be at z >= 3. In addition, VANDELS is targeting a substantial number of passive galaxies in the redshift interval 1.0 < z < 2.5.

  14. The VANDELS ESO spectroscopic survey

    NASA Astrophysics Data System (ADS)

    McLure, R. J.; Pentericci, L.; Cimatti, A.; Dunlop, J. S.; Elbaz, D.; Fontana, A.; Nandra, K.; Amorin, R.; Bolzonella, M.; Bongiorno, A.; Carnall, A. C.; Castellano, M.; Cirasuolo, M.; Cucciati, O.; Cullen, F.; De Barros, S.; Finkelstein, S. L.; Fontanot, F.; Franzetti, P.; Fumana, M.; Gargiulo, A.; Garilli, B.; Guaita, L.; Hartley, W. G.; Iovino, A.; Jarvis, M. J.; Juneau, S.; Karman, W.; Maccagni, D.; Marchi, F.; Mármol-Queraltó, E.; Pompei, E.; Pozzetti, L.; Scodeggio, M.; Sommariva, V.; Talia, M.; Almaini, O.; Balestra, I.; Bardelli, S.; Bell, E. F.; Bourne, N.; Bowler, R. A. A.; Brusa, M.; Buitrago, F.; Caputi, K. I.; Cassata, P.; Charlot, S.; Citro, A.; Cresci, G.; Cristiani, S.; Curtis-Lake, E.; Dickinson, M.; Fazio, G. G.; Ferguson, H. C.; Fiore, F.; Franco, M.; Fynbo, J. P. U.; Galametz, A.; Georgakakis, A.; Giavalisco, M.; Grazian, A.; Hathi, N. P.; Jung, I.; Kim, S.; Koekemoer, A. M.; Khusanova, Y.; Le Fèvre, O.; Lotz, J. M.; Mannucci, F.; Maltby, D. T.; Matsuoka, K.; McLeod, D. J.; Mendez-Hernandez, H.; Mendez-Abreu, J.; Mignoli, M.; Moresco, M.; Mortlock, A.; Nonino, M.; Pannella, M.; Papovich, C.; Popesso, P.; Rosario, D. P.; Salvato, M.; Santini, P.; Schaerer, D.; Schreiber, C.; Stark, D. P.; Tasca, L. A. M.; Thomas, R.; Treu, T.; Vanzella, E.; Wild, V.; Williams, C. C.; Zamorani, G.; Zucca, E.

    2018-05-01

    VANDELS is a uniquely-deep spectroscopic survey of high-redshift galaxies with the VIMOS spectrograph on ESO's Very Large Telescope (VLT). The survey has obtained ultra-deep optical (0.48 < λ < 1.0 μm) spectroscopy of ≃2100 galaxies within the redshift interval 1.0 ≤ z ≤ 7.0, over a total area of ≃ 0.2 deg2 centred on the CANDELS UDS and CDFS fields. Based on accurate photometric redshift pre-selection, 85% of the galaxies targeted by VANDELS were selected to be at z ≥ 3. Exploiting the red sensitivity of the refurbished VIMOS spectrograph, the fundamental aim of the survey is to provide the high signal-to-noise ratio spectra necessary to measure key physical properties such as stellar population ages, masses, metallicities and outflow velocities from detailed absorption-line studies. Using integration times calculated to produce an approximately constant signal-to-noise ratio (20 < tint < 80 hours), the VANDELS survey targeted: a) bright star-forming galaxies at 2.4 ≤ z ≤ 5.5, b) massive quiescent galaxies at 1.0 ≤ z ≤ 2.5, c) fainter star-forming galaxies at 3.0 ≤ z ≤ 7.0 and d) X-ray/Spitzer-selected active galactic nuclei and Herschel-detected galaxies. By targeting two extragalactic survey fields with superb multi-wavelength imaging data, VANDELS will produce a unique legacy data set for exploring the physics underpinning high-redshift galaxy evolution. In this paper we provide an overview of the VANDELS survey designed to support the science exploitation of the first ESO public data release, focusing on the scientific motivation, survey design and target selection.

  15. Parton distribution functions from reduced Ioffe-time distributions

    NASA Astrophysics Data System (ADS)

    Zhang, Jian-Hui; Chen, Jiunn-Wei; Monahan, Christopher

    2018-04-01

    We show that the correct way to extract parton distribution functions from the reduced Ioffe-time distribution, a ratio of the Ioffe-time distribution for a moving hadron and a hadron at rest, is through a factorization formula. This factorization exists because, at small distances, forming the ratio does not change the infrared behavior of the numerator, which is factorizable. We illustrate the effect of such a factorization by applying it to results in the literature.

  16. Properties of two-mode squeezed number states

    NASA Technical Reports Server (NTRS)

    Chizhov, Alexei V.; Murzakhmetov, B. K.

    1994-01-01

    Photon statistics and phase properties of two-mode squeezed number states are studied. It is shown that the photon number distribution and the Pegg-Barnett phase distribution for such states have a similar (N + 1)-peak structure for a nonzero value of the difference in the number of photons between the modes. Exact analytical formulas for phase distributions based on different phase approaches are derived. The Pegg-Barnett phase distribution and the phase quasiprobability distribution associated with the Wigner function are close to each other, while the phase quasiprobability distribution associated with the Q function carries less phase information.

  17. Univariate Probability Distributions

    ERIC Educational Resources Information Center

    Leemis, Lawrence M.; Luckett, Daniel J.; Powell, Austin G.; Vermeer, Peter E.

    2012-01-01

    We describe a web-based interactive graphic that can be used as a resource in introductory classes in mathematical statistics. This interactive graphic presents 76 common univariate distributions and gives details on (a) various features of the distribution such as the functional form of the probability density function and cumulative distribution…

  18. Thermodynamics and statistical mechanics. [thermodynamic properties of gases

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The basic thermodynamic properties of gases are reviewed and the relations between them are derived from the first and second laws. The elements of statistical mechanics are then formulated and the partition function is derived. The classical form of the partition function is used to obtain the Maxwell-Boltzmann distribution of kinetic energies in the gas phase and the equipartition of energy theorem is given in its most general form. The thermodynamic properties are all derived as functions of the partition function. Quantum statistics are reviewed briefly and the differences between the Boltzmann distribution function for classical particles and the Fermi-Dirac and Bose-Einstein distributions for quantum particles are discussed.
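    As a small numerical check of the results summarized above, the sketch below works in reduced units (m = kT = 1, an assumption made for the example) and integrates the Maxwell-Boltzmann speed distribution to confirm its normalization and the equipartition value ⟨E⟩ = (3/2)kT:

```python
import numpy as np
from scipy.integrate import quad

# Maxwell-Boltzmann speed distribution in reduced units (m = kT = 1):
# f(v) = sqrt(2/pi) * v^2 * exp(-v^2 / 2)
def f(v):
    return np.sqrt(2 / np.pi) * v**2 * np.exp(-v**2 / 2)

norm, _ = quad(f, 0, np.inf)                              # should integrate to 1
mean_E, _ = quad(lambda v: 0.5 * v**2 * f(v), 0, np.inf)  # <E> = <v^2>/2
print(norm, mean_E)  # 1.0 and 1.5, i.e. (3/2)kT in these units
```

    The 3/2 arises from the three translational degrees of freedom contributing kT/2 each, as the equipartition theorem states.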

  19. Ramsey Interference in One-Dimensional Systems: The Full Distribution Function of Fringe Contrast as a Probe of Many-Body Dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kitagawa, Takuya; Pielawa, Susanne; Demler, Eugene

    2010-06-25

    We theoretically analyze Ramsey interference experiments in one-dimensional quasicondensates and obtain explicit expressions for the time evolution of full distribution functions of fringe contrast. We show that the distribution functions contain unique signatures of the many-body mechanism of decoherence. We argue that Ramsey interference experiments provide a powerful tool for analyzing the strongly correlated nature of 1D interacting systems.

  20. Real-time generation of the Wigner distribution of complex functions using phase conjugation in photorefractive materials.

    PubMed

    Sun, P C; Fainman, Y

    1990-09-01

    An optical processor for real-time generation of the Wigner distribution of complex amplitude functions is introduced. The phase conjugation of the input signal is accomplished by a highly efficient self-pumped phase conjugator based on a 45°-cut barium titanate photorefractive crystal. Experimental results on the real-time generation of Wigner distribution slices for complex amplitude two-dimensional optical functions are presented and discussed.

  1. Application of Image Analysis for Characterization of Spatial Arrangements of Features in Microstructure

    NASA Technical Reports Server (NTRS)

    Louis, Pascal; Gokhale, Arun M.

    1995-01-01

    A number of microstructural processes are sensitive to the spatial arrangements of features in microstructure. However, very little attention has been given in the past to experimental measurements of the descriptors of microstructural distance distributions, due to the lack of practically feasible methods. We present a digital image analysis procedure to estimate microstructural distance distributions. The application of the technique is demonstrated via estimation of the K function, the radial distribution function, and the nearest-neighbor distribution function of hollow spherical carbon particulates in a polymer matrix composite, observed in a metallographic section.

  2. Universal energy distribution for interfaces in a random-field environment

    NASA Astrophysics Data System (ADS)

    Fedorenko, Andrei A.; Stepanow, Semjon

    2003-11-01

    We study the energy distribution function ρ(E) for interfaces in a random-field environment at zero temperature by summing the leading terms in the perturbation expansion of ρ(E) in powers of the disorder strength, and by taking into account the nonperturbational effects of the disorder using the functional renormalization group. We have found that the average and the variance of the energy for a one-dimensional interface of length L behave as ⟨E⟩R ∝ L ln L and ΔER ∝ L, while the distribution function of the energy tends for large L to the Gumbel distribution of extreme value statistics.

  3. Determination of Distance Distribution Functions by Singlet-Singlet Energy Transfer

    PubMed Central

    Cantor, Charles R.; Pechukas, Philip

    1971-01-01

    The efficiency of energy transfer between two chromophores can be used to define an apparent donor-acceptor distance, which in flexible systems will depend on the R0 of the chromophores. If efficiency is measured as a function of R0, it will be possible to determine the actual distribution function of donor-acceptor distances. Numerical procedures are described for extracting this information from experimental data. They should be most useful for distribution functions with mean values from 20-30 Å (2-3 nm). This technique should provide considerably more detailed information on end-to-end distributions of oligomers than has hitherto been available. It should also be useful for describing, in detail, conformational flexibility in other large molecules. PMID:16591942

  4. The distribution of genome shared identical by descent for a pair of full sibs by means of the continuous time Markov chain

    NASA Astrophysics Data System (ADS)

    Julie, Hongki; Pasaribu, Udjianna S.; Pancoro, Adi

    2015-12-01

    This paper applies a continuous-time Markov chain to the genome shared identical by descent (IBD) by a pair of full sibs. The full-sibs model is a continuous-time Markov chain with three states. Within this model, we seek the cumulative distribution function of the number of subsegments having 2 IBD haplotypes within a chromosome segment of length t Morgans, and the cumulative distribution function of the number of subsegments having at least 1 IBD haplotype within such a segment. These cumulative distribution functions are derived by means of the moment generating function.

  5. Consistent role of Quaternary climate change in shaping current plant functional diversity patterns across European plant orders.

    PubMed

    Ordonez, Alejandro; Svenning, Jens-Christian

    2017-02-23

    Current and historical environmental conditions are known to determine jointly contemporary species distributions and richness patterns. However, whether historical dynamics in species distributions and richness translate to functional diversity patterns remains, for the most part, unknown. The geographic patterns of plant functional space size (richness) and packing (dispersion) for six widely distributed orders of European angiosperms were estimated using atlas distribution data and trait information. Then the relative importance of late-Quaternary glacial-interglacial climate change and contemporary environmental factors (climate, productivity, and topography) as determinants of the functional diversity of the evaluated orders was assessed. Functional diversity patterns of all evaluated orders exhibited prominent glacial-interglacial climate change imprints, complementing the influence of contemporary environmental conditions. The importance of Quaternary glacial-interglacial climate change factors was comparable to that of contemporary environmental factors across evaluated orders. Therefore, high long-term paleoclimate variability has imposed consistent supplementary constraints on functional diversity of multiple plant groups, a legacy that may permeate to ecosystem functioning and resilience. These findings suggest that strong near-future anthropogenic climate change may elicit long-term functional disequilibria in plant functional diversity.

  6. Lognormal Distribution of Cellular Uptake of Radioactivity: Statistical Analysis of α-Particle Track Autoradiography

    PubMed Central

    Neti, Prasad V.S.V.; Howell, Roger W.

    2010-01-01

    Recently, the distribution of radioactivity among a population of cells labeled with 210Po was shown to be well described by a log-normal (LN) distribution function (J Nucl Med. 2006;47:1049–1058) with the aid of autoradiography. To ascertain the influence of Poisson statistics on the interpretation of the autoradiographic data, the present work reports on a detailed statistical analysis of these earlier data. Methods The measured distributions of α-particle tracks per cell were subjected to statistical tests with Poisson, LN, and Poisson-lognormal (P-LN) models. Results The LN distribution function best describes the distribution of radioactivity among cell populations exposed to 0.52 and 3.8 kBq/mL of 210Po-citrate. When cells were exposed to 67 kBq/mL, the P-LN distribution function gave a better fit; however, the underlying activity distribution remained log-normal. Conclusion The present analysis generally provides further support for the use of LN distributions to describe the cellular uptake of radioactivity. Care should be exercised when analyzing autoradiographic data on activity distributions to ensure that Poisson processes do not distort the underlying LN distribution. PMID:18483086

  7. Log Normal Distribution of Cellular Uptake of Radioactivity: Statistical Analysis of Alpha Particle Track Autoradiography

    PubMed Central

    Neti, Prasad V.S.V.; Howell, Roger W.

    2008-01-01

    Recently, the distribution of radioactivity among a population of cells labeled with 210Po was shown to be well described by a log normal distribution function (J Nucl Med 47, 6 (2006) 1049-1058) with the aid of an autoradiographic approach. To ascertain the influence of Poisson statistics on the interpretation of the autoradiographic data, the present work reports on a detailed statistical analyses of these data. Methods The measured distributions of alpha particle tracks per cell were subjected to statistical tests with Poisson (P), log normal (LN), and Poisson-log normal (P-LN) models. Results The LN distribution function best describes the distribution of radioactivity among cell populations exposed to 0.52 and 3.8 kBq/mL 210Po-citrate. When cells were exposed to 67 kBq/mL, the P-LN distribution function gave a better fit; however, the underlying activity distribution remained log normal. Conclusions The present analysis generally provides further support for the use of LN distributions to describe the cellular uptake of radioactivity. Care should be exercised when analyzing autoradiographic data on activity distributions to ensure that Poisson processes do not distort the underlying LN distribution. PMID:16741316
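    The interaction between Poisson counting statistics and an underlying lognormal activity distribution, which both of these records analyze, can be illustrated with a short simulation (arbitrary parameters, not the papers' data). For counts N ~ Poisson(A) with lognormal A, the law of total variance gives Var[N] = E[A] + Var[A], so track counts are overdispersed relative to a pure Poisson process:

```python
import numpy as np

rng = np.random.default_rng(7)
n_cells = 100_000

# Lognormal per-cell activity (arbitrary units), then Poisson track counting
activity = rng.lognormal(mean=1.0, sigma=0.7, size=n_cells)
tracks = rng.poisson(activity)

# Pure Poisson counting would give variance == mean; the lognormal layer
# adds Var[A] of overdispersion on top of E[A]
m, v = tracks.mean(), tracks.var()
print(m, v)  # variance well above the mean
```

    This overdispersion is the signature one tests for when deciding between the P, LN, and P-LN models discussed in the records.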

  8. Kaon quark distribution functions in the chiral constituent quark model

    NASA Astrophysics Data System (ADS)

    Watanabe, Akira; Sawada, Takahiro; Kao, Chung Wen

    2018-04-01

    We investigate the valence u and s̄ quark distribution functions of the K+ meson, vK(u)(x, Q2) and vK(s̄)(x, Q2), in the framework of the chiral constituent quark model. We judiciously choose the bare distributions at the initial scale to generate the dressed distributions at the higher scale, considering the meson cloud effects and the QCD evolution, which agree with the phenomenologically satisfactory valence quark distribution of the pion and the experimental data of the ratio vK(u)(x, Q2)/vπ(u)(x, Q2). We show how the meson cloud effects affect the bare distribution functions in detail. We find a smaller SU(3) flavor symmetry breaking effect compared with the results of preceding studies based on other approaches.

  9. Energy and enthalpy distribution functions for a few physical systems.

    PubMed

    Wu, K L; Wei, J H; Lai, S K; Okabe, Y

    2007-08-02

    The present work is devoted to extracting the energy or enthalpy distribution function of a physical system from the moments of the distribution using the maximum entropy method. This distribution theory has the salient traits that it utilizes only the experimental thermodynamic data. The calculated distribution functions provide invaluable insight into the state or phase behavior of the physical systems under study. As concrete evidence, we demonstrate the elegance of the distribution theory by studying first a test case of a two-dimensional six-state Potts model for which simulation results are available for comparison, then the biphasic behavior of the binary alloy Na-K whose excess heat capacity, experimentally observed to fall in a narrow temperature range, has yet to be clarified theoretically, and finally, the thermally induced state behavior of a collection of 16 proteins.

  10. Electron and ion distribution functions in magnetopause reconnection

    NASA Astrophysics Data System (ADS)

    Wang, S.; Chen, L. J.; Bessho, N.; Hesse, M.; Kistler, L. M.; Torbert, R. B.; Mouikis, C.; Pollock, C. J.

    2015-12-01

    We investigate electron and ion velocity distribution functions in dayside magnetopause reconnection events observed by the Cluster and MMS spacecraft. The goal is to build a spatial map of electron and ion distribution features to enable the indication of the spacecraft location in the reconnection structure, and to understand plasma energization processes. Distribution functions, together with electromagnetic field structures, plasma densities, and bulk velocities, are organized and compared with particle-in-cell simulation results to indicate the proximities to the reconnection X-line. Anisotropic features in the distributions of magnetospheric- and magnetosheath-origin electrons at different locations in the reconnection inflow and exhaust are identified. In particular, parallel electron heating is observed in both the magnetosheath and magnetosphere inflow regions. Possible effects of the guide field strength, waves, and upstream density and temperature asymmetries on the distribution features will be discussed.

  11. Analysis of generalized negative binomial distributions attached to hyperbolic Landau levels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chhaiba, Hassan, E-mail: chhaiba.hassan@gmail.com; Demni, Nizar, E-mail: nizar.demni@univ-rennes1.fr; Mouayn, Zouhair, E-mail: mouayn@fstbm.ac.ma

    2016-07-15

    To each hyperbolic Landau level of the Poincaré disc is attached a generalized negative binomial distribution. In this paper, we compute the moment generating function of this distribution and supply its atomic decomposition as a perturbation of the negative binomial distribution by a finitely supported measure. Using the Mandel parameter, we also discuss the nonclassical nature of the associated coherent states. Next, we derive a Lévy-Khintchine-type representation of its characteristic function when the latter does not vanish and deduce that it is quasi-infinitely divisible except for the lowest hyperbolic Landau level corresponding to the negative binomial distribution. By considering the total variation of the obtained quasi-Lévy measure, we introduce a new infinitely divisible distribution for which we derive the characteristic function.
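    As a minimal illustration of extracting moments from a moment generating function, the sketch below treats the ordinary negative binomial, which the record identifies with the lowest hyperbolic Landau level; the parameters are arbitrary and the code is not the paper's construction:

```python
import numpy as np
from scipy import stats

r, p = 3, 0.4
nb = stats.nbinom(r, p)  # number of failures before r successes

def mgf(t):
    # MGF of the negative binomial, valid for t < -log(1 - p)
    return (p / (1 - (1 - p) * np.exp(t))) ** r

# A central difference of the MGF at t = 0 reproduces the mean r(1-p)/p = 4.5
h = 1e-6
mean_from_mgf = (mgf(h) - mgf(-h)) / (2 * h)
print(mean_from_mgf, nb.mean())
```

    Higher moments follow the same way from higher derivatives of the MGF at zero, which is the route the paper takes for its generalized distribution.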

  12. On the Kernel function of the integral equation relating lift and downwash distributions of oscillating wings in supersonic flow

    NASA Technical Reports Server (NTRS)

    Watkins, Charles E; Berman, Julian H

    1956-01-01

    This report treats the Kernel function of the integral equation that relates a known or prescribed downwash distribution to an unknown lift distribution for harmonically oscillating wings in supersonic flow. The treatment is essentially an extension to supersonic flow of the treatment given in NACA report 1234 for subsonic flow. For the supersonic case the Kernel function is derived by use of a suitable form of acoustic doublet potential which employs a cutoff or Heaviside unit function. The Kernel functions are reduced to forms that can be accurately evaluated by considering the functions in two parts: a part in which the singularities are isolated and analytically expressed, and a nonsingular part which can be tabulated.

  13. 7 CFR 1717.851 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... distribution system means any system of community infrastructure whose primary function is the distribution of... communication system means any system of community infrastructure whose primary function is the provision of... primary function is the supplying of water and/or the collection and treatment of waste water and whose...

  14. 7 CFR 1717.851 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... distribution system means any system of community infrastructure whose primary function is the distribution of... communication system means any system of community infrastructure whose primary function is the provision of... primary function is the supplying of water and/or the collection and treatment of waste water and whose...

  15. 7 CFR 1717.851 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... distribution system means any system of community infrastructure whose primary function is the distribution of... communication system means any system of community infrastructure whose primary function is the provision of... primary function is the supplying of water and/or the collection and treatment of waste water and whose...

  16. 7 CFR 1717.851 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... distribution system means any system of community infrastructure whose primary function is the distribution of... communication system means any system of community infrastructure whose primary function is the provision of... primary function is the supplying of water and/or the collection and treatment of waste water and whose...

  17. 7 CFR 1717.851 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... distribution system means any system of community infrastructure whose primary function is the distribution of... communication system means any system of community infrastructure whose primary function is the provision of... primary function is the supplying of water and/or the collection and treatment of waste water and whose...

  18. Experimental verification of the shape of the excitation depth distribution function for AES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tougaard, S.; Jablonski, A.; Institute of Physical Chemistry, Polish Academy of Sciences, ul. Kasprzaka 44/52, 01-224 Warsaw

    2011-09-15

    In the common formalism of AES, it is assumed that the in-depth distribution of ionizations is uniform. There are experimental indications that this assumption may not be true for certain primary electron energies and solids. The term "excitation depth distribution function" (EXDDF) has been introduced to describe the distribution of ionizations at energies used in AES. This function is conceptually equivalent to the Phi-rho-z function of electron microprobe analysis (EPMA). There are, however, experimental difficulties in determining this function, in particular for energies below ≈10 keV. In the present paper, we investigate the possibility of determining the shape of the EXDDF from the background of inelastically scattered electrons on the low-energy side of the Auger electron features in the electron energy spectra. The experimentally determined EXDDFs are compared with EXDDFs determined from Monte Carlo simulations of electron trajectories in solids. It is found that this technique is useful for the experimental determination of the EXDDF.

  19. Detecting background changes in environments with dynamic foreground by separating probability distribution function mixtures using Pearson's method of moments

    NASA Astrophysics Data System (ADS)

    Jenkins, Colleen; Jordan, Jay; Carlson, Jeff

    2007-02-01

    This paper presents parameter estimation techniques useful for detecting background changes in a video sequence with extreme foreground activity. A specific application of interest is automated detection of the covert placement of threats (e.g., a briefcase bomb) inside crowded public facilities. We propose that a histogram of pixel intensity acquired from a fixed mounted camera over time for a series of images will be a mixture of two Gaussian functions: the foreground probability distribution function and background probability distribution function. We will use Pearson's Method of Moments to separate the two probability distribution functions. The background function can then be "remembered" and changes in the background can be detected. Subsequent comparisons of background estimates are used to detect changes. Changes are flagged to alert security forces to the presence and location of potential threats. Results are presented that indicate the significant potential for robust parameter estimation techniques as applied to video surveillance.
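The linear moment relations that Pearson's method of moments inverts can be sketched as follows; the two Gaussian components and the mixing weight are illustrative values, not parameters from the paper:

```python
import math

# Hypothetical background/foreground intensity components (not from the paper):
# background N(mu_b, sd_b), foreground N(mu_f, sd_f), background weight w.
w, mu_b, sd_b, mu_f, sd_f = 0.7, 100.0, 10.0, 160.0, 20.0

def gaussian_raw_moments(mu, sd):
    """First four raw moments of N(mu, sd)."""
    m1 = mu
    m2 = mu**2 + sd**2
    m3 = mu**3 + 3*mu*sd**2
    m4 = mu**4 + 6*mu**2*sd**2 + 3*sd**4
    return m1, m2, m3, m4

def mixture_raw_moments(w, comp_a, comp_b):
    """Raw moments of a two-component mixture are weight-averaged component
    moments -- this linear system is what the method of moments inverts to
    recover the component parameters from sample moments."""
    ma = gaussian_raw_moments(*comp_a)
    mb = gaussian_raw_moments(*comp_b)
    return tuple(w*x + (1.0 - w)*y for x, y in zip(ma, mb))

m1, m2, m3, m4 = mixture_raw_moments(w, (mu_b, sd_b), (mu_f, sd_f))
print(round(m1, 6), round(m2, 6))  # mixture mean and second raw moment
```

Matching these analytic expressions against the sample moments of the pixel-intensity histogram yields the estimating equations for the background component.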

  20. Automatic transfer function design for medical visualization using visibility distributions and projective color mapping.

    PubMed

    Cai, Lile; Tay, Wei-Liang; Nguyen, Binh P; Chui, Chee-Kong; Ong, Sim-Heng

    2013-01-01

    Transfer functions play a key role in volume rendering of medical data, but transfer function manipulation is unintuitive and can be time-consuming; achieving an optimal visualization of patient anatomy or pathology is difficult. To overcome this problem, we present a system for automatic transfer function design based on visibility distribution and projective color mapping. Instead of assigning opacity directly based on voxel intensity and gradient magnitude, the opacity transfer function is automatically derived by matching the observed visibility distribution to a target visibility distribution. An automatic color assignment scheme based on projective mapping is proposed to assign colors that allow for the visual discrimination of different structures, while also reflecting the degree of similarity between them. When our method was tested on several medical volumetric datasets, the key structures within the volume were clearly visualized with minimal user intervention. Copyright © 2013 Elsevier Ltd. All rights reserved.

  1. Derivation of Hunt equation for suspension distribution using Shannon entropy theory

    NASA Astrophysics Data System (ADS)

    Kundu, Snehasis

    2017-12-01

In this study, the Hunt equation for computing suspension concentration in sediment-laden flows is derived using Shannon entropy theory. Considering the inverse of the void ratio as a random variable and using the principle of maximum entropy, the probability density function and cumulative distribution function of suspension concentration are derived. A new and more general cumulative distribution function for the flow domain is proposed, which includes several other specific CDF models reported in the literature. This general form of the cumulative distribution function also helps to derive the Rouse equation. The entropy-based approach helps to estimate model parameters using measured sediment-concentration data, which shows the advantage of using entropy theory. Finally, the model parameters in the entropy-based model are also expressed as functions of the Rouse number to establish a link between the parameters of the deterministic and probabilistic approaches.

  2. Bernstein-Greene-Kruskal theory of electron holes in superthermal space plasma

    NASA Astrophysics Data System (ADS)

    Aravindakshan, Harikrishnan; Kakad, Amar; Kakad, Bharati

    2018-05-01

Several spacecraft missions have observed electron holes (EHs) in Earth's and other planetary magnetospheres. These EHs are modeled with the stationary solutions of the Vlasov-Poisson equations, obtained by adopting the Bernstein-Greene-Kruskal (BGK) approach. Through a literature survey, we find that BGK EHs are modeled using either a thermal distribution function or a statistical distribution derived from particular spacecraft observations. However, Maxwellian distributions are quite rare in space plasmas; instead, most of these plasmas are superthermal in nature and generally described by a kappa distribution. We have developed a one-dimensional BGK model of EHs for space plasma that follows a superthermal kappa distribution. The analytical solution of the trapped electron distribution function for such plasmas is derived. The trapped-particle distribution function in a plasma following a kappa distribution is found to be steeper and denser than that for a Maxwellian distribution. The width-amplitude relation of the perturbation for superthermal plasma is derived, and the allowed regions of stable BGK solutions are obtained. We find that stable BGK solutions are better supported by superthermal plasmas than by thermal plasmas for small-amplitude perturbations.
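A minimal numerical sketch of the 1-D kappa distribution discussed above (unit thermal speed and κ = 3 are illustrative choices, not values from the paper), checking its normalization and its heavier-than-Maxwellian suprathermal tail:

```python
import math

THETA, KAPPA = 1.0, 3.0  # illustrative thermal speed and kappa index
NORM = math.gamma(KAPPA) / (math.sqrt(math.pi * KAPPA) * THETA
                            * math.gamma(KAPPA - 0.5))

def kappa_pdf(v):
    """1-D kappa (generalized Lorentzian) distribution, exponent -kappa
    convention; tends to a Maxwellian as kappa -> infinity."""
    return NORM * (1.0 + v * v / (KAPPA * THETA * THETA)) ** (-KAPPA)

def maxwellian_pdf(v):
    return math.exp(-(v / THETA) ** 2) / (math.sqrt(math.pi) * THETA)

# trapezoid-rule check that the density integrates to 1
n, lo, hi = 200001, -100.0, 100.0
h = (hi - lo) / (n - 1)
total = (sum(kappa_pdf(lo + i * h) for i in range(n))
         - 0.5 * (kappa_pdf(lo) + kappa_pdf(hi))) * h
print(abs(total - 1.0) < 1e-4)               # normalized to 1
print(kappa_pdf(5.0) > maxwellian_pdf(5.0))  # heavier suprathermal tail
```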

  3. Reconfiguration in Robust Distributed Real-Time Systems Based on Global Checkpoints

    DTIC Science & Technology

    1991-12-01

achieved by utilizing distributed systems in which a single application program executes on multiple processors, connected to a network. The distributed nature of such systems makes it possible to ...resident at every node. However, the responsibility for execution of a particular function is assigned to only one node in this framework. This function

  4. Reliable Function Approximation and Estimation

    DTIC Science & Technology

    2016-08-16

AFRL-AFOSR-VA-TR-2016-0293. Reliable Function Approximation and Estimation. Rachel Ward, University of Texas at Austin, 101 East 27th Street Ste 4308, Austin, TX 78712. Final Report, 08/16/2016. DISTRIBUTION A: Distribution approved for public release. Air Force Research Laboratory, AF Office of Scientific Research.

  5. About normal distribution on SO(3) group in texture analysis

    NASA Astrophysics Data System (ADS)

    Savyolova, T. I.; Filatov, S. V.

    2017-12-01

This article studies and compares different normal distributions (NDs) on the SO(3) group, which are used in texture analysis: the Fisher normal distribution (FND), the Bunge normal distribution (BND), the central normal distribution (CND), and the wrapped normal distribution (WND). All of these NDs are central functions on the SO(3) group. The CND is a subcase of the normal CLT-motivated distributions on SO(3) (CLT here is Parthasarathy's central limit theorem). The WND is motivated by the CLT in R^3 and mapped to the SO(3) group. A Monte Carlo method for modeling normally distributed values is studied for both the CND and the WND. All of the NDs mentioned above are used for modeling different components of the crystallite orientation distribution function in texture analysis.

  6. PIC simulations of a three component plasma described by Kappa distribution functions as observed in Saturn's magnetosphere

    NASA Astrophysics Data System (ADS)

    Barbosa, Marcos; Alves, Maria Virginia; Simões Junior, Fernando

    2016-04-01

In plasmas out of thermodynamic equilibrium the particle velocity distribution can be described by the so-called Kappa distribution. These velocity distribution functions are a generalization of the Maxwellian distribution. Since 1960, Kappa velocity distributions have been observed in several regions of interplanetary space and astrophysical plasmas. Using the KEMPO1 particle simulation code, modified to introduce Kappa distribution functions as initial conditions for particle velocities, the normal modes of propagation were analyzed in a plasma containing two species of electrons with different temperatures and densities, and ions as a third species. This type of plasma is usually found in magnetospheres such as Saturn's. Numerical solutions of the dispersion relation for such a plasma predict the presence of an electron-acoustic mode, besides the Langmuir and ion-acoustic modes. In the presence of an ambient magnetic field, the perpendicular propagation (Bernstein mode) also changes, as compared to a Maxwellian plasma, due to the Kappa distribution function. Here, results for simulations with and without an external magnetic field are presented. The parameters for the initial conditions in the simulations were obtained from Cassini spacecraft data. Simulation results are compared with numerical solutions of the dispersion relation obtained in the literature and are in good agreement.

  7. Derivation of a Multiparameter Gamma Model for Analyzing the Residence-Time Distribution Function for Nonideal Flow Systems as an Alternative to the Advection-Dispersion Equation

    DOE PAGES

    Embry, Irucka; Roland, Victor; Agbaje, Oluropo; ...

    2013-01-01

A new residence-time distribution (RTD) function has been developed and applied to quantitative dye studies as an alternative to the traditional advection-dispersion equation (AdDE). The new method is based on a jointly combined four-parameter gamma probability density function (PDF). The gamma residence-time distribution (RTD) function and its first and second moments are derived from the individual two-parameter gamma distributions of randomly distributed variables, tracer travel distance, and linear velocity, which are based on their relationship with time. The gamma RTD function was used on a steady-state, nonideal system modeled as a plug-flow reactor (PFR) in the laboratory to validate the effectiveness of the model. The normalized forms of the gamma RTD and the advection-dispersion equation RTD were compared with the normalized tracer RTD. The normalized gamma RTD had a lower mean-absolute deviation (MAD) (0.16) than the normalized form of the advection-dispersion equation (0.26) when compared to the normalized tracer RTD. The gamma RTD function is tied back to the actual physical site due to its randomly distributed variables. The results validate using the gamma RTD as a suitable alternative to the advection-dispersion equation for quantitative tracer studies of nonideal flow systems.
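The building block of this approach, a two-parameter gamma density used as an RTD, can be checked numerically; the shape and scale values below are illustrative, not fitted tracer parameters:

```python
import math

def gamma_pdf(t, shape, scale):
    """Two-parameter gamma density, a common RTD building block."""
    if t <= 0:
        return 0.0
    return (t ** (shape - 1.0) * math.exp(-t / scale)) / (math.gamma(shape)
                                                          * scale ** shape)

shape, scale = 2.5, 1.8   # illustrative values
n, hi = 200000, 200.0
h = hi / n

# rectangle-rule checks: the density integrates to ~1 and its first
# moment (mean residence time) is shape * scale
area = sum(gamma_pdf(i * h, shape, scale) for i in range(1, n + 1)) * h
mean = sum(i * h * gamma_pdf(i * h, shape, scale) for i in range(1, n + 1)) * h
print(abs(area - 1.0) < 1e-2, abs(mean - shape * scale) < 1e-2)
```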

  8. Determination of Anisotropic Ion Velocity Distribution Function in Intrinsic Gas Plasma. Theory.

    NASA Astrophysics Data System (ADS)

    Mustafaev, A.; Grabovskiy, A.; Murillo, O.; Soukhomlinov, V.

    2018-02-01

The first seven coefficients of the expansion of the energy and angular distribution functions in Legendre polynomials for Hg+ ions in Hg vapor plasma with the parameter E/P ≈ 400 V/(cm Torr) are measured for the first time using a planar one-sided probe. The analytic solution to the Boltzmann kinetic equation for ions in the plasma of their parent gas is obtained under conditions in which resonant charge exchange is the predominant process and ions acquire, over their mean free path, a velocity much higher than the characteristic thermal velocity of the atoms. The presence of an ambipolar field of arbitrary strength is taken into account. It is shown that the ion velocity distribution function is determined by two parameters and differs substantially from the Maxwellian distribution. Comparison of the calculated drift velocities of He+ ions in He, Ar+ in Ar, and Hg+ in Hg with the available experimental data shows good agreement. The calculated ion distribution function correctly describes the experimental data obtained from its measurement. Analysis of the results shows that, in spite of the strong field, the ion velocity distribution functions are isotropic for ion velocities lower than the average thermal velocity of the atoms. With increasing ion velocity, the distribution becomes more and more extended in the direction of the electric field.
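The Legendre-polynomial expansion underlying such measurements can be illustrated with a toy angular distribution (the coefficients below are invented, not the measured Hg+ ones); numerical projection onto the orthogonal polynomials recovers them:

```python
def legendre(l, mu):
    """P_l(mu) via the Bonnet recurrence."""
    if l == 0:
        return 1.0
    if l == 1:
        return mu
    p0, p1 = 1.0, mu
    for k in range(2, l + 1):
        p0, p1 = p1, ((2 * k - 1) * mu * p1 - (k - 1) * p0) / k
    return p1

def f(mu):
    """Toy, mildly forward-peaked angular distribution in mu = cos(theta)."""
    return 1.0 + 0.5 * mu + 0.2 * legendre(2, mu)

def coeff(l, n=20001):
    """a_l = (2l+1)/2 * integral_{-1}^{1} f(mu) P_l(mu) dmu (trapezoid)."""
    h = 2.0 / (n - 1)
    s = 0.0
    for i in range(n):
        mu = -1.0 + i * h
        w = 0.5 if i in (0, n - 1) else 1.0
        s += w * f(mu) * legendre(l, mu)
    return (2 * l + 1) / 2.0 * s * h

c = [coeff(l) for l in range(4)]
# projection recovers the coefficients put in: 1.0, 0.5, 0.2, 0.0
print(all(abs(a - b) < 1e-6 for a, b in zip(c, [1.0, 0.5, 0.2, 0.0])))
```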

  9. A mechanism producing power law etc. distributions

    NASA Astrophysics Data System (ADS)

    Li, Heling; Shen, Hongjun; Yang, Bin

    2017-07-01

Power-law distributions play an increasingly important role in the study of complex systems. Motivated by the intractability of complex systems, the idea of incomplete statistics is utilized and expanded: three different exponential factors are introduced into the equations for the normalization condition, the statistical average, and the Shannon entropy, and probability distribution functions of exponential form, of power-law form, and of the product form between a power function and an exponential function are derived from the Shannon entropy and the maximum entropy principle. This shows that the maximum entropy principle can fully replace the equal-probability hypothesis. Since the power-law distribution and the product-form distribution, which cannot be derived via the equal-probability hypothesis, can be derived with the aid of the maximum entropy principle, it is concluded that the maximum entropy principle is the more basic principle: it embodies concepts more extensively and reveals the underlying motion laws of objects more fundamentally. At the same time, this principle also reveals the intrinsic links between Nature and the different objects of human society and the principles they all obey.

  10. Pion and kaon valence-quark parton quasidistributions

    NASA Astrophysics Data System (ADS)

    Xu, Shu-Sheng; Chang, Lei; Roberts, Craig D.; Zong, Hong-Shi

    2018-05-01

Algebraic Ansätze for the Poincaré-covariant Bethe-Salpeter wave functions of the pion and kaon are used to calculate their light-front wave functions, parton distribution amplitudes, parton quasidistribution amplitudes, valence parton distribution functions, and parton quasidistribution functions (PqDFs). The light-front wave functions are broad, concave functions, and the scale of flavor-symmetry violation in the kaon is roughly 15%, being set by the ratio of emergent masses in the s- and u-quark sectors. Parton quasidistribution amplitudes computed with longitudinal momentum Pz = 1.75 GeV provide a semiquantitatively accurate representation of the objective parton distribution amplitude, but even with Pz = 3 GeV, they cannot provide information about this amplitude's end-point behavior. On the valence-quark domain, similar outcomes characterize PqDFs. In this connection, however, the ratio of kaon-to-pion u-quark PqDFs is found to provide a good approximation to the true parton distribution function ratio on 0.4 ≲ x ≲ 0.8, suggesting that with existing resources computations of ratios of parton quasidistributions can yield results that support empirical comparison.

  11. Calculation of the transverse parton distribution functions at next-to-next-to-leading order

    NASA Astrophysics Data System (ADS)

    Gehrmann, Thomas; Lübbert, Thomas; Yang, Li Lin

    2014-06-01

We describe the perturbative calculation of the transverse parton distribution functions in all partonic channels up to next-to-next-to-leading order based on a gauge invariant operator definition. We demonstrate the cancellation of light-cone divergences and show that universal process-independent transverse parton distribution functions can be obtained through a refactorization. Our results serve as the first explicit higher-order calculation of these functions starting from first principles, and can be used to perform next-to-next-to-next-to-leading logarithmic qT resummation for a large class of processes at hadron colliders.

  12. Function Allocation in a Robust Distributed Real-Time Environment

    DTIC Science & Technology

    1991-12-01

fundamental characteristic of a distributed system is its ability to map individual logical functions of an application program onto many physical nodes... how much of a node's processor time is scheduled for function processing. IMC is the function-to-function communication required to facilitate...indicator of how much excess processor time a node has. The reconfiguration algorithms use these variables to determine the most appropriate node(s) to

  13. Full Waveform Inversion Using Student's t Distribution: a Numerical Study for Elastic Waveform Inversion and Simultaneous-Source Method

    NASA Astrophysics Data System (ADS)

    Jeong, Woodon; Kang, Minji; Kim, Shinwoong; Min, Dong-Joo; Kim, Won-Ki

    2015-06-01

Seismic full waveform inversion (FWI) has primarily been based on a least-squares optimization problem for data residuals. However, the least-squares objective function is sensitive to noise. There have been numerous studies to enhance the robustness of FWI by using robust objective functions, such as l1-norm-based objective functions. However, the l1-norm can suffer from a singularity problem when the residual wavefield is very close to zero. Recently, Student's t distribution has been applied to acoustic FWI to give reasonable results for noisy data. Student's t distribution has an overdispersed density function compared with the normal distribution and is thus useful for data with outliers. In this study, we investigate the feasibility of Student's t distribution for elastic FWI by comparing its basic properties with those of the l2-norm and l1-norm objective functions and by applying the three methods to noisy data. Our experiments show that the l2-norm is sensitive to noise, whereas the l1-norm and Student's t distribution objective functions give relatively stable and reasonable results for noisy data. When noise patterns are complicated, i.e., due to a combination of missing traces, unexpected outliers, and random noise, FWI based on Student's t distribution gives better results than l1- and l2-norm FWI. We also examine the application of simultaneous-source methods to acoustic FWI based on Student's t distribution. Computing the expectation of the coefficients of the gradient and crosstalk noise terms and plotting the signal-to-noise ratio with iteration, we were able to confirm that crosstalk noise is suppressed as the iteration progresses, even when simultaneous-source FWI is combined with Student's t distribution. From our experiments, we conclude that FWI based on Student's t distribution can retrieve subsurface material properties with less distortion from noise than l1- and l2-norm FWI, and the simultaneous-source method can be adopted to improve the computational efficiency of FWI based on Student's t distribution.
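The robustness argument can be sketched by comparing how much one severe outlier inflates each objective; the residual values and ν = 2 below are illustrative, not taken from the study:

```python
import math

def l2_misfit(res):
    return 0.5 * sum(r * r for r in res)

def l1_misfit(res):
    return sum(abs(r) for r in res)

def student_t_misfit(res, nu=2.0):
    """Negative log-likelihood (up to constants) of residuals under a
    Student's t assumption -- grows only logarithmically for outliers."""
    return 0.5 * (nu + 1.0) * sum(math.log(1.0 + r * r / nu) for r in res)

clean = [0.1, -0.2, 0.15, -0.05]
noisy = clean + [50.0]          # one severe outlier

inc_l2 = l2_misfit(noisy) - l2_misfit(clean)              # ~1250
inc_l1 = l1_misfit(noisy) - l1_misfit(clean)              # ~50
inc_t = student_t_misfit(noisy) - student_t_misfit(clean)  # ~10.7
print(inc_t < inc_l1 < inc_l2)  # True: t objective penalizes the outlier least
```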

  14. Differential subcellular distribution of ion channels and the diversity of neuronal function.

    PubMed

    Nusser, Zoltan

    2012-06-01

    Following the astonishing molecular diversity of voltage-gated ion channels that was revealed in the past few decades, the ion channel repertoire expressed by neurons has been implicated as the major factor governing their functional heterogeneity. Although the molecular structure of ion channels is a key determinant of their biophysical properties, their subcellular distribution and densities on the surface of nerve cells are just as important for fulfilling functional requirements. Recent results obtained with high resolution quantitative localization techniques revealed complex, subcellular compartment-specific distribution patterns of distinct ion channels. Here I suggest that within a given neuron type every ion channel has a unique cell surface distribution pattern, with the functional consequence that this dramatically increases the computational power of nerve cells. Copyright © 2011 Elsevier Ltd. All rights reserved.

  15. Opacity probability distribution functions for electronic systems of CN and C2 molecules including their stellar isotopic forms.

    NASA Technical Reports Server (NTRS)

    Querci, F.; Kunde, V. G.; Querci, M.

    1971-01-01

The basis and techniques are presented for generating opacity probability distribution functions for the CN molecule (red and violet systems) and the C2 molecule (Swan, Phillips, Ballik-Ramsay systems), two of the more important diatomic molecules in the spectra of carbon stars, with a view to including these distribution functions in equilibrium model atmosphere calculations. Comparisons to the CO molecule are also shown. The computation of the monochromatic absorption coefficient uses the most recent molecular data with revision of the oscillator strengths for some of the band systems. The total molecular stellar mass absorption coefficient is established through fifteen equations of molecular dissociation equilibrium to relate the distribution functions to each other on a per gram of stellar material basis.

  16. Dominant role of many-body effects on the carrier distribution function of quantum dot lasers

    NASA Astrophysics Data System (ADS)

    Peyvast, Negin; Zhou, Kejia; Hogg, Richard A.; Childs, David T. D.

    2016-03-01

The effects of free-carrier-induced shift and broadening on the carrier distribution function are studied considering different extreme cases for carrier statistics (Fermi-Dirac and random carrier distributions) as well as quantum dot (QD) ensemble inhomogeneity and state separation using a Monte Carlo model. Using this model, we show that the dominant factor determining the carrier distribution function is free-carrier effects, not the choice of carrier statistics. By using empirical values of the free-carrier-induced shift and broadening, good agreement is obtained with experimental data of QD materials obtained under electrical injection for both extreme cases of carrier statistics.

  17. An empirical analysis of the distribution of overshoots in a stationary Gaussian stochastic process

    NASA Technical Reports Server (NTRS)

    Carter, M. C.; Madison, M. W.

    1973-01-01

The frequency distribution of overshoots in a stationary Gaussian stochastic process is analyzed. The primary processes involved in this analysis are computer simulation and statistical estimation. Computer simulation is used to simulate stationary Gaussian stochastic processes that have selected autocorrelation functions. An analysis of the simulation results reveals a frequency distribution for overshoots with a functional dependence on the mean and variance of the process. Statistical estimation is then used to estimate the mean and variance of a process. It is shown that, given an autocorrelation function together with the mean and variance of the process, a frequency distribution for the number of overshoots can be estimated.
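A minimal simulation sketch in the spirit of the study: count level upcrossings (the start of each overshoot) of an uncorrelated Gaussian sequence and compare with the analytic expectation. The level and sample size are arbitrary choices, and the uncorrelated case stands in for the selected autocorrelation functions of the paper:

```python
import math
import random

random.seed(42)

def phi_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

n, u = 200000, 1.0
x = [random.gauss(0.0, 1.0) for _ in range(n)]

# count upcrossings of level u: x[i-1] below, x[i] at or above
upcrossings = sum(1 for i in range(1, n) if x[i - 1] < u <= x[i])

# for an uncorrelated Gaussian sequence the expected count is
# (n - 1) * P(x < u) * P(x >= u)
expected = (n - 1) * phi_cdf(u) * (1.0 - phi_cdf(u))
print(abs(upcrossings / expected - 1.0) < 0.05)  # True with this seed
```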

  18. Fluorinated Silsesquioxanes: Structure, Solubility, and Wetting (Briefing charts)

    DTIC Science & Technology

    2015-08-01

DISTRIBUTION A. Approved for public release; distribution unlimited. Non-wetting surfaces: superhydrophilic, hydrophilic, hydrophobic, superhydrophobic... Emulsions: superhydrophobic/superoleophilic (Science, 2007)... Not all F-POSS are... functional dichlorosilane to add any desired functionality • Platform for molecules with superhydrophobic or oleophobic properties • A variety of fluoroPOSS

  19. Uncertainty of Polarized Parton Distributions

    NASA Astrophysics Data System (ADS)

    Hirai, M.; Goto, Y.; Horaguchi, T.; Kobayashi, H.; Kumano, S.; Miyama, M.; Saito, N.; Shibata, T.-A.

Polarized parton distribution functions are determined by a χ2 analysis of polarized deep inelastic experimental data. In this paper, the uncertainty of the obtained distribution functions is investigated by the Hessian method. We find that the uncertainty of the polarized gluon distribution is fairly large. We then estimate the gluon uncertainty by including fake data generated from the prompt photon process at RHIC. We observe that the uncertainty could be reduced with these data.

  20. Maximum entropy approach to statistical inference for an ocean acoustic waveguide.

    PubMed

    Knobles, D P; Sagers, J D; Koch, R A

    2012-02-01

    A conditional probability distribution suitable for estimating the statistical properties of ocean seabed parameter values inferred from acoustic measurements is derived from a maximum entropy principle. The specification of the expectation value for an error function constrains the maximization of an entropy functional. This constraint determines the sensitivity factor (β) to the error function of the resulting probability distribution, which is a canonical form that provides a conservative estimate of the uncertainty of the parameter values. From the conditional distribution, marginal distributions for individual parameters can be determined from integration over the other parameters. The approach is an alternative to obtaining the posterior probability distribution without an intermediary determination of the likelihood function followed by an application of Bayes' rule. In this paper the expectation value that specifies the constraint is determined from the values of the error function for the model solutions obtained from a sparse number of data samples. The method is applied to ocean acoustic measurements taken on the New Jersey continental shelf. The marginal probability distribution for the values of the sound speed ratio at the surface of the seabed and the source levels of a towed source are examined for different geoacoustic model representations. © 2012 Acoustical Society of America
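The canonical form described above can be sketched numerically: given a specified expectation value of the error function, the sensitivity factor β of the max-entropy distribution p_i ∝ exp(-β e_i) is fixed by a one-dimensional root solve. The discrete error values below are made up for illustration:

```python
import math

# error-function values for a set of candidate model states (illustrative)
errors = [0.2, 0.5, 0.9, 1.4, 2.0]
target = 0.6          # specified expectation value of the error

def mean_error(beta):
    """Expected error under p_i proportional to exp(-beta * e_i)."""
    w = [math.exp(-beta * e) for e in errors]
    z = sum(w)
    return sum(e * wi for e, wi in zip(errors, w)) / z

# <e> is monotone decreasing in beta, so bisection finds the constraint root
lo, hi = -50.0, 50.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if mean_error(mid) > target:
        lo = mid          # beta too small: distribution too spread out
    else:
        hi = mid
beta = 0.5 * (lo + hi)
print(abs(mean_error(beta) - target) < 1e-9)  # True
```

With β in hand, marginal distributions for individual parameters follow by summing p_i over the remaining parameters, as the abstract describes.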

  1. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE PAGES

    Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...

    2017-11-08

Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.
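The Gumbel assumption being tested has a well-known closed-form consequence: the difference of two iid standard Gumbel errors is logistic, which is exactly what gives the MNL its closed-form choice probability. A quick Monte Carlo sketch (the utility gap d is an arbitrary choice):

```python
import math
import random

random.seed(7)

def gumbel():
    """Standard Gumbel draw via the inverse CDF: -log(-log(U))."""
    return -math.log(-math.log(random.random()))

# Under iid standard Gumbel errors, P(choose alternative 1) with a
# deterministic utility gap d is the logistic function 1/(1 + exp(-d)).
d, n = 0.8, 200000
wins = sum(1 for _ in range(n) if d + gumbel() > gumbel())
empirical = wins / n
closed_form = 1.0 / (1.0 + math.exp(-d))
print(abs(empirical - closed_form) < 0.01)  # True with this seed
```

When the errors are not Gumbel, this closed form no longer holds, which is why the semi-nonparametric nesting test in the paper matters.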

  2. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ye, Xin; Garikapati, Venu M.; You, Daehyun

Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.

  3. Competing risk models in reliability systems, an exponential distribution model with Bayesian analysis approach

    NASA Astrophysics Data System (ADS)

    Iskandar, I.

    2018-03-01

The exponential distribution is the most widely used distribution in reliability analysis. It is very suitable for representing the lengths of life in many cases and is available in a simple statistical form. The characteristic of this distribution is a constant hazard rate; the exponential distribution is the special case of the Weibull distribution with unit shape parameter. In this paper our effort is to introduce the basic notions that constitute an exponential competing-risks model in reliability analysis using a Bayesian analysis approach and to present the analytic methods. The cases are limited to models with independent causes of failure. A non-informative prior distribution is used in our analysis. The paper describes the likelihood function, followed by the posterior function and the estimates of the point, interval, hazard function, and reliability. The net probability of failure if only one specific risk is present, the crude probability of failure due to a specific risk in the presence of other causes, and partial crude probabilities are also included.
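For independent exponential risks, the net and crude failure probabilities mentioned above have simple closed forms; a sketch with illustrative failure rates (not values from the paper):

```python
import math

# independent exponential competing risks with rates lam[i] (illustrative)
lam = [0.02, 0.05, 0.01]   # failures per unit time
Lam = sum(lam)             # total hazard: minimum of exponentials
t = 30.0

# net probability of failure from risk j alone: 1 - exp(-lam[j] * t)
net = [1.0 - math.exp(-l * t) for l in lam]

# crude probability of failure from risk j in the presence of the others:
# (lam[j] / Lam) * (1 - exp(-Lam * t))
crude = [(l / Lam) * (1.0 - math.exp(-Lam * t)) for l in lam]

# the crude probabilities partition the overall failure probability
overall = 1.0 - math.exp(-Lam * t)
print(abs(sum(crude) - overall) < 1e-12)  # True
```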

  4. [Data distribution and transformation in population based sampling survey of viral load in HIV positive men who have sex with men in China].

    PubMed

    Dou, Z; Chen, J; Jiang, Z; Song, W L; Xu, J; Wu, Z Y

    2017-11-10

Objective: To understand the distribution of population viral load (PVL) data in HIV-infected men who have sex with men (MSM), fit a distribution function, and explore appropriate estimating parameters for PVL. Methods: The detection limit of viral load (VL) was ≤50 copies/ml. Box-Cox transformation and normal distribution tests were used to describe the general distribution characteristics of the original and transformed PVL data; a stable distribution function was then fitted, with a goodness-of-fit test. Results: The original PVL data fitted a skewed distribution with a coefficient of variation of 622.24%, and had a multimodal distribution after Box-Cox transformation with optimal parameter (λ) of -0.11. The distribution of PVL data over the detection limit was skewed and heavy-tailed when transformed by Box-Cox with optimal λ = 0. By fitting the distribution function of the transformed data over the detection limit, it matched the stable distribution (SD) function (α = 1.70, β = -1.00, γ = 0.78, δ = 4.03). Conclusions: The original PVL data had some censored data below the detection limit, and the data over the detection limit had an abnormal distribution with a large degree of variation. When the proportion of censored data was large, it was inappropriate to replace the censored values with half the detection limit. The log-transformed data over the detection limit fitted the SD. The median (M) and inter-quartile range (IQR) of the log-transformed data can be used to describe the central tendency and dispersion of the data over the detection limit.
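The Box-Cox step can be sketched with a profile-likelihood grid search; the synthetic log-normal sample below stands in for the (unavailable) PVL data, so the optimal λ should land near 0, i.e. the log transform:

```python
import math
import random

random.seed(1)

def boxcox(x, lam):
    """Box-Cox transform; reduces to log(x) at lam = 0."""
    return math.log(x) if abs(lam) < 1e-12 else (x ** lam - 1.0) / lam

def boxcox_loglik(data, lam):
    """Profile log-likelihood of the Box-Cox parameter:
    -n/2 * log(var of transformed data) + (lam - 1) * sum(log x)."""
    n = len(data)
    y = [boxcox(x, lam) for x in data]
    m = sum(y) / n
    var = sum((v - m) ** 2 for v in y) / n
    return -0.5 * n * math.log(var) + (lam - 1.0) * sum(math.log(x) for x in data)

# synthetic log-normal "viral load" values (illustrative, not real PVL data)
data = [math.exp(random.gauss(3.0, 1.0)) for _ in range(2000)]
grid = [i / 100.0 for i in range(-100, 101)]
best = max(grid, key=lambda lam: boxcox_loglik(data, lam))
print(-0.3 < best < 0.3)  # True with this seed: optimal lambda near 0
```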

  5. The class of L ∩ D and its application to renewal reward process

    NASA Astrophysics Data System (ADS)

    Kamışlık, Aslı Bektaş; Kesemen, Tülay; Khaniyev, Tahir

    2018-01-01

    The class of L ∩ D is generated by the intersection of two important subclasses of heavy tailed distributions: the long tailed distributions and the dominated varying distributions. This class is itself an important member of the heavy tailed distributions and has principal application areas in renewal, renewal reward, and random walk processes. The aim of this study is to review some well and less known results on renewal functions generated by the class of L ∩ D and to apply them to a special renewal reward process known in the literature as a semi-Markovian inventory model of type (s, S). We focus in particular on the Pareto distribution, which belongs to the L ∩ D subclass of heavy tailed distributions. As a first step we obtain asymptotic results for the renewal function generated by a Pareto distribution from the class of L ∩ D, using some well-known results of Embrechts and Omey [1]. We then apply the results obtained for the Pareto distribution to renewal reward processes. As an application we investigate an inventory model of type (s, S) when demands have a Pareto distribution from the class of L ∩ D. We obtain an asymptotic expansion for the ergodic distribution function and, finally, an asymptotic expansion for the nth-order moments of the distribution of this process.

  6. Searching for the best thermoelectrics through the optimization of transport distribution function

    NASA Astrophysics Data System (ADS)

    Fan, Zheyong; Wang, Hui-Qiong; Zheng, Jin-Cheng

    2011-04-01

    The thermoelectric performance of materials depends on the interplay, or competition, among three key components: the electrical conductivity, the thermopower, and the thermal conductivity, all of which can be written as integrals of a single function, the transport distribution function (TDF). Mahan and Sofo [Proc. Natl. Acad. Sci. USA 93, 7436 (1996)] found that, mathematically, the thermoelectric properties could be maximized by a delta-shaped transport distribution, associated with a narrow distribution of the energy of the electrons participating in the transport process. In this work, we revisit the effect of the shape of the TDF on the thermoelectric figure of merit. It is confirmed both heuristically and numerically that, among all normalized TDFs, the Dirac delta function leads to the largest thermoelectric figure of merit. For a bounded TDF, however, a rectangular-shaped distribution is instead found to be the most favorable one, which could be achieved through nanostructuring routes. Our results also indicate that a high thermoelectric figure of merit is associated with appropriate violations of the Wiedemann-Franz law.

  7. How Bright is the Proton? A Precise Determination of the Photon Parton Distribution Function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manohar, Aneesh; Nason, Paolo; Salam, Gavin P.

    2016-12-09

    It has become apparent in recent years that it is important, notably for a range of physics studies at the Large Hadron Collider, to have accurate knowledge on the distribution of photons in the proton. We show how the photon parton distribution function (PDF) can be determined in a model-independent manner, using electron-proton (ep) scattering data, in effect viewing the ep → e + X process as an electron scattering off the photon field of the proton. To this end, we consider an imaginary, beyond the Standard Model process with a flavor changing photon-lepton vertex. We write its cross section in two ways: one in terms of proton structure functions, the other in terms of a photon distribution. Requiring their equivalence yields the photon distribution as an integral over proton structure functions. As a result of the good precision of ep data, we constrain the photon PDF at the level of 1%–2% over a wide range of momentum fractions.

  8. How Bright is the Proton? A Precise Determination of the Photon Parton Distribution Function.

    PubMed

    Manohar, Aneesh; Nason, Paolo; Salam, Gavin P; Zanderighi, Giulia

    2016-12-09

    It has become apparent in recent years that it is important, notably for a range of physics studies at the Large Hadron Collider, to have accurate knowledge on the distribution of photons in the proton. We show how the photon parton distribution function (PDF) can be determined in a model-independent manner, using electron-proton (ep) scattering data, in effect viewing the ep→e+X process as an electron scattering off the photon field of the proton. To this end, we consider an imaginary, beyond the Standard Model process with a flavor changing photon-lepton vertex. We write its cross section in two ways: one in terms of proton structure functions, the other in terms of a photon distribution. Requiring their equivalence yields the photon distribution as an integral over proton structure functions. As a result of the good precision of ep data, we constrain the photon PDF at the level of 1%-2% over a wide range of momentum fractions.

  9. Preisach modeling of temperature-dependent ferroelectric response of piezoceramics at sub-switching regime

    NASA Astrophysics Data System (ADS)

    Ochoa, Diego Alejandro; García, Jose Eduardo

    2016-04-01

    The Preisach model is a classical method for describing nonlinear behavior in hysteretic systems. According to this model, a hysteretic system contains a collection of simple bistable units which are characterized by an internal field and a coercive field. This set of bistable units exhibits a statistical distribution that depends on these fields as parameters. Thus, nonlinear response depends on the specific distribution function associated with the material. This model is satisfactorily used in this work to describe the temperature-dependent ferroelectric response in PZT- and KNN-based piezoceramics. A distribution function expanded in Maclaurin series considering only the first terms in the internal field and the coercive field is proposed. Changes in coefficient relations of a single distribution function allow us to explain the complex temperature dependence of hard piezoceramic behavior. A similar analysis based on the same form of the distribution function shows that the KNL-NTS properties soften around its orthorhombic to tetragonal phase transition.

  10. Prediction of sound transmission loss through multilayered panels by using Gaussian distribution of directional incident energy

    PubMed

    Kang; Ih; Kim; Kim

    2000-03-01

    In this study, a new prediction method is suggested for the sound transmission loss (STL) of multilayered panels of infinite extent. Conventional methods such as the random or field incidence approach often give significant discrepancies in predicting the STL of multilayered panels when compared with experiments. In this paper, appropriate directional distributions of incident energy for predicting the STL of multilayered panels are proposed. In order to find a weighting function to represent the directional distribution of incident energy on the wall in a reverberation chamber, numerical simulations using a ray-tracing technique are carried out. Simulation results reveal that the directional distribution can be approximately expressed by a Gaussian distribution function in terms of the angle of incidence. The Gaussian function is applied to predict the STL of various multilayered panel configurations as well as single panels. Comparisons between measurement and prediction show good agreement, validating the proposed Gaussian function approach.

  11. Genome-wide survey of DNA-binding proteins in Arabidopsis thaliana: analysis of distribution and functions.

    PubMed

    Malhotra, Sony; Sowdhamini, Ramanathan

    2013-08-01

    The interaction of proteins with their respective DNA targets is known to control many high-fidelity cellular processes. Performing a comprehensive survey of sequenced genomes for DNA-binding proteins (DBPs) will help in understanding their distribution and associated functions in a particular genome. The availability of the fully sequenced genome of Arabidopsis thaliana enables a review of the distribution of DBPs in this model plant genome. We used profiles of both structure- and sequence-based DNA-binding families, derived from the PDB and PFam databases, to perform the survey. This resulted in 4471 proteins identified as DNA-binding in the Arabidopsis genome, distributed across 300 different PFam families. Apart from several plant-specific DNA-binding families, certain RING fingers and leucine zippers also had high representation. Our search protocol helped to assign DNA-binding properties to several proteins that were previously marked as unknown, putative or hypothetical in function. The distribution of Arabidopsis genes with a role in plant DNA repair was studied in particular and noted for functional mapping. The functions observed to be overrepresented in the plant genome harbour DNA-3-methyladenine glycosylase activity, alkylbase DNA N-glycosylase activity and DNA-(apurinic or apyrimidinic site) lyase activity, suggesting roles in specialized functions such as gene regulation and DNA repair.

  12. Electrochemical reactions in fluoride-ion batteries: mechanistic insights from pair distribution function analysis

    DOE PAGES

    Grenier, Antonin; Porras-Gutierrez, Ana-Gabriela; Groult, Henri; ...

    2017-07-05

    Detailed analysis of electrochemical reactions occurring in rechargeable Fluoride-Ion Batteries (FIBs) is provided by means of synchrotron X-ray diffraction (XRD) and Pair Distribution Function (PDF) analysis.

  13. Analysis of the proton longitudinal structure function from the gluon distribution function

    NASA Astrophysics Data System (ADS)

    Boroun, G. R.; Rezaei, B.

    2012-11-01

    We make a critical, next-to-leading-order study of the relationship between the longitudinal structure function F_L and the gluon distribution proposed in Cooper-Sarkar et al. (Z. Phys. C 39:281, 1988; Acta Phys. Pol. B 34:2911, 2003), which is frequently used to extract the gluon distribution from the proton longitudinal structure function at small x. The gluon density is obtained by expanding at particular choices of the point of expansion and compared with the hard-Pomeron behavior for the gluon density. Comparisons with H1 data are made and predictions for the proposed best approach are also provided.

  14. Multiplicity distributions of charged hadrons in νp and ν̄p charged current interactions

    NASA Astrophysics Data System (ADS)

    Jones, G. T.; Jones, R. W. L.; Kennedy, B. W.; Morrison, D. R. O.; Mobayyen, M. M.; Wainstein, S.; Aderholz, M.; Hantke, D.; Katz, U. F.; Kern, J.; Schmitz, N.; Wittek, W.; Borner, H. P.; Myatt, G.; Radojicic, D.; Burke, S.

    1992-03-01

    Using data on νp and ν̄p charged current interactions from a bubble chamber experiment with BEBC at CERN, the multiplicity distributions of charged hadrons are investigated. The analysis is based on ~20000 events with incident ν and ~10000 events with incident ν̄. The invariant mass W of the total hadronic system ranges from 3 GeV to ~14 GeV. The experimental multiplicity distributions are fitted by the negative binomial function (for different intervals of W and in different intervals of the rapidity y), by the Levy function and the lognormal function. All three parametrizations give acceptable values for χ². For fixed W, forward and backward multiplicities are found to be uncorrelated. The normalized moments of the charged multiplicity distributions are measured as a function of W. They show a violation of KNO scaling.

  15. Statistics of intensity in adaptive-optics images and their usefulness for detection and photometry of exoplanets.

    PubMed

    Gladysz, Szymon; Yaitskova, Natalia; Christou, Julian C

    2010-11-01

    This paper is an introduction to the problem of modeling the probability density function of adaptive-optics speckle. We show that with the modified Rician distribution one cannot describe the statistics of light on axis. A dual solution is proposed: the modified Rician distribution for off-axis speckle and gamma-based distribution for the core of the point spread function. From these two distributions we derive optimal statistical discriminators between real sources and quasi-static speckles. In the second part of the paper the morphological difference between the two probability density functions is used to constrain a one-dimensional, "blind," iterative deconvolution at the position of an exoplanet. Separation of the probability density functions of signal and speckle yields accurate differential photometry in our simulations of the SPHERE planet finder instrument.
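
    The modified Rician intensity model mentioned above can be illustrated with a minimal simulation sketch; the deterministic intensity Is, the speckle-halo intensity Ic, the sample size, and the seed are illustrative assumptions, not values from the paper. Adding a circular complex Gaussian speckle field to a constant field gives an intensity whose first two moments are E[I] = Is + Ic and Var[I] = Ic² + 2·Is·Ic.

    ```python
    import numpy as np

    # Modified Rician speckle intensity: constant (deterministic) field plus a
    # circular complex Gaussian halo. Is, Ic, n, and the seed are illustrative.
    rng = np.random.default_rng(6)
    Is, Ic = 2.0, 0.5
    n = 500_000
    halo = rng.normal(0, np.sqrt(Ic / 2), n) + 1j * rng.normal(0, np.sqrt(Ic / 2), n)
    I = np.abs(np.sqrt(Is) + halo) ** 2

    # Moments of the modified Rician: E[I] = Is + Ic, Var[I] = Ic**2 + 2 * Is * Ic.
    print(round(I.mean(), 2))   # close to Is + Ic = 2.5
    print(round(I.var(), 2))    # close to Ic**2 + 2*Is*Ic = 2.25
    ```

    On axis, where the residual deterministic core dominates, this two-parameter intensity model breaks down, which is the motivation for the paper's gamma-based alternative.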

  16. 3D ion velocity distribution function measurement in an electric thruster using laser induced fluorescence tomography

    NASA Astrophysics Data System (ADS)

    Elias, P. Q.; Jarrige, J.; Cucchetti, E.; Cannat, F.; Packan, D.

    2017-09-01

    Measuring the full ion velocity distribution function (IVDF) by non-intrusive techniques can improve our understanding of the ionization processes and beam dynamics at work in electric thrusters. In this paper, a Laser-Induced Fluorescence (LIF) tomographic reconstruction technique is applied to the measurement of the IVDF in the plume of a miniature Hall effect thruster. A setup is developed to move the laser axis along two rotation axes around the measurement volume. The fluorescence spectra taken from different viewing angles are combined using a tomographic reconstruction algorithm to build the complete 3D (in phase space) time-averaged distribution function. For the first time, this technique is used in the plume of a miniature Hall effect thruster to measure the full distribution function of the xenon ions. Two examples of reconstructions are provided, in front of the thruster nose-cone and in front of the anode channel. The reconstruction reveals the features of the ion beam, in particular on the thruster axis where a toroidal distribution function is observed. These findings are consistent with the thruster shape and operation. This technique, which can be used with other LIF schemes, could be helpful in revealing the details of the ion production regions and the beam dynamics. Using a more powerful laser source, the current implementation of the technique could be improved to reduce the measurement time and also to reconstruct the temporal evolution of the distribution function.

  17. Particle Size Distributions in Atmospheric Clouds

    NASA Technical Reports Server (NTRS)

    Paoli, Roberto; Shariff, Karim

    2003-01-01

    In this note, we derive a transport equation for a spatially integrated distribution function of particle sizes that is suitable for sparse particle systems, such as atmospheric clouds. This is done by integrating a Boltzmann equation for a (local) distribution function over an arbitrary but finite volume. A methodology for evolving the moments of the integrated distribution is presented. These moments can be either tracked for a finite number of discrete populations ('clusters') or treated as continuum variables.

  18. Grassmann phase space theory and the Jaynes-Cummings model

    NASA Astrophysics Data System (ADS)

    Dalton, B. J.; Garraway, B. M.; Jeffers, J.; Barnett, S. M.

    2013-07-01

    The Jaynes-Cummings model of a two-level atom in a single mode cavity is of fundamental importance both in quantum optics and in quantum physics generally, involving the interaction of two simple quantum systems—one fermionic system (the TLA), the other bosonic (the cavity mode). Depending on the initial conditions a variety of interesting effects occur, ranging from ongoing oscillations of the atomic population difference at the Rabi frequency when the atom is excited and the cavity is in an n-photon Fock state, to collapses and revivals of these oscillations starting with the atom unexcited and the cavity mode in a coherent state. The observation of revivals for Rydberg atoms in a high-Q microwave cavity is key experimental evidence for quantisation of the EM field. Theoretical treatments of the Jaynes-Cummings model based on expanding the state vector in terms of products of atomic and n-photon states and deriving coupled equations for the amplitudes are a well-known and simple method for determining the effects. In quantum optics however, the behaviour of the bosonic quantum EM field is often treated using phase space methods, where the bosonic mode annihilation and creation operators are represented by c-number phase space variables, with the density operator represented by a distribution function of these variables. Fokker-Planck equations for the distribution function are obtained, and either used directly to determine quantities of experimental interest or used to develop c-number Langevin equations for stochastic versions of the phase space variables from which experimental quantities are obtained as stochastic averages. 
Phase space methods have also been developed to include atomic systems, with the atomic spin operators being represented by c-number phase space variables, and distribution functions involving these variables and those for any bosonic modes being shown to satisfy Fokker-Planck equations from which c-number Langevin equations are often developed. However, atomic spin operators satisfy the standard angular momentum commutation rules rather than the commutation rules for bosonic annihilation and creation operators, and are in fact second order combinations of fermionic annihilation and creation operators. Though phase space methods in which the fermionic operators are represented directly by c-number phase space variables have not been successful, the anti-commutation rules for these operators suggest the possibility of using Grassmann variables—which have similar anti-commutation properties. However, in spite of the seminal work by Cahill and Glauber and a few applications, the use of phase space methods in quantum optics to treat fermionic systems by representing fermionic annihilation and creation operators directly by Grassmann phase space variables is rather rare. This paper shows that phase space methods using a positive P type distribution function involving both c-number variables (for the cavity mode) and Grassmann variables (for the TLA) can be used to treat the Jaynes-Cummings model. Although it is a Grassmann function, the distribution function is equivalent to six c-number functions of the two bosonic variables. Experimental quantities are given as bosonic phase space integrals involving the six functions. A Fokker-Planck equation involving both left and right Grassmann differentiations can be obtained for the distribution function, and is equivalent to six coupled equations for the six c-number functions. 
The approach used involves choosing the canonical form of the (non-unique) positive P distribution function, in which the correspondence rules for the bosonic operators are non-standard and hence the Fokker-Planck equation is also unusual. Initial conditions, such as those above for initially uncorrelated states, are discussed and used to determine the initial distribution function. Transformations to new bosonic variables rotating at the cavity frequency enable the six coupled equations for the new c-number functions-that are also equivalent to the canonical Grassmann distribution function-to be solved analytically, based on an ansatz from an earlier paper by Stenholm. It is then shown that the distribution function is exactly the same as that determined from the well-known solution based on coupled amplitude equations. In quantum-atom optics theories for many atom bosonic and fermionic systems are needed. With large atom numbers, treatments must often take into account many quantum modes—especially for fermions. Generalisations of phase space distribution functions of phase space variables for a few modes to phase space distribution functionals of field functions (which represent the field operators, c-number fields for bosons, Grassmann fields for fermions) are now being developed for large systems. For the fermionic case, the treatment of the simple two mode problem represented by the Jaynes-Cummings model is a useful test case for the future development of phase space Grassmann distribution functional methods for fermionic applications in quantum-atom optics.

  19. Determine Neuronal Tuning Curves by Exploring Optimum Firing Rate Distribution for Information Efficiency

    PubMed Central

    Han, Fang; Wang, Zhijie; Fan, Hong

    2017-01-01

    This paper proposes a new method to determine neuronal tuning curves for maximum information efficiency by computing the optimum firing rate distribution. First, we propose a general definition of information efficiency, which involves the mutual information and the neuronal energy consumption. The energy consumption is composed of two parts: neuronal basic energy consumption and neuronal spike emission energy consumption. A parameter modeling the relative importance of energy consumption is introduced in the definition of information efficiency. Then, we design a combination of exponential functions to describe the optimum firing rate distribution, based on an analysis of how the mutual information and the energy consumption depend on the shape of the firing rate distribution. Furthermore, we develop a rapid algorithm to search for the parameter values of the optimum firing rate distribution function. Finally, we find with the rapid algorithm that a combination of two different exponential functions with two free parameters can describe the optimum firing rate distribution accurately. We also find that if energy consumption is relatively unimportant (important) compared to the mutual information, or the neuronal basic energy consumption is relatively large (small), the curve of the optimum firing rate distribution will be relatively flat (steep), and the corresponding optimum tuning curve exhibits a sigmoid form if the stimuli are normally distributed. PMID:28270760

  20. Interpolating Non-Parametric Distributions of Hourly Rainfall Intensities Using Random Mixing

    NASA Astrophysics Data System (ADS)

    Mosthaf, Tobias; Bárdossy, András; Hörning, Sebastian

    2015-04-01

    The correct spatial interpolation of hourly rainfall intensity distributions is of great importance for stochastic rainfall models. Poorly interpolated distributions may lead to over- or underestimation of rainfall and consequently to wrong estimates in subsequent applications, like hydrological or hydraulic models. By analyzing the spatial relation of empirical rainfall distribution functions, a persistent order of the quantile values over a wide range of non-exceedance probabilities is observed. As the order remains similar, the interpolation weights of quantile values for one particular non-exceedance probability can be applied to the other probabilities. This assumption enables the use of kernel-smoothed distribution functions for interpolation purposes. Comparing the order of hourly quantile values over different gauges with the order of their daily quantile values for equal probabilities results in high correlations. The hourly quantile values also show high correlations with elevation. The incorporation of these two covariates into the interpolation is therefore tested. As only positive interpolation weights for the quantile values assure a monotonically increasing distribution function, the use of geostatistical methods like kriging is problematic: kriging with external drift, which would incorporate the secondary information, is not applicable. Nonetheless, it would be fruitful to make use of covariates. To overcome this shortcoming, a new random mixing approach for spatial random fields is applied. Within the mixing process, hourly quantile values are treated as equality constraints and correlations with elevation values are included as relationship constraints. To profit from the dependence on daily quantile values, distribution functions of daily gauges are used to set up lower-equal and greater-equal constraints at their locations. In this way the denser daily gauge network can be included in the interpolation of the hourly distribution functions. 
The applicability of this new interpolation procedure will be shown for around 250 hourly rainfall gauges in the German federal state of Baden-Württemberg. The performance of the random mixing technique within the interpolation is compared to applicable kriging methods. Additionally, the interpolation of kernel smoothed distribution functions is compared with the interpolation of fitted parametric distributions.
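
    The requirement of positive interpolation weights can be illustrated with a small sketch; the gamma-distributed toy gauges and the fixed weights are assumptions for illustration, and the paper's random mixing approach is not reproduced here. A convex combination of monotone quantile curves is itself monotone, so it defines a valid distribution function.

    ```python
    import numpy as np

    # Toy quantile curves for two hypothetical rainfall gauges (gamma samples are
    # stand-ins; the actual study data and random mixing method are not reproduced).
    probs = np.linspace(0.05, 0.95, 19)
    q_a = np.quantile(np.random.default_rng(3).gamma(0.6, 4.0, 20_000), probs)
    q_b = np.quantile(np.random.default_rng(4).gamma(0.8, 5.0, 20_000), probs)

    # Interpolate with fixed positive weights (e.g. from inverse distance), reused
    # across all non-exceedance probabilities as the persistent quantile order suggests.
    w_a, w_b = 0.4, 0.6
    q_interp = w_a * q_a + w_b * q_b

    # Positive weights preserve monotonicity, so q_interp is a valid quantile curve.
    print(bool(np.all(np.diff(q_interp) >= 0)))  # True
    ```

    A negative weight, as kriging can produce, could break this monotonicity and yield an invalid (non-increasing) distribution function, which is exactly the problem the abstract raises.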

  1. Total recall in distributive associative memories

    NASA Technical Reports Server (NTRS)

    Danforth, Douglas G.

    1991-01-01

    Iterative error correction of asymptotically large associative memories is equivalent to a one-step learning rule. This rule is the inverse of the activation function of the memory. Spectral representations of nonlinear activation functions are used to obtain the inverse in closed form for Sparse Distributed Memory, Selected-Coordinate Design, and Radial Basis Functions.

  2. Nonparametric Bayesian inference for mean residual life functions in survival analysis.

    PubMed

    Poynor, Valerie; Kottas, Athanasios

    2018-01-19

    Modeling and inference for survival analysis problems typically revolves around different functions related to the survival distribution. Here, we focus on the mean residual life (MRL) function, which provides the expected remaining lifetime given that a subject has survived (i.e. is event-free) up to a particular time. This function is of direct interest in reliability, medical, and actuarial fields. In addition to its practical interpretation, the MRL function characterizes the survival distribution. We develop general Bayesian nonparametric inference for MRL functions built from a Dirichlet process mixture model for the associated survival distribution. The resulting model for the MRL function admits a representation as a mixture of the kernel MRL functions with time-dependent mixture weights. This model structure allows for a wide range of shapes for the MRL function. Particular emphasis is placed on the selection of the mixture kernel, taken to be a gamma distribution, to obtain desirable properties for the MRL function arising from the mixture model. The inference method is illustrated with a data set of two experimental groups and a data set involving right censoring. The supplementary material available at Biostatistics online provides further results on empirical performance of the model, using simulated data examples.
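
    As a small illustrative check of the MRL definition (the distribution choice, rate, and seed are assumptions, not the paper's Dirichlet process mixture model): for an exponential lifetime, memorylessness makes the MRL constant, m(t) = 1/λ at every t.

    ```python
    import numpy as np

    # Empirical mean residual life m(t) = E[T - t | T > t], estimated from samples.
    lam = 2.0
    rng = np.random.default_rng(2)
    samples = rng.exponential(1 / lam, 500_000)

    def mrl(x, t):
        surv = x[x > t]              # subjects still event-free at time t
        return (surv - t).mean()     # expected remaining lifetime

    # For the exponential distribution, memorylessness gives m(t) = 1/lam at every t.
    print(round(mrl(samples, 0.0), 2))  # close to 0.5
    print(round(mrl(samples, 1.0), 2))  # close to 0.5
    ```

    Non-exponential survival distributions give non-constant MRL functions; capturing such shapes flexibly is what motivates the mixture model in the paper.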

  3. Advanced Inverter Functions and Communication Protocols for Distribution Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nagarajan, Adarsh; Palmintier, Bryan; Baggu, Murali

    2016-05-05

    This paper aims at identifying the advanced features required by distribution management systems (DMS) service providers to bring inverter-connected distributed energy resources into use as an intelligent grid resource. This work explores the standard functions needed in the future DMS for enterprise integration of distributed energy resources (DER). The important DMS functionalities such as DER management in aggregate groups, including the discovery of capabilities, status monitoring, and dispatch of real and reactive power are addressed in this paper. It is intended to provide the industry with a point of reference for DER integration with other utility applications and to provide guidance to research and standards development organizations.

  4. Differential memory in the earth's magnetotail

    NASA Technical Reports Server (NTRS)

    Burkhart, G. R.; Chen, J.

    1991-01-01

    The process of 'differential memory' in the earth's magnetotail is studied in the framework of the modified Harris magnetotail geometry. It is verified that differential memory can generate non-Maxwellian features in the modified Harris field model. The time scales and the potentially observable distribution functions associated with the process of differential memory are investigated, and it is shown that non-Maxwellian distributions can evolve as a test particle response to distribution function boundary conditions in a Harris field magnetotail model. The non-Maxwellian features which arise from distribution function mapping have definite time scales associated with them, which are generally shorter than the earthward convection time scale but longer than the typical Alfven crossing time.

  5. Peculiarities of the momentum distribution functions of strongly correlated charged fermions

    NASA Astrophysics Data System (ADS)

    Larkin, A. S.; Filinov, V. S.; Fortov, V. E.

    2018-01-01

    A new numerical version of the Wigner approach to the quantum thermodynamics of strongly coupled particle systems has been developed for extreme conditions, when analytical approximations based on different kinds of perturbation theory cannot be applied. An explicit analytical expression for the Wigner function has been obtained in linear and harmonic approximations. Fermi statistical effects are accounted for by an effective pair pseudopotential that depends on the coordinates, momenta and degeneracy parameter of the particles and takes into account Pauli blocking of fermions. A new quantum Monte Carlo method for calculating average values of arbitrary quantum operators has been developed. Calculations of the momentum distribution functions and the pair correlation functions of a degenerate ideal Fermi gas have been carried out to test the developed approach. Comparison of the obtained momentum distribution functions of strongly correlated Coulomb systems with the Maxwell-Boltzmann and Fermi distributions shows the significant influence of interparticle interaction both at small momenta and in the high energy quantum 'tails'.
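
    The comparison between Fermi and Maxwell-Boltzmann statistics can be illustrated with a toy occupation-number sketch (the temperature, chemical potential, and energy grid are arbitrary assumptions, in units with k_B = 1): Pauli blocking caps the Fermi-Dirac occupation at 1, while the Maxwell-Boltzmann occupation exceeds it at every energy.

    ```python
    import numpy as np

    # Ideal-gas occupation numbers on an energy grid (arbitrary units, k_B = 1).
    e = np.linspace(0.0, 5.0, 200)
    T, mu = 0.5, 1.0

    n_fd = 1.0 / (np.exp((e - mu) / T) + 1.0)  # Fermi-Dirac: Pauli blocking caps n at 1
    n_mb = np.exp(-(e - mu) / T)               # Maxwell-Boltzmann: no such cap

    print(bool(np.all(n_fd <= 1.0)))   # True
    print(bool(np.all(n_mb > n_fd)))   # True: 1/x > 1/(x+1) for any x > 0
    ```

    For an ideal gas both curves coincide at high energy (x = exp((e-mu)/T) large), which is why degeneracy effects show up mainly at small momenta, consistent with the abstract's observation.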

  6. On the mass function of stars growing in a flocculent medium

    NASA Astrophysics Data System (ADS)

    Maschberger, Th.

    2013-12-01

    Stars form in regions of very inhomogeneous density and may have chaotic orbital motions. This leads to a time variation of the accretion rate, which spreads the masses over some mass range. We investigate the mass distribution functions that arise from fluctuating accretion rates in non-linear accretion, ṁ ∝ m^α. The distribution functions evolve in time and develop a power-law tail attached to a lognormal body, as in numerical simulations of star formation. Small fluctuations may be modelled by a Gaussian and develop a power-law tail ∝ m^(-α) at the high-mass side for α > 1 and at the low-mass side for α < 1. Large fluctuations require that their distribution is strictly positive, for example lognormal. For positive fluctuations the mass distribution function develops the power-law tail always at the high-mass side, independent of α being larger or smaller than unity. Furthermore, we discuss Bondi-Hoyle accretion in a supersonically turbulent medium, the range of parameters for which non-linear stochastic growth could shape the stellar initial mass function, as well as the effects of a distribution of initial masses and growth times.
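
    The emergence of the lognormal body can be sketched for the α = 1 case with Gaussian rate fluctuations (the step size, fluctuation amplitude, ensemble size, and seed are arbitrary assumptions): each growth step multiplies the mass, so ln m accumulates independent increments and the mass distribution develops a lognormal body.

    ```python
    import numpy as np
    from scipy import stats

    # Linear accretion (alpha = 1) with a Gaussian-fluctuating growth rate.
    rng = np.random.default_rng(5)
    n_stars, n_steps, dt = 20_000, 400, 0.01
    m = np.ones(n_stars)
    for _ in range(n_steps):
        # Each step multiplies the mass: ln m performs a Gaussian random walk.
        m *= np.exp((1.0 + rng.normal(0.0, 1.0, n_stars)) * dt)

    log_m = np.log(m)
    # ln m is a sum of Gaussian increments, hence itself Gaussian: skewness near 0.
    print(abs(stats.skew(log_m)) < 0.1)  # True
    ```

    For α ≠ 1 the growth is genuinely non-linear in m and the increments of ln m are no longer mass-independent, which is what produces the power-law tail discussed in the abstract.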

  7. MaxEnt alternatives to pearson family distributions

    NASA Astrophysics Data System (ADS)

    Stokes, Barrie J.

    2012-05-01

    In a previous MaxEnt conference [11] a method of obtaining MaxEnt univariate distributions under a variety of constraints was presented. The Mathematica function Interpolation[], normally used with numerical data, can also process "semi-symbolic" data, and Lagrange Multiplier equations were solved for a set of symbolic ordinates describing the required MaxEnt probability density function. We apply a more developed version of this approach to finding MaxEnt distributions having prescribed β1 and β2 values, and compare the entropy of the MaxEnt distribution to that of the Pearson family distribution having the same β1 and β2. These MaxEnt distributions do have, in general, greater entropy than the related Pearson distribution. In accordance with Jaynes' Maximum Entropy Principle, these MaxEnt distributions are thus to be preferred to the corresponding Pearson distributions as priors in Bayes' Theorem.

  8. Unraveling hadron structure with generalized parton distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrei Belitsky; Anatoly Radyushkin

    2004-10-01

    The recently introduced generalized parton distributions have emerged as a universal tool to describe hadrons in terms of quark and gluonic degrees of freedom. They combine the features of form factors, parton densities and distribution amplitudes - the functions used for a long time in studies of hadronic structure. Generalized parton distributions are analogous to the phase-space Wigner quasi-probability function of non-relativistic quantum mechanics which encodes full information on a quantum-mechanical system. We give an extensive review of main achievements in the development of this formalism. We discuss physical interpretation and basic properties of generalized parton distributions, their modeling and QCD evolution in the leading and next-to-leading orders. We describe how these functions enter a wide class of exclusive reactions, such as electro- and photo-production of photons, lepton pairs, or mesons.

  9. Theoretical study of sum-frequency vibrational spectroscopy on limonene surface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Ren-Hui, E-mail: zrh@iccas.ac.cn; Liu, Hao; Jing, Yuan-Yuan

    2014-03-14

    By combining molecular dynamics (MD) simulation and quantum chemistry computation, we calculate the surface sum-frequency vibrational spectroscopy (SFVS) of R-limonene molecules at the gas-liquid interface for SSP, PPP, and SPS polarization combinations. The distributions of the Euler angles are obtained using MD simulation; the ψ-distribution is between isotropic and Gaussian. Instead of the MD distributions, different analytical distributions such as the δ-function, Gaussian, and isotropic distributions are applied to simulate surface SFVS. We find that different distributions significantly affect the absolute SFVS intensity and also influence the relative SFVS intensity, and the δ-function distribution should be used with caution when the orientation distribution is broad. Furthermore, the reason that the SPS signal is weak in the reflected arrangement is discussed.

  10. A New Lifetime Distribution with Bathtub and Unimodal Hazard Function

    NASA Astrophysics Data System (ADS)

    Barriga, Gladys D. C.; Louzada-Neto, Francisco; Cancho, Vicente G.

    2008-11-01

    In this paper we propose a new lifetime distribution which accommodates bathtub-shaped, unimodal, increasing and decreasing hazard functions. Some particular cases are derived, including the standard Weibull distribution. Maximum likelihood estimation is considered for estimating the three parameters present in the model. The methodology is illustrated on a real data set on industrial devices in a life test.

  11. New methods for estimating parameters of weibull functions to characterize future diameter distributions in forest stands

    Treesearch

    Quang V. Cao; Shanna M. McCarty

    2006-01-01

    Diameter distributions in a forest stand have been successfully characterized by use of the Weibull function. Of special interest are cases where parameters of a Weibull distribution that models a future stand are predicted, either directly or indirectly, from current stand density and dominant height. This study evaluated four methods of predicting the Weibull...
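Characterizing a diameter distribution with a Weibull function, as in this record, amounts to estimating shape and scale parameters from measured diameters. A minimal sketch with scipy, using synthetic diameters from a known Weibull (hypothetical parameter values, not the study's) so the fit can be checked:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic stand: diameters (cm) drawn from a known two-parameter
# Weibull so the fit can be compared against the truth.
true_shape, true_scale = 2.5, 20.0
diameters = stats.weibull_min.rvs(true_shape, scale=true_scale,
                                  size=5000, random_state=rng)

# Maximum-likelihood fit with the location fixed at zero, the usual
# setup in diameter-distribution work.
shape_hat, loc_hat, scale_hat = stats.weibull_min.fit(diameters, floc=0)
```

Prediction methods like those evaluated in the study would then regress `shape_hat` and `scale_hat` (or percentiles derived from them) on stand density and dominant height.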

  12. Functionally relevant climate variables for arid lands: A climatic water deficit approach for modelling desert shrub distributions

    Treesearch

    Thomas E. Dilts; Peter J. Weisberg; Camie M. Dencker; Jeanne C. Chambers

    2015-01-01

    We have three goals. (1) To develop a suite of functionally relevant climate variables for modelling vegetation distribution on arid and semi-arid landscapes of the Great Basin, USA. (2) To compare the predictive power of vegetation distribution models based on mechanistically proximate factors (water deficit variables) and factors that are more mechanistically removed...

  13. Equations for estimating loblolly pine branch and foliage weight and surface area distributions

    Treesearch

    V. Clark Baldwin; Kelly D. Peterson; Harold E. Burkhart; Ralph L. Amateis; Phillip M. Dougherty

    1996-01-01

    Equations to predict foliage weight and surface area, and their vertical and horizontal distributions, within the crowns of unthinned loblolly pine (Pinus taeda L.) trees are presented. A right-truncated Weibull function was used for describing vertical foliage distributions. This function ensures that all of the foliage located between the tree tip and the foliage base...

  14. Solutions to an advanced functional partial differential equation of the pantograph type

    PubMed Central

    Zaidi, Ali A.; Van Brunt, B.; Wake, G. C.

    2015-01-01

    A model for cells structured by size undergoing growth and division leads to an initial boundary value problem that involves a first-order linear partial differential equation with a functional term. Here, size can be interpreted as DNA content or mass. It has been observed experimentally and shown analytically that solutions for arbitrary initial cell distributions are asymptotic as time goes to infinity to a certain solution called the steady size distribution. The full solution to the problem for arbitrary initial distributions, however, is elusive owing to the presence of the functional term and the paucity of solution techniques for such problems. In this paper, we derive a solution to the problem for arbitrary initial cell distributions. The method employed exploits the hyperbolic character of the underlying differential operator, and the advanced nature of the functional argument to reduce the problem to a sequence of simple Cauchy problems. The existence of solutions for arbitrary initial distributions is established along with uniqueness. The asymptotic relationship with the steady size distribution is established, and because the solution is known explicitly, higher-order terms in the asymptotics can be readily obtained. PMID:26345391

  15. Solutions to an advanced functional partial differential equation of the pantograph type.

    PubMed

    Zaidi, Ali A; Van Brunt, B; Wake, G C

    2015-07-08

    A model for cells structured by size undergoing growth and division leads to an initial boundary value problem that involves a first-order linear partial differential equation with a functional term. Here, size can be interpreted as DNA content or mass. It has been observed experimentally and shown analytically that solutions for arbitrary initial cell distributions are asymptotic as time goes to infinity to a certain solution called the steady size distribution. The full solution to the problem for arbitrary initial distributions, however, is elusive owing to the presence of the functional term and the paucity of solution techniques for such problems. In this paper, we derive a solution to the problem for arbitrary initial cell distributions. The method employed exploits the hyperbolic character of the underlying differential operator, and the advanced nature of the functional argument to reduce the problem to a sequence of simple Cauchy problems. The existence of solutions for arbitrary initial distributions is established along with uniqueness. The asymptotic relationship with the steady size distribution is established, and because the solution is known explicitly, higher-order terms in the asymptotics can be readily obtained.

  16. A distributed planning concept for Space Station payload operations

    NASA Technical Reports Server (NTRS)

    Hagopian, Jeff; Maxwell, Theresa; Reed, Tracey

    1994-01-01

    The complex and diverse nature of the payload operations to be performed on the Space Station requires a robust and flexible planning approach. The planning approach for Space Station payload operations must support the phased development of the Space Station, as well as the geographically distributed users of the Space Station. To date, the planning approach for manned operations in space has been one of centralized planning to the n-th degree of detail. This approach, while valid for short duration flights, incurs high operations costs and is not conducive to long duration Space Station operations. The Space Station payload operations planning concept must reduce operations costs, accommodate phased station development, support distributed users, and provide flexibility. One way to meet these objectives is to distribute the planning functions across a hierarchy of payload planning organizations based on their particular needs and expertise. This paper presents a planning concept which satisfies all phases of the development of the Space Station (manned Shuttle flights, unmanned Station operations, and permanent manned operations), and the migration from centralized to distributed planning functions. Identified in this paper are the payload planning functions which can be distributed and the process by which these functions are performed.

  17. On the use of the KMR unintegrated parton distribution functions

    NASA Astrophysics Data System (ADS)

    Golec-Biernat, Krzysztof; Staśto, Anna M.

    2018-06-01

    We discuss the unintegrated parton distribution functions (UPDFs) introduced by Kimber, Martin and Ryskin (KMR), which are frequently used in phenomenological analyses of hard processes with transverse momenta of partons taken into account. We demonstrate numerically that the commonly used differential definition of the UPDFs leads to erroneous results for large transverse momenta. We identify the reason: the ordinary PDFs are used instead of the cutoff-dependent distribution functions. We show that in phenomenological applications, the integral definition of the UPDFs with the ordinary PDFs can be used.

  18. Research in Functionally Distributed Computer Systems Development. Volume III. Evaluation of Conversion to a Back-End Data Base Management System.

    DTIC Science & Technology

    1976-03-01

    Research in Functionally Distributed Computer Systems Development; P. S. Fisher and F. Maryanski; Kansas State University; Virgil Wallentine, Principal Investigator; U.S. Army Computer Systems Command, Ft. Belvoir, VA. Approved for public release; distribution unlimited.

  19. An effective inversion algorithm for retrieving bimodal aerosol particle size distribution from spectral extinction data

    NASA Astrophysics Data System (ADS)

    He, Zhenzong; Qi, Hong; Yao, Yuchen; Ruan, Liming

    2014-12-01

    The Ant Colony Optimization algorithm based on the probability density function (PDF-ACO) is applied to estimate the bimodal aerosol particle size distribution (PSD). The direct problem is solved by the modified Anomalous Diffraction Approximation (ADA, an approximation for optically large and soft spheres, i.e., χ ≫ 1 and |m − 1| ≪ 1) and the Beer-Lambert law. First, a popular bimodal aerosol PSD and three other bimodal PSDs are retrieved in the dependent model by the multi-wavelength extinction technique. All the results reveal that the PDF-ACO algorithm can be used as an effective technique to investigate the bimodal PSD. Then, the Johnson's SB (J-SB) function and the modified beta (M-β) function are employed as general distribution functions to retrieve the bimodal PSDs under the independent model. Finally, the J-SB and M-β functions are applied to recover actual measured aerosol PSDs over Beijing and Shanghai obtained from the Aerosol Robotic Network (AERONET). The numerical simulation and experimental results demonstrate that these two general functions, especially the J-SB function, can be used as versatile distribution functions to retrieve the bimodal aerosol PSD when no a priori information about the PSD is available.
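The Johnson S_B family used here as a general distribution function is available as `scipy.stats.johnsonsb`. A hedged sketch of building a bimodal PSD from two S_B components; the component parameters and weights are illustrative, not the retrieved AERONET values:

```python
import numpy as np
from scipy import stats

# A bimodal particle-size distribution assembled from two Johnson S_B
# components; all parameters are illustrative placeholders.
r = np.linspace(0.01, 9.99, 2000)                  # radius grid (um)
dr = r[1] - r[0]

fine = stats.johnsonsb(a=1.0, b=1.5, loc=0.0, scale=2.0)     # fine mode
coarse = stats.johnsonsb(a=-0.5, b=2.0, loc=1.0, scale=9.0)  # coarse mode
w = 0.6                                            # fine-mode weight

# Mixture density; a retrieval would adjust the S_B parameters until the
# predicted spectral extinction matches the measurements.
psd = w * fine.pdf(r) + (1.0 - w) * coarse.pdf(r)
area = float(np.sum(psd) * dr)                     # should be close to 1
```

The S_B distribution has bounded support (loc, loc + scale), which is convenient for particle sizes confined to a finite radius range.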

  20. TMDlib and TMDplotter: library and plotting tools for transverse-momentum-dependent parton distributions

    NASA Astrophysics Data System (ADS)

    Hautmann, F.; Jung, H.; Krämer, M.; Mulders, P. J.; Nocera, E. R.; Rogers, T. C.; Signori, A.

    2014-12-01

    Transverse-momentum-dependent distributions (TMDs) are extensions of collinear parton distributions and are important in high-energy physics from both theoretical and phenomenological points of view. In this manual we introduce the library TMDlib, a tool to collect transverse-momentum-dependent parton distribution functions (TMD PDFs) and fragmentation functions (TMD FFs) together with an online plotting tool, TMDplotter. We provide a description of the program components and of the different physical frameworks the user can access via the available parameterisations.

  1. TMDlib and TMDplotter: library and plotting tools for transverse-momentum-dependent parton distributions.

    PubMed

    Hautmann, F; Jung, H; Krämer, M; Mulders, P J; Nocera, E R; Rogers, T C; Signori, A

    Transverse-momentum-dependent distributions (TMDs) are extensions of collinear parton distributions and are important in high-energy physics from both theoretical and phenomenological points of view. In this manual we introduce the library TMDlib, a tool to collect transverse-momentum-dependent parton distribution functions (TMD PDFs) and fragmentation functions (TMD FFs) together with an online plotting tool, TMDplotter. We provide a description of the program components and of the different physical frameworks the user can access via the available parameterisations.

  2. Measurement of the Spatial Distribution of the Spectral Response Variation in the Field of View of the ASD Spectrometer Input Optics

    DTIC Science & Technology

    2014-12-01

    development. It will be used for the measurement of the spectro-polarimetric BRDF (Bidirectional Reflectance Distribution Function). For practical reasons... a goniometer is under development; it will be used for spectral BRDF (bidirectional reflectance distribution function) measurements and... by the independent measurements of the spectral and Bidirectional Reflectance Distribution Function (BRDF). The BRDF is the measurement of the

  3. Effects of dust size distribution on dust acoustic waves in two-dimensional unmagnetized dusty plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He Guangjun; Duan Wenshan; Tian Duoxiang

    2008-04-15

    For unmagnetized dusty plasma with many different dust grain species containing both hot isothermal electrons and ions, both the linear dispersion relation and the Kadomtsev-Petviashvili equation for small, but finite amplitude dust acoustic waves are obtained. The linear dispersion relation is investigated numerically. Furthermore, the variations of amplitude, width, and propagation velocity of the nonlinear solitary wave with an arbitrary dust size distribution function are studied as well. Moreover, both the power law distribution and the Gaussian distribution are approximately simulated by using appropriate arbitrary dust size distribution functions.

  4. Digital simulation of two-dimensional random fields with arbitrary power spectra and non-Gaussian probability distribution functions.

    PubMed

    Yura, Harold T; Hanson, Steen G

    2012-04-01

    Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
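The two-step method this abstract describes (color white Gaussian noise to the target spectrum, then apply a pointwise transform to the target marginal) can be sketched as follows; the power-law spectrum and exponential marginal are illustrative choices, not the paper's examples:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 256

# Step 1: color white Gaussian noise in the Fourier domain with a
# power-law spectrum (amplitude filter sqrt(PSD) ~ 1/k).
white = rng.standard_normal((n, n))
freqs = np.fft.fftfreq(n)
k = np.sqrt(freqs[:, None]**2 + freqs[None, :]**2)
k[0, 0] = k[0, 1]                    # avoid division by zero at DC
field = np.real(np.fft.ifft2(np.fft.fft2(white) / k))

# Step 2: memoryless transform of the Gaussian marginal to a desired
# non-Gaussian one (here exponential) via the probability integral
# transform: Gaussian CDF -> uniform -> target inverse CDF.
z = (field - field.mean()) / field.std()
u = stats.norm.cdf(z)                # approximately uniform marginal
exp_field = stats.expon.ppf(u)       # exponential marginal
```

The pointwise transform in step 2 changes the amplitude statistics while largely preserving the spatial correlation structure imposed in step 1, which is the sense in which the abstract calls the method an engineering approach.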

  5. Age distribution patterns of human gene families: divergent for Gene Ontology categories and concordant between different subcellular localizations.

    PubMed

    Liu, Gangbiao; Zou, Yangyun; Cheng, Qiqun; Zeng, Yanwu; Gu, Xun; Su, Zhixi

    2014-04-01

    The age distribution of gene duplication events within the human genome exhibits two waves of duplications along with an ancient component. However, because of functional constraint differences, genes in different functional categories might show dissimilar retention patterns after duplication. It is known that genes in some functional categories are highly duplicated in the early stage of vertebrate evolution. However, the correlations of the age distribution pattern of gene duplication between the different functional categories are still unknown. To investigate this issue, we developed a robust pipeline to date the gene duplication events in the human genome. We successfully estimated about three-quarters of the duplication events within the human genome, along with the age distribution pattern in each Gene Ontology (GO) slim category. We found that some GO slim categories show different distribution patterns when compared to the whole genome. Further hierarchical clustering of the GO slim functional categories enabled grouping into two main clusters. We found that human genes located in the duplicated copy number variant regions, whose duplicate genes have not been fixed in the human population, were mainly enriched in the groups with a high proportion of recently duplicated genes. Moreover, we used a phylogenetic tree-based method to date the age of duplications in three signaling-related gene superfamilies: transcription factors, protein kinases and G-protein coupled receptors. These superfamilies were expressed in different subcellular localizations. They showed a similar age distribution as the signaling-related GO slim categories. We also compared the differences between the age distributions of gene duplications in multiple subcellular localizations. We found that the distribution patterns of the major subcellular localizations were similar to that of the whole genome. This study revealed the whole picture of the evolution patterns of gene functional categories in the human genome.

  6. Functional Bregman Divergence and Bayesian Estimation of Distributions (Preprint)

    DTIC Science & Technology

    2008-01-01

    shows that if the set of possible minimizers A includes E_{P_F}[F], then g* = E_{P_F}[F] minimizes the expectation of any Bregman divergence. Note the theorem... probability distribution P_F defined over the set M. Let A be a set of functions that includes E_{P_F}[F] if it exists. Suppose the function g* minimizes the expected Bregman divergence between the random function F and any function g ∈ A, such that g* = arg inf_{g ∈ A} E_{P_F}[d_φ(F, g)]. Then, if g* exists
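The quoted theorem can be checked numerically in the simplest case: the squared Euclidean distance (x − g)² is the Bregman divergence generated by φ(t) = t², and its expectation is minimized at the mean E_{P_F}[F]. A minimal scalar sketch, with an arbitrary sampling distribution standing in for P_F:

```python
import numpy as np

rng = np.random.default_rng(3)
samples = rng.gamma(shape=2.0, scale=1.5, size=100_000)  # stand-in for P_F

def expected_sq_bregman(g, x):
    """Squared Euclidean distance (x - g)^2 is the Bregman divergence
    generated by phi(t) = t^2; return its sample expectation."""
    return float(np.mean((x - g) ** 2))

# Scan candidate minimizers on a grid; by the theorem, the winner should
# be (up to grid resolution) the sample mean E[F].
candidates = np.linspace(0.0, 10.0, 1001)
losses = [expected_sq_bregman(g, samples) for g in candidates]
best = float(candidates[int(np.argmin(losses))])
mean = float(samples.mean())
```

The same check with any other Bregman generator (e.g. φ(t) = t log t, giving a KL-type divergence) would again locate the minimizer at the mean, which is the content of the theorem.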

  7. Quality parameters analysis of optical imaging systems with enhanced focal depth using the Wigner distribution function

    PubMed

    Zalvidea; Colautti; Sicre

    2000-05-01

    An analysis of the Strehl ratio and the optical transfer function as imaging quality parameters of optical elements with enhanced focal length is carried out by employing the Wigner distribution function. To this end, we use four different pupil functions: a full circular aperture, a hyper-Gaussian aperture, a quartic phase plate, and a logarithmic phase mask. A comparison is performed between the quality parameters and test images formed by these pupil functions at different defocus distances.

  8. Models of violently relaxed galaxies

    NASA Astrophysics Data System (ADS)

    Merritt, David; Tremaine, Scott; Johnstone, Doug

    1989-02-01

    The properties of spherical self-gravitating models derived from two distribution functions that incorporate, in a crude way, the physics of violent relaxation are investigated. The first distribution function is identical to the one discussed by Stiavelli and Bertin (1985) except for a change in the sign of the 'temperature', i.e., exp(-aE) to exp(+aE). It is shown that these 'negative temperature' models provide a much better description of the end-state of violent relaxation than 'positive temperature' models. The second distribution function is similar to the first except for a different dependence on angular momentum. Both distribution functions yield single-parameter families of models with surface density profiles very similar to the R exp 1/4 law. Furthermore, the central concentration of models in both families increases monotonically with the velocity anisotropy, as expected in systems that formed through cold collapse.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dumitru, Adrian; Skokov, Vladimir

    The conventional and linearly polarized Weizsäcker-Williams gluon distributions at small x are defined from the two-point function of the gluon field in light-cone gauge. They appear in the cross section for dijet production in deep inelastic scattering at high energy. We determine these functions in the small-x limit from solutions of the JIMWLK evolution equations and show that they exhibit approximate geometric scaling. Also, we discuss the functional distributions of these WW gluon distributions over the JIMWLK ensemble at rapidity Y ~ 1/αs. These are determined by a 2d Liouville action for the logarithm of the covariant gauge function g^2 tr A^+(q)A^+(-q). For transverse momenta on the order of the saturation scale we observe large variations across configurations (evolution trajectories) of the linearly polarized distribution, up to several times its average and even to negative values.

  10. Reconstruction of the domain orientation distribution function of polycrystalline PZT ceramics using vector piezoresponse force microscopy.

    PubMed

    Kratzer, Markus; Lasnik, Michael; Röhrig, Sören; Teichert, Christian; Deluca, Marco

    2018-01-11

    Lead zirconate titanate (PZT) is one of the prominent materials used in polycrystalline piezoelectric devices. Since the ferroelectric domain orientation is the most important parameter affecting the electromechanical performance, analyzing the domain orientation distribution is of great importance for the development and understanding of improved piezoceramic devices. Here, vector piezoresponse force microscopy (vector-PFM) has been applied in order to reconstruct the ferroelectric domain orientation distribution function of polished sections of device-ready polycrystalline lead zirconate titanate (PZT) material. A measurement procedure and a computer program based on the software Mathematica have been developed to automatically evaluate the vector-PFM data for reconstructing the domain orientation function. The method is tested on differently in-plane and out-of-plane poled PZT samples, and the results reveal the expected domain patterns and allow determination of the polarization orientation distribution function with high accuracy.

  11. Parameter estimation techniques based on optimizing goodness-of-fit statistics for structural reliability

    NASA Technical Reports Server (NTRS)

    Starlinger, Alois; Duffy, Stephen F.; Palko, Joseph L.

    1993-01-01

    New methods are presented that utilize the optimization of goodness-of-fit statistics in order to estimate Weibull parameters from failure data. It is assumed that the underlying population is characterized by a three-parameter Weibull distribution. Goodness-of-fit tests are based on the empirical distribution function (EDF). The EDF is a step function, calculated using failure data, and represents an approximation of the cumulative distribution function for the underlying population. Statistics (such as the Kolmogorov-Smirnov statistic and the Anderson-Darling statistic) measure the discrepancy between the EDF and the cumulative distribution function (CDF). These statistics are minimized with respect to the three Weibull parameters. Due to nonlinearities encountered in the minimization process, Powell's numerical optimization procedure is applied to obtain the optimum value of the EDF. Numerical examples show the applicability of these new estimation methods. The results are compared to the estimates obtained with Cooper's nonlinear regression algorithm.

  12. A Hermite-based lattice Boltzmann model with artificial viscosity for compressible viscous flows

    NASA Astrophysics Data System (ADS)

    Qiu, Ruofan; Chen, Rongqian; Zhu, Chenxiang; You, Yancheng

    2018-05-01

    A lattice Boltzmann model on Hermite basis for compressible viscous flows is presented in this paper. The model is developed in the framework of double-distribution-function approach, which has adjustable specific-heat ratio and Prandtl number. It contains a density distribution function for the flow field and a total energy distribution function for the temperature field. The equilibrium distribution function is determined by Hermite expansion, and the D3Q27 and D3Q39 three-dimensional (3D) discrete velocity models are used, in which the discrete velocity model can be replaced easily. Moreover, an artificial viscosity is introduced to enhance the model for capturing shock waves. The model is tested through several cases of compressible flows, including 3D supersonic viscous flows with boundary layer. The effect of artificial viscosity is estimated. Besides, D3Q27 and D3Q39 models are further compared in the present platform.

  13. A quark model analysis of the transversity distribution

    NASA Astrophysics Data System (ADS)

    Scopetta, Sergio; Vento, Vicente

    1998-04-01

    The feasibility of measuring chiral-odd parton distribution functions in polarized Drell-Yan and semi-inclusive experiments has renewed theoretical interest in their study. Models of hadron structure have proven successful in describing the gross features of the chiral-even structure functions. Similar expectations motivated our study of the transversity parton distributions in the Isgur-Karl and MIT bag models. We confirm, by performing a NLO calculation, the diverse low x behaviors of the transversity and spin structure functions at the experimental scale and show that it is fundamentally a consequence of the different behaviors under evolution of these functions. The inequalities of Soffer establish constraints between data and model calculations of the chiral-odd transversity function. The approximate compatibility of our model calculations with these constraints confers credibility to our estimates.

  14. The inclusion of capillary distribution in the adiabatic tissue homogeneity model of blood flow

    NASA Astrophysics Data System (ADS)

    Koh, T. S.; Zeman, V.; Darko, J.; Lee, T.-Y.; Milosevic, M. F.; Haider, M.; Warde, P.; Yeung, I. W. T.

    2001-05-01

    We have developed a non-invasive imaging tracer kinetic model for blood flow which takes into account the distribution of capillaries in tissue. Each individual capillary is assumed to follow the adiabatic tissue homogeneity model. The main strength of our new model is in its ability to quantify the functional distribution of capillaries by the standard deviation in the time taken by blood to pass through the tissue. We have applied our model to the human prostate and have tested two different types of distribution functions. Both distribution functions yielded very similar predictions for the various model parameters, and in particular for the standard deviation in transit time. Our motivation for developing this model is the fact that the capillary distribution in cancerous tissue is drastically different from in normal tissue. We believe that there is great potential for our model to be used as a prognostic tool in cancer treatment. For example, an accurate knowledge of the distribution in transit times might result in an accurate estimate of the degree of tumour hypoxia, which is crucial to the success of radiation therapy.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brodsky, Stanley J.

    Light-Front Quantization - Dirac's "Front Form" - provides a physical, frame-independent formalism for hadron dynamics and structure. Observables such as structure functions, transverse momentum distributions, and distribution amplitudes are defined from the hadronic LFWFs. One obtains new insights into the hadronic mass scale, the hadronic spectrum, and the functional form of the QCD running coupling in the nonperturbative domain using light-front holography. In addition, superconformal algebra leads to remarkable supersymmetric relations between mesons and baryons. I also discuss evidence that the antishadowing of nuclear structure functions is nonuniversal, i.e., flavor dependent, and why shadowing and antishadowing phenomena may be incompatible with the momentum and other sum rules for the nuclear parton distribution functions.

  16. Quasi-parton distribution functions, momentum distributions, and pseudo-parton distribution functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Radyushkin, Anatoly V.

    Here, we show that quasi-PDFs may be treated as hybrids of PDFs and primordial rest-frame momentum distributions of partons. This results in a complicated convolution nature of quasi-PDFs that necessitates using large p3 ≳ 3 GeV momenta to get reasonably close to the PDF limit. Furthermore, as an alternative approach, we propose to use pseudo-PDFs P(x, z3^2) that generalize the light-front PDFs onto spacelike intervals and are related to Ioffe-time distributions M(ν, z3^2), functions of the Ioffe time ν = p3 z3 and the distance parameter z3^2, with respect to which they display perturbative evolution for small z3. In this form, one may divide out the z3^2 dependence coming from the primordial rest-frame distribution and from the problematic factor due to lattice renormalization of the gauge link. The ν-dependence remains intact and determines the shape of PDFs.

  17. An extension of the Laplace transform to Schwartz distributions

    NASA Technical Reports Server (NTRS)

    Price, D. R.

    1974-01-01

    A characterization of the Laplace transform is developed which extends the transform to the Schwartz distributions. The class of distributions includes the impulse functions and other singular functions which occur as solutions to ordinary and partial differential equations. The standard theorems on analyticity, uniqueness, and invertibility of the transform are proved by using the characterization as the definition of the Laplace transform. The definition uses sequences of linear transformations on the space of distributions which extends the Laplace transform to another class of generalized functions, the Mikusinski operators. It is shown that the sequential definition of the transform is equivalent to Schwartz' extension of the ordinary Laplace transform to distributions but, in contrast to Schwartz' definition, does not use the distributional Fourier transform. Several theorems concerning the particular linear transformations used to define the Laplace transforms are proved. All the results proved in one dimension are extended to the n-dimensional case, but proofs are presented only for those situations that require methods different from their one-dimensional analogs.

  18. Elastic field of a spherical inclusion with non-uniform eigenfields in second strain gradient elasticity

    NASA Astrophysics Data System (ADS)

    Delfani, M. R.; Latifi Shahandashti, M.

    2017-09-01

    In this paper, within the complete form of Mindlin's second strain gradient theory, the elastic field of an isolated spherical inclusion embedded in an infinitely extended homogeneous isotropic medium due to a non-uniform distribution of eigenfields is determined. These eigenfields, in addition to eigenstrain, comprise eigen double and eigen triple strains. After the derivation of a closed-form expression for Green's function associated with the problem, two different cases of non-uniform distribution of the eigenfields are considered as follows: (i) radial distribution, i.e. the distributions of the eigenfields are functions of only the radial distance of points from the centre of inclusion, and (ii) polynomial distribution, i.e. the distributions of the eigenfields are polynomial functions in the Cartesian coordinates of points. While the obtained solution for the elastic field of the latter case takes the form of an infinite series, the solution to the former case is represented in a closed form. Moreover, Eshelby's tensors associated with the two mentioned cases are obtained.

  19. Relaxation of ferroelectric states in 2D distributions of quantum dots: EELS simulation

    NASA Astrophysics Data System (ADS)

    Cortés, C. M.; Meza-Montes, L.; Moctezuma, R. E.; Carrillo, J. L.

    2016-06-01

The relaxation time of collective electronic states in a 2D distribution of quantum dots is investigated theoretically by simulating EELS experiments. From the numerical calculation of the probability of energy loss of an electron beam traveling parallel to the distribution, it is possible to estimate the damping time of ferroelectric-like states. We generate this collective response of the distribution by introducing a mean-field interaction among the quantum dots, and the model is then extended to incorporate effects of long-range correlations through a Bragg-Williams approximation. The behavior of the dielectric function, the energy loss function, and the relaxation time of ferroelectric-like states is then investigated as a function of the temperature of the distribution and the damping constant of the electronic states in the single quantum dots. The robustness of the observed trends indicates that this scheme of analysis can guide experimentalists in developing tailored quantum dot distributions for specific applications.

  20. Quasi-parton distribution functions, momentum distributions, and pseudo-parton distribution functions

    DOE PAGES

    Radyushkin, Anatoly V.

    2017-08-28

Here, we show that quasi-PDFs may be treated as hybrids of PDFs and primordial rest-frame momentum distributions of partons. This results in a complicated convolution nature of quasi-PDFs that necessitates using large momenta p3 ≳ 3 GeV to get reasonably close to the PDF limit. Furthermore, as an alternative approach, we propose to use pseudo-PDFs P(x, z3²) that generalize the light-front PDFs onto spacelike intervals and are related to Ioffe-time distributions M(ν, z3²), functions of the Ioffe time ν = p3z3 and the distance parameter z3², with respect to which they display perturbative evolution for small z3. In this form, one may divide out the z3² dependence coming from the primordial rest-frame distribution and from the problematic factor due to lattice renormalization of the gauge link. The ν-dependence remains intact and determines the shape of PDFs.

  1. Estimating sales and sales market share from sales rank data for consumer appliances

    NASA Astrophysics Data System (ADS)

    Touzani, Samir; Van Buskirk, Robert

    2016-06-01

Our motivation in this work is to find an adequate probability distribution to fit the sales volumes of different appliances. This distribution allows for the translation of sales rank into sales volume. This paper shows that the log-normal distribution, and specifically its truncated version, is well suited for this purpose. We demonstrate that sales proxies derived from a calibrated truncated log-normal distribution function can produce realistic estimates of market-average product prices and product attributes. We show that the market averages calculated with sales proxies derived from the calibrated, truncated log-normal distribution provide better market-average estimates than sales proxies estimated with simpler distribution functions.
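The rank-to-volume translation described above can be sketched as a quantile inversion: if sales volumes across N ranked products follow a log-normal law, the product at rank r sits near the (1 - r/(N+1)) quantile. This is a minimal sketch under that assumption; the parameters mu and sigma are illustrative placeholders, not values calibrated in the paper, and the authors' truncation of the distribution is omitted for brevity.

```python
import math
from statistics import NormalDist

def sales_from_rank(rank, n_products, mu, sigma):
    """Estimate sales volume from sales rank by inverting a log-normal CDF.

    The product at rank r is placed at the (1 - r/(N+1)) quantile of the
    assumed log-normal sales distribution; mu and sigma parameterize the
    underlying normal on the log scale (illustrative values only).
    """
    p = 1.0 - rank / (n_products + 1)     # plotting-position quantile level
    z = NormalDist().inv_cdf(p)           # standard normal quantile
    return math.exp(mu + sigma * z)       # log-normal quantile = sales proxy

# The median-ranked product recovers the log-normal median exp(mu):
mid = sales_from_rank(500, 999, 5.0, 1.2)
```

Sales proxies produced this way decrease monotonically with rank, which is the property exploited when weighting market averages by estimated sales.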

  2. Constraining the double gluon distribution by the single gluon distribution

    DOE PAGES

    Golec-Biernat, Krzysztof; Lewandowska, Emilia; Serino, Mirko; ...

    2015-10-03

We show how to consistently construct initial conditions for the QCD evolution equations for double parton distribution functions in the pure gluon case. We use the momentum sum rule for this purpose and a specific form of the known single gluon distribution function in the MSTW parameterization. The resulting double gluon distribution satisfies the momentum sum rule exactly and is parameter free. Furthermore, we study numerically its evolution with a hard scale and show the approximate factorization into a product of two single gluon distributions at small values of x, whereas at large values of x the factorization is always violated, in agreement with the sum rule.

  3. Kappa and other nonequilibrium distributions from the Fokker-Planck equation and the relationship to Tsallis entropy.

    PubMed

    Shizgal, Bernie D

    2018-05-01

This paper considers two nonequilibrium model systems described by linear Fokker-Planck equations for the time-dependent velocity distribution functions that yield steady-state Kappa distributions for specific system parameters. The first system describes the time evolution of a charged test particle in a constant-temperature heat bath of a second charged particle. The time dependence of the distribution function of the test particle is given by a Fokker-Planck equation with drift and diffusion coefficients for Coulomb collisions as well as a diffusion coefficient for wave-particle interactions. A second system involves the Fokker-Planck equation for electrons dilutely dispersed in a constant-temperature heat bath of atoms or ions and subject to an external time-independent uniform electric field. The momentum transfer cross section for collisions between the two components is assumed to be a power law in reduced speed. The time-dependent Fokker-Planck equations for both model systems are solved with a numerical finite difference method, and the approach to equilibrium is rationalized with the Kullback-Leibler relative entropy. For particular choices of the system parameters for both models, the steady distribution is found to be a Kappa distribution. Kappa distributions were introduced as an empirical fitting function that well describes the nonequilibrium features of the distribution functions of electrons and ions in space science as measured by satellite instruments. The calculation of the Kappa distribution from the Fokker-Planck equations provides a direct, physically based dynamical approach in contrast to the nonextensive entropy formalism of Tsallis [J. Stat. Phys. 53, 479 (1988), 10.1007/BF01016429].
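For reference, a commonly used standard form of the isotropic 3-D Kappa velocity distribution, and the Maxwellian it approaches as κ → ∞, can be sketched as follows. This is the textbook family, not the paper's specific Fokker-Planck solution:

```python
import math

def kappa_dist(v, theta, kappa):
    """Isotropic 3-D Kappa velocity distribution, normalized to unity:

    f(v) = [Gamma(kappa+1) / (Gamma(kappa-1/2) (pi kappa)^{3/2} theta^3)]
           * (1 + v^2 / (kappa theta^2))^{-(kappa+1)}

    Log-gamma is used so that large kappa does not overflow.
    """
    log_norm = (math.lgamma(kappa + 1.0) - math.lgamma(kappa - 0.5)
                - 1.5 * math.log(math.pi * kappa) - 3.0 * math.log(theta))
    return math.exp(log_norm) * (1.0 + v * v / (kappa * theta * theta)) ** (-(kappa + 1.0))

def maxwellian(v, theta):
    """Maxwellian limit recovered from kappa_dist as kappa -> infinity."""
    return math.exp(-v * v / (theta * theta)) / (math.pi ** 1.5 * theta ** 3)
```

At large κ the two agree closely, while at small κ (e.g. κ = 3) the Kappa form carries the power-law suprathermal tail seen in satellite measurements.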

  4. Kappa and other nonequilibrium distributions from the Fokker-Planck equation and the relationship to Tsallis entropy

    NASA Astrophysics Data System (ADS)

    Shizgal, Bernie D.

    2018-05-01

This paper considers two nonequilibrium model systems described by linear Fokker-Planck equations for the time-dependent velocity distribution functions that yield steady-state Kappa distributions for specific system parameters. The first system describes the time evolution of a charged test particle in a constant-temperature heat bath of a second charged particle. The time dependence of the distribution function of the test particle is given by a Fokker-Planck equation with drift and diffusion coefficients for Coulomb collisions as well as a diffusion coefficient for wave-particle interactions. A second system involves the Fokker-Planck equation for electrons dilutely dispersed in a constant-temperature heat bath of atoms or ions and subject to an external time-independent uniform electric field. The momentum transfer cross section for collisions between the two components is assumed to be a power law in reduced speed. The time-dependent Fokker-Planck equations for both model systems are solved with a numerical finite difference method, and the approach to equilibrium is rationalized with the Kullback-Leibler relative entropy. For particular choices of the system parameters for both models, the steady distribution is found to be a Kappa distribution. Kappa distributions were introduced as an empirical fitting function that well describes the nonequilibrium features of the distribution functions of electrons and ions in space science as measured by satellite instruments. The calculation of the Kappa distribution from the Fokker-Planck equations provides a direct, physically based dynamical approach in contrast to the nonextensive entropy formalism of Tsallis [J. Stat. Phys. 53, 479 (1988), 10.1007/BF01016429].

  5. Non-Gaussian Distributions Affect Identification of Expression Patterns, Functional Annotation, and Prospective Classification in Human Cancer Genomes

    PubMed Central

    Marko, Nicholas F.; Weil, Robert J.

    2012-01-01

Introduction Gene expression data are often assumed to be normally distributed, but this assumption has not been tested rigorously. We investigate the distribution of expression data in human cancer genomes and study the implications of deviations from the normal distribution for translational molecular oncology research. Methods We conducted a central moments analysis of five cancer genomes and performed empiric distribution fitting to examine the true distribution of expression data both on the complete-experiment and on the individual-gene levels. We used a variety of parametric and nonparametric methods to test the effects of deviations from normality on gene calling, functional annotation, and prospective molecular classification using a sixth cancer genome. Results Central moments analyses reveal statistically significant deviations from normality in all of the analyzed cancer genomes. We observe as much as 37% variability in gene calling, 39% variability in functional annotation, and 30% variability in prospective, molecular tumor subclassification associated with this effect. Conclusions Cancer gene expression profiles are not normally distributed, either on the complete-experiment or on the individual-gene level. Instead, they exhibit complex, heavy-tailed distributions characterized by statistically significant skewness and kurtosis. The non-Gaussian distribution of this data affects identification of differentially expressed genes, functional annotation, and prospective molecular classification. These effects may be reduced in some circumstances, although not completely eliminated, by using nonparametric analytics. This analysis highlights two unreliable assumptions of translational cancer gene expression analysis: that “small” departures from normality in the expression data distributions are analytically insignificant and that “robust” gene-calling algorithms can fully compensate for these effects. PMID:23118863

  6. Kinetic analysis of spin current contribution to spectrum of electromagnetic waves in spin-1/2 plasma. I. Dielectric permeability tensor for magnetized plasmas

    NASA Astrophysics Data System (ADS)

    Andreev, Pavel A.

    2017-02-01

The dielectric permeability tensor for spin-polarized plasmas is derived in terms of the spin-1/2 quantum kinetic model in six-dimensional phase space. Expressions for the distribution function and the spin distribution function are derived in the linear approximation in the course of the dielectric permeability tensor derivation. The dielectric permeability tensor is derived for the spin-polarized degenerate electron gas. It is also discussed in the finite-temperature regime, where the equilibrium distribution function is given by the spin-polarized Fermi-Dirac distribution. Consideration of spin-polarized equilibrium states opens possibilities for kinetic modeling of the thermal spin current contribution to plasma dynamics.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Le, Hai P.; Cambier, Jean -Luc

Here, we present a numerical model and a set of conservative algorithms for non-Maxwellian plasma kinetics with inelastic collisions. These algorithms self-consistently solve for the time evolution of an isotropic electron energy distribution function interacting with an atomic state distribution function of an arbitrary number of levels through collisional excitation and deexcitation, as well as ionization and recombination. Electron-electron collisions, responsible for thermalization of the electron distribution, are also included in the model. The proposed algorithms guarantee mass/charge and energy conservation in a single step, and are applied to the case of non-uniform gridding of the energy axis in the phase space of the electron distribution function. Numerical test cases are shown to demonstrate the accuracy of the method and its conservation properties.

  8. A Concept for Measuring Electron Distribution Functions Using Collective Thomson Scattering

    NASA Astrophysics Data System (ADS)

    Milder, A. L.; Froula, D. H.

    2017-10-01

A.B. Langdon proposed that stable non-Maxwellian distribution functions are realized in coronal inertial confinement fusion plasmas via inverse bremsstrahlung heating. For Zvosc²/vth² > 1, the inverse bremsstrahlung heating rate is sufficiently fast to compete with electron-electron collisions. This process preferentially heats the subthermal electrons, leading to super-Gaussian distribution functions. A method to identify the super-Gaussian order of the distribution functions in these plasmas using collective Thomson scattering will be proposed. By measuring the collective Thomson spectra over a range of angles, the density, temperature, and super-Gaussian order can be determined. This is accomplished by fitting non-Maxwellian distribution data with a super-Gaussian model; in order to match the density and electron temperature to within 10%, the super-Gaussian order must be varied. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944.
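The super-Gaussian family referred to above is conventionally written f_m(v) ∝ exp(-(v/v0)^m), with m = 2 recovering a Maxwellian and m approaching 5 in the strong inverse-bremsstrahlung-heating limit. A minimal normalized sketch of this standard form (not the authors' fitting code):

```python
import math

def super_gaussian(v, v0, m):
    """Normalized 3-D super-Gaussian speed distribution
    f(v) = C exp(-(v/v0)^m), with C = m / (4 pi v0^3 Gamma(3/m))
    so that the integral of 4 pi v^2 f(v) dv over v >= 0 is one.
    m is the super-Gaussian order (m = 2 gives a Maxwellian).
    """
    norm = m / (4.0 * math.pi * v0 ** 3 * math.gamma(3.0 / m))
    return norm * math.exp(-((v / v0) ** m))
```

Fitting Thomson-scattering spectra then amounts to varying m, alongside density and temperature, until the synthetic spectrum matches the measured one.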

  9. Probe measurements of the electron velocity distribution function in beams: Low-voltage beam discharge in helium

    NASA Astrophysics Data System (ADS)

    Sukhomlinov, V.; Mustafaev, A.; Timofeev, N.

    2018-04-01

    Previously developed methods based on the single-sided probe technique are altered and applied to measure the anisotropic angular spread and narrow energy distribution functions of charged particle (electron and ion) beams. The conventional method is not suitable for some configurations, such as low-voltage beam discharges, electron beams accelerated in near-wall and near-electrode layers, and vacuum electron beam sources. To determine the range of applicability of the proposed method, simple algebraic relationships between the charged particle energies and their angular distribution are obtained. The method is verified for the case of the collisionless mode of a low-voltage He beam discharge, where the traditional method for finding the electron distribution function with the help of a Legendre polynomial expansion is not applicable. This leads to the development of a physical model of the formation of the electron distribution function in a collisionless low-voltage He beam discharge. The results of a numerical calculation based on Monte Carlo simulations are in good agreement with the experimental data obtained using the new method.

  10. A Model Based on Environmental Factors for Diameter Distribution in Black Wattle in Brazil

    PubMed Central

    Sanquetta, Carlos Roberto; Behling, Alexandre; Dalla Corte, Ana Paula; Péllico Netto, Sylvio; Rodrigues, Aurelio Lourenço; Simon, Augusto Arlindo

    2014-01-01

This article discusses the dynamics of the diameter distribution in stands of black wattle throughout its growth cycle using the Weibull probability density function. Moreover, the parameters of this distribution were related to environmental variables from meteorological data and the surface soil horizon, with the aim of finding a model for the diameter distribution whose coefficients are related to the environmental variables. We found that the diameter distribution of the stand changes only slightly over time and that the estimators of the Weibull function are correlated with various environmental variables, with accumulated rainfall foremost among them. Thus, a model was obtained in which the estimators of the Weibull function are dependent on rainfall. Such a function can have important applications, such as simulating growth potential in regions where historical growth data is lacking, as well as the behavior of the stand under different environmental conditions. The model can also be used to project growth in diameter based on the rainfall affecting the forest over a certain time period. PMID:24932909
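The model above rests on the two-parameter Weibull density with parameters driven by rainfall. The sketch below shows that structure; the linear rainfall link and its coefficients are made-up illustrative placeholders, not the fitted values from the article:

```python
import math

def weibull_pdf(d, shape, scale):
    """Two-parameter Weibull density for stem diameter d:
    f(d) = (shape/scale) * (d/scale)^(shape-1) * exp(-(d/scale)^shape)."""
    if d <= 0.0:
        return 0.0
    z = d / scale
    return (shape / scale) * z ** (shape - 1.0) * math.exp(-(z ** shape))

def rainfall_driven_scale(rainfall_mm, a=8.0, b=0.004):
    """Hypothetical linear link between accumulated rainfall and the
    Weibull scale parameter; a and b are illustrative values only."""
    return a + b * rainfall_mm
```

With such a link, projecting the stand's diameter distribution under a rainfall scenario reduces to evaluating weibull_pdf with the rainfall-driven scale.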

  11. Red cell distribution width does not predict stroke severity or functional outcome.

    PubMed

    Ntaios, George; Gurer, Ozgur; Faouzi, Mohamed; Aubert, Carole; Michel, Patrik

    2012-01-01

Red cell distribution width was recently identified as a predictor of cardiovascular and all-cause mortality in patients with previous stroke. Red cell distribution width is also higher in patients with stroke compared with those without. However, there are no data on the association of red cell distribution width, assessed during the acute phase of ischemic stroke, with stroke severity and functional outcome. In the present study, we sought to investigate this relationship and ascertain the main determinants of red cell distribution width in this population. We used data from the Acute Stroke Registry and Analysis of Lausanne for patients between January 2003 and December 2008. Red cell distribution width was generated at admission by the Sysmex XE-2100 automated cell counter from ethylene diamine tetraacetic acid blood samples stored at room temperature until measurement. A χ² test was performed to compare frequencies of categorical variables between different red cell distribution width quartiles, and one-way analysis of variance for continuous variables. The effect of red cell distribution width on severity and functional outcome was investigated in univariate and multivariate robust regression analysis. The level of significance was set at 95%. There were 1504 patients (72±15.76 years, 43.9% female) included in the analysis. Red cell distribution width was significantly associated with NIHSS score (β=0.24, P=0.01) and functional outcome (odds ratio=10.73 for poor outcome, P<0.001) in univariate but not multivariate analysis. Prehospital Rankin score (β=0.19, P<0.001), serum creatinine (β=0.008, P<0.001), hemoglobin (β=-0.009, P<0.001), mean platelet volume (β=0.09, P<0.05), age (β=0.02, P<0.001), low ejection fraction (β=0.66, P<0.001), and antihypertensive treatment (β=0.32, P<0.001) were independent determinants of red cell distribution width. 
Red cell distribution width, assessed during the early phase of acute ischemic stroke, does not predict severity or functional outcome. © 2011 The Authors. International Journal of Stroke © 2011 World Stroke Organization.

  12. Dielectric permeability tensor and linear waves in spin-1/2 quantum kinetics with non-trivial equilibrium spin-distribution functions

    NASA Astrophysics Data System (ADS)

    Andreev, Pavel A.; Kuz'menkov, L. S.

    2017-11-01

A consideration of waves propagating parallel to the external magnetic field is presented. The dielectric permeability tensor is derived from the quantum kinetic equations with non-trivial equilibrium spin-distribution functions in the linear approximation on the amplitude of wave perturbations. It is possible to consider equilibrium spin-distribution functions with a nonzero z-projection proportional to the difference of the Fermi steps of electrons with the chosen spin direction, while the x- and y-projections are equal to zero; this is called the trivial equilibrium spin-distribution function. In the general case, the x- and y-projections of the spin-distribution functions are nonzero, which is called the non-trivial regime. A corresponding equilibrium solution is found in Andreev [Phys. Plasmas 23, 062103 (2016)]. The contribution of the nontrivial part of the spin-distribution function appears in the dielectric permeability tensor in additive form and is found explicitly here. A corresponding modification of the dispersion equation for transverse waves is derived, and the contribution of the nontrivial part of the spin-distribution function to the spectrum of transverse waves is calculated numerically. It is found that the term caused by the nontrivial part of the spin-distribution function can be comparable with the classic terms for relatively small wave vectors and frequencies above the cyclotron frequency. In a majority of regimes, the extra spin-caused term dominates over the spin term found earlier, except in the small-frequency regime, where their contributions to the whistler spectrum are comparable. A decrease of the left-hand circularly polarized wave frequency and an increase of the high-frequency right-hand circularly polarized wave frequency are found; for the whistler, the frequency change with growing wave vector turns from a decrease into an increase. A considerable decrease of the spin wave frequency is found as well. 
It results in an increase of the modulus of the negative group velocity of the spin wave. The dispersion equations found are used to obtain an effective quantum hydrodynamics reproducing these results. This generalization requires the introduction of a corresponding equation of state for the thermal part of the spin current in the spin evolution equation.

  13. VizieR Online Data Catalog: FourStar galaxy evolution survey (ZFOURGE) (Straatman+, 2016)

    NASA Astrophysics Data System (ADS)

    Straatman, C. M. S.; Spitler, L. R.; Quadri, R. F.; Labbe, I.; Glazebrook, K.; Persson, S. E.; Papovich, C.; Tran, K.-V.; Brammer, G. B.; Cowley, M.; Tomczak, A.; Nanayakkara, T.; Alcorn, L.; Allen, R.; Broussard, A.; van Dokkum, P.; Forrest, B.; van Houdt, J.; Kacprzak, G. G.; Kawinwanichakij, L.; Kelson, D. D.; Lee, J.; McCarthy, P. J.; Mehrtens, N.; Monson, A.; Murphy, D.; Rees, G.; Tilvi, V.; Whitaker, K. E.

    2017-03-01

    We present the FourStar galaxy evolution survey (ZFOURGE) photometric catalogs comprising >70000 galaxies, selected from ultradeep Ks-band detection images (25.5-26.5 AB mag, 5σ, total). We use 5 near-IR medium-bandwidth filters (J1, J2, J3, Hs, Hl) as well as broad-band Ks at 1.05-2.16 micron to 25-26 AB at a seeing of ~0.5 arcsec. Each field has ancillary imaging in 26-40 filters at 0.3-8 micron. We derive photometric redshifts, rest-frame U-V and V-J colors, and stellar population properties from SED fitting. The photometric redshifts have uncertainty σz=0.010, 0.009, and 0.011 in CDFS, COSMOS and UDS, respectively, if compared with spectroscopic redshifts. A pair test indicates σz,pairs=0.01-0.02 at 1

  14. Formation of dioxins from incineration of foods found in domestic garbage.

    PubMed

    Katami, Takeo; Yasuhara, Akio; Shibamoto, Takayuki

    2004-02-15

There has been great concern about the large amounts of garbage produced by domestic households in the modern world. One of the major sources of dioxins (PCDDs, PCDFs, and coplanar PCBs) in the environment is the combustion of domestic waste materials. Exhaust gases from an incinerator burning mixtures of 67 food items--including fruits, vegetables, pasta, seafood, meats, and processed and seasoned foods--were analyzed for dioxins. Gases collected at the chimney port (9.15 ng/g) contained less total dioxins than those collected at the chamber port (29.1 ng/g). The levels of Cl1-Cl6-PCDDs and Cl1-Cl5-PCDFs were much lower in the gas collected at the chimney port than in the gas collected at the chamber port, whereas the levels of Cl7-Cl8-PCDDs and Cl6-Cl8-PCDFs were higher at the chimney port than at the chamber port. Total Cl4-Cl8-PCDDs (1.84-3.04 ng/g) comprised over 80% of the total PCDDs formed (2.24-4.00 ng/g). Total PCDFs (16.2-22.6 ng/g) comprised 78-86% of the total dioxins formed (18.9-29.1 ng/g). The PCDFs formed in the greatest amounts were M1CDFs (9.68-10.7 ng/g). Mixtures of commonly consumed food items produced ppb levels of total dioxins in exhaust gases upon combustion, suggesting that incineration of domestic food wastes is one of the sources of dioxins in the environment. A mixture containing seasoned foods, such as mayonnaise spread on bread, produced more dioxins (29.1 ng/g) than a mixture without seasoned foods (18.9 ng/g).

  15. AEGIS-X: Deep Chandra Imaging of the Central Groth Strip

    NASA Astrophysics Data System (ADS)

    Nandra, K.; Laird, E. S.; Aird, J. A.; Salvato, M.; Georgakakis, A.; Barro, G.; Perez-Gonzalez, P. G.; Barmby, P.; Chary, R.-R.; Coil, A.; Cooper, M. C.; Davis, M.; Dickinson, M.; Faber, S. M.; Fazio, G. G.; Guhathakurta, P.; Gwyn, S.; Hsu, L.-T.; Huang, J.-S.; Ivison, R. J.; Koo, D. C.; Newman, J. A.; Rangel, C.; Yamada, T.; Willmer, C.

    2015-09-01

We present the results of deep Chandra imaging of the central region of the Extended Groth Strip, the AEGIS-X Deep (AEGIS-XD) survey. When combined with previous Chandra observations of a wider area of the strip, AEGIS-X Wide (AEGIS-XW), these provide data to a nominal exposure depth of 800 ks in the three central ACIS-I fields, a region of approximately 0.29 deg2. This is currently the third deepest X-ray survey in existence; a factor ∼2-3 shallower than the Chandra Deep Fields (CDFs), but over an area ∼3 times greater than each CDF. We present a catalog of 937 point sources detected in the deep Chandra observations, along with identifications of our X-ray sources from deep ground-based, Spitzer, GALEX, and Hubble Space Telescope imaging. Using a likelihood ratio analysis, we associate multiband counterparts for 929/937 of our X-ray sources, with an estimated 95% reliability, making the identification completeness approximately 94% in a statistical sense. Reliable spectroscopic redshifts for 353 of our X-ray sources are available, predominantly from Keck (DEEP2/3) and MMT Hectospec, so the current spectroscopic completeness is ∼38%. For the remainder of the X-ray sources, we compute photometric redshifts based on multiband photometry in up to 35 bands from the UV to mid-IR. Particular attention is given to the fact that the vast majority of the X-ray sources are active galactic nuclei and require hybrid templates. Our photometric redshifts have a mean accuracy of σ=0.04 and an outlier fraction of approximately 5%, reaching σ=0.03 with less than 4% outliers in the area covered by CANDELS. The X-ray, multiwavelength photometry, and redshift catalogs are made publicly available.

  16. VizieR Online Data Catalog: Improved multi-band photometry from SERVS (Nyland+, 2017)

    NASA Astrophysics Data System (ADS)

    Nyland, K.; Lacy, M.; Sajina, A.; Pforr, J.; Farrah, D.; Wilson, G.; Surace, J.; Haussler, B.; Vaccari, M.; Jarvis, M.

    2017-07-01

    The Spitzer Extragalactic Representative Volume Survey (SERVS) sky footprint includes five well-studied astronomical deep fields with abundant multi-wavelength data spanning an area of ~18deg2 and a co-moving volume of ~0.8Gpc3. The five deep fields included in SERVS are the XMM-LSS field, Lockman Hole (LH), ELAIS-N1 (EN1), ELAIS-S1 (ES1), and Chandra Deep Field South (CDFS). SERVS provides NIR, post-cryogenic imaging in the 3.6 and 4.5um Spitzer/IRAC bands to a depth of ~2uJy. IRAC dual-band source catalogs generated using traditional catalog extraction methods are described in Mauduit+ (2012PASP..124..714M). The Spitzer IRAC data are complemented by ground-based NIR observations from the VISTA Deep Extragalactic Observations (VIDEO; Jarvis+ 2013MNRAS.428.1281J) survey in the south in the Z, Y, J, H, and Ks bands and UKIRT Infrared Deep Sky Survey (UKIDSS; Lawrence+ 2007, see II/319) in the north in the J and K bands. SERVS also provides substantial overlap with infrared data from SWIRE (Lonsdale+ 2003PASP..115..897L) and the Herschel Multitiered Extragalactic Survey (HerMES; Oliver+ 2012, VIII/95). As shown in Figure 1, one square degree of the XMM-LSS field overlaps with ground-based optical data from the Canada-France-Hawaii Telescope Legacy Survey Deep field 1 (CFHTLS-D1). The CFHTLS-D1 region is centered at RAJ2000=02:25:59, DEJ2000=-04:29:40 and includes imaging through the filter set u', g', r', i', and z'. Thus, in combination with the NIR data from SERVS and VIDEO that overlap with the CFHTLS-D1 region, multi-band imaging over a total of 12 bands is available. (2 data files).

  17. A Physical Parameterization of the Evolution of X-ray Binary Emission

    NASA Astrophysics Data System (ADS)

    Gilbertson, Woodrow; Lehmer, Bret; Eufrasio, Rafael

    2018-01-01

The Chandra Deep Field-South (CDF-S) and North (CDF-N) surveys, 7 Ms and 2 Ms deep respectively, contain measurements spanning a large redshift range of z = 0 to 7. These data-rich fields provide a unique window into the cosmic history of X-ray emission from normal galaxies (i.e., not dominated by AGN). Scaling relations between normal-galaxy X-ray luminosity and quantities such as star formation rate (SFR) and stellar mass (M*) have been used to constrain the redshift evolution of the formation rates of low-mass X-ray binaries (LMXB) and high-mass X-ray binaries (HMXB). However, these measurements do not directly reveal the driving forces behind the redshift evolution of X-ray binaries (XRBs). We hypothesize that changes in the mean stellar age and metallicity of the Universe drive the evolution of LMXB and HMXB emission, respectively. We use star-formation histories, derived through fitting broad-band UV-to-far-IR spectra, to estimate the masses of stellar populations in various age bins for each galaxy. We then divide our galaxy samples into bins of metallicity, and use our star-formation history information and measured X-ray luminosities to determine, for each metallicity bin, a best-fit model L_X/M_*(t_age). We show that this physical model provides a more useful parameterization of the evolution of X-ray binary emission, as it can be extrapolated out to high redshifts with more sensible predictions. This meaningful relation can be used to better estimate the emission of XRBs in the early Universe, where XRBs are predicted to play an important role in heating the intergalactic medium.

  18. Systems of frequency distributions for water and environmental engineering

    NASA Astrophysics Data System (ADS)

    Singh, Vijay P.

    2018-09-01

A wide spectrum of frequency distributions is used in hydrologic, hydraulic, environmental, and water resources engineering. These distributions may have different origins, are based on different hypotheses, and belong to different generating systems. A review of the literature suggests that the systems of frequency distributions employed in science and engineering in general, and in environmental and water engineering in particular, have been derived using different approaches, which include (1) differential equations, (2) distribution elasticity, (3) genetic theory, (4) generating functions, (5) transformations, (6) Bessel functions, (7) expansions, and (8) entropy maximization. This paper revisits these systems of distributions and discusses the hypotheses that are used for deriving them. It also proposes, based on empirical evidence, another general system of distributions and derives from it a number of distributions that are used in environmental and water engineering.
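Approach (1), generation via a differential equation, is classically exemplified by the Pearson system; the equation below is a standard textbook result included for orientation, not the paper's own general system:

```latex
% Pearson's differential equation; its solutions generate the
% normal, gamma, beta, Student-t, and other families depending
% on the roots of the quadratic in the denominator:
\frac{1}{f(x)}\frac{df(x)}{dx} = \frac{a - x}{b_0 + b_1 x + b_2 x^2}
% b_1 = b_2 = 0 yields the normal distribution; b_2 = 0 yields
% the gamma family; two distinct real roots yield the beta family.
```

Each of the other listed approaches (entropy maximization, transformations, and so on) similarly generates a whole family from a single hypothesis.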

  19. Multivariate η-μ fading distribution with arbitrary correlation model

    NASA Astrophysics Data System (ADS)

    Ghareeb, Ibrahim; Atiani, Amani

    2018-03-01

An extensive analysis of the multivariate η-μ distribution with arbitrary correlation is presented, in which novel analytical expressions for the multivariate probability density function, cumulative distribution function, and moment generating function (MGF) of arbitrarily correlated and not necessarily identically distributed η-μ power random variables are derived. This paper also provides an exact-form expression for the MGF of the instantaneous signal-to-noise ratio at the combiner output in a diversity reception system with maximal-ratio combining and post-detection equal-gain combining operating in slow, frequency-nonselective, arbitrarily correlated, not necessarily identically distributed η-μ fading channels. The average bit error probability of differentially detected quadrature phase shift keying signals with post-detection diversity reception over arbitrarily correlated η-μ fading channels with not necessarily identical fading parameters is determined using the MGF-based approach. The effects of fading correlation between diversity branches, fading severity parameters, and diversity level are studied.

  20. The Impact of Aerosols on Cloud and Precipitation Processes: Cloud-Resolving Model Simulations

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Li, X.; Khain, A.; Simpson, S.

    2005-01-01

    Cloud microphysics are inevitably affected by the smoke particle (CCN, cloud condensation nuclei) size distributions below the clouds. Therefore, size distributions parameterized as spectral bin microphysics are needed to explicitly study the effect of atmospheric aerosol concentration on cloud development, rainfall production, and rainfall rates for convective clouds. Recently, a detailed spectral-bin microphysical scheme was implemented into the Goddard Cumulus Ensemble (GCE) model. The formulation for the explicit spectral-bin microphysical processes is based on solving stochastic kinetic equations for the size distribution functions of water droplets (i.e., cloud droplets and raindrops) and several types of ice particles [i.e., pristine ice crystals (columnar and plate-like), snow (dendrites and aggregates), graupel and frozen drops/hail]. Each type is described by a special size distribution function containing many categories (i.e., 33 bins). Atmospheric aerosols are also described using number density size-distribution functions.

  1. Comparison of hypertabastic survival model with other unimodal hazard rate functions using a goodness-of-fit test.

    PubMed

    Tahir, M Ramzan; Tran, Quang X; Nikulin, Mikhail S

    2017-05-30

    We studied the problem of testing a hypothesized distribution in survival regression models when the data are right censored and survival times are influenced by covariates. A modified chi-squared type test, known as the Nikulin-Rao-Robson statistic, is applied for the comparison of accelerated failure time models. This statistic is used to test the goodness-of-fit of the hypertabastic survival model and four other unimodal hazard rate functions. The results of a simulation study showed that the hypertabastic distribution can be used as an alternative to the log-logistic and log-normal distributions. In statistical modeling, because of its flexible hazard function shapes, this distribution can also be used as a competitor of the Birnbaum-Saunders and inverse Gaussian distributions. The results for the real data application are shown. Copyright © 2017 John Wiley & Sons, Ltd.

  2. Evaluation of an unsteady flamelet progress variable model for autoignition and flame development in compositionally stratified mixtures

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Saumyadip; Abraham, John

    2012-07-01

    The unsteady flamelet progress variable (UFPV) model has been proposed by Pitsch and Ihme ["An unsteady/flamelet progress variable method for LES of nonpremixed turbulent combustion," AIAA Paper No. 2005-557, 2005] for modeling the averaged/filtered chemistry source terms in Reynolds-averaged simulations and large eddy simulations of non-premixed combustion. In the UFPV model, a look-up table of source terms is generated as a function of mixture fraction Z, scalar dissipation rate χ, and progress variable C by solving the unsteady flamelet equations. The assumption is that the unsteady flamelet represents the evolution of the reacting mixing layer in the non-premixed flame. We assess the accuracy of the model in predicting autoignition and flame development in compositionally stratified n-heptane/air mixtures using direct numerical simulations (DNS). The focus of this work is primarily on assessing the accuracy of the probability density functions (PDFs) employed for obtaining averaged source terms. The performance of commonly employed presumed functions, such as the Dirac delta distribution function, the β distribution function, and the statistically most likely distribution (SMLD) approach, in approximating the shapes of the PDFs of the reactive and conserved scalars is evaluated. For unimodal distributions, it is observed that functions that use two-moment information, e.g., the β distribution function and the SMLD approach with two-moment closure, are able to reasonably approximate the actual PDF. As the distribution becomes multimodal, higher-moment information is required. Differences are observed between the ignition trends obtained from DNS and those predicted by the look-up table, especially for smaller gradients where the flamelet assumption becomes less applicable. The formulation assumes that the shape of the χ(Z) profile can be modeled by an error function which remains unchanged in the presence of heat release. We show that this assumption is not accurate.

  3. Eddington's demon: inferring galaxy mass functions and other distributions from uncertain data

    NASA Astrophysics Data System (ADS)

    Obreschkow, D.; Murray, S. G.; Robotham, A. S. G.; Westmeier, T.

    2018-03-01

    We present a general modified maximum likelihood (MML) method for inferring generative distribution functions from uncertain and biased data. The MML estimator is identical to, but easier and many orders of magnitude faster to compute than the solution of the exact Bayesian hierarchical modelling of all measurement errors. As a key application, this method can accurately recover the mass function (MF) of galaxies, while simultaneously dealing with observational uncertainties (Eddington bias), complex selection functions and unknown cosmic large-scale structure. The MML method is free of binning and natively accounts for small number statistics and non-detections. Its fast implementation in the R-package dftools is equally applicable to other objects, such as haloes, groups, and clusters, as well as observables other than mass. The formalism readily extends to multidimensional distribution functions, e.g. a Choloniewski function for the galaxy mass-angular momentum distribution, also handled by dftools. The code provides uncertainties and covariances for the fitted model parameters and approximate Bayesian evidences. We use numerous mock surveys to illustrate and test the MML method, as well as to emphasize the necessity of accounting for observational uncertainties in MFs of modern galaxy surveys.

  4. flexsurv: A Platform for Parametric Survival Modeling in R

    PubMed Central

    Jackson, Christopher H.

    2018-01-01

    flexsurv is an R package for fully-parametric modeling of survival data. Any parametric time-to-event distribution may be fitted if the user supplies a probability density or hazard function, and ideally also their cumulative versions. Standard survival distributions are built in, including the three and four-parameter generalized gamma and F distributions. Any parameter of any distribution can be modeled as a linear or log-linear function of covariates. The package also includes the spline model of Royston and Parmar (2002), in which both baseline survival and covariate effects can be arbitrarily flexible parametric functions of time. The main model-fitting function, flexsurvreg, uses the familiar syntax of survreg from the standard survival package (Therneau 2016). Censoring or left-truncation are specified in ‘Surv’ objects. The models are fitted by maximizing the full log-likelihood, and estimates and confidence intervals for any function of the model parameters can be printed or plotted. flexsurv also provides functions for fitting and predicting from fully-parametric multi-state models, and connects with the mstate package (de Wreede, Fiocco, and Putter 2011). This article explains the methods and design principles of the package, giving several worked examples of its use. PMID:29593450
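    flexsurv itself is an R package; as a language-neutral sketch of the core fitting idea described above (maximizing the full log-likelihood, with right-censored observations contributing the survival function rather than the density), here is a hedged Python analogue for a Weibull model on synthetic data:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

# Sketch of fully parametric survival fitting under right censoring:
# events contribute log f(t), censored observations contribute log S(t).
# Data are synthetic; this is not flexsurv's implementation.
rng = np.random.default_rng(0)
true_shape, true_scale = 1.5, 10.0
t = weibull_min.rvs(true_shape, scale=true_scale, size=2000, random_state=rng)
c = rng.uniform(0, 25, size=2000)          # censoring times
time = np.minimum(t, c)
event = (t <= c).astype(float)             # 1 = observed event, 0 = censored

def neg_loglik(params):
    shape, scale = np.exp(params)          # optimize on the log scale
    logf = weibull_min.logpdf(time, shape, scale=scale)
    logS = weibull_min.logsf(time, shape, scale=scale)
    return -np.sum(event * logf + (1 - event) * logS)

res = minimize(neg_loglik, x0=np.log([1.0, 5.0]), method="Nelder-Mead")
shape_hat, scale_hat = np.exp(res.x)
print(shape_hat, scale_hat)   # close to the true parameters (1.5, 10.0)
```

Covariate effects, as in flexsurvreg, would enter by making the log-scale parameter a linear function of covariates inside `neg_loglik`.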

  5. The tensor distribution function.

    PubMed

    Leow, A D; Zhu, S; Zhan, L; McMahon, K; de Zubicaray, G I; Meredith, M; Wright, M J; Toga, A W; Thompson, P M

    2009-01-01

    Diffusion-weighted magnetic resonance imaging is a powerful tool that can be employed to study white matter microstructure by examining the 3D displacement profile of water molecules in brain tissue. By applying diffusion-sensitized gradients along a minimum of six directions, second-order tensors (represented by three-by-three positive definite matrices) can be computed to model dominant diffusion processes. However, conventional diffusion tensor imaging (DTI) is not sufficient to resolve more complicated white matter configurations, e.g., crossing fiber tracts. Recently, a number of high-angular-resolution schemes with more than six gradient directions have been employed to address this issue. In this article, we introduce the tensor distribution function (TDF), a probability function defined on the space of symmetric positive definite matrices. Using the calculus of variations, we solve for the TDF that optimally describes the observed data. Here, fiber crossing is modeled as an ensemble of Gaussian diffusion processes with weights specified by the TDF. Once this optimal TDF is determined, the orientation distribution function (ODF) can easily be computed by analytic integration of the resulting displacement probability function. Moreover, a tensor orientation distribution function (TOD) may also be derived from the TDF, allowing for the estimation of principal fiber directions and their corresponding eigenvalues.

  6. Probability and Statistics in Sensor Performance Modeling

    DTIC Science & Technology

    2010-12-01

    language software program is called Environmental Awareness for Sensor and Emitter Employment. Some important numerical issues in the implementation...3 Statistical analysis for measuring sensor performance...complementary cumulative distribution function cdf cumulative distribution function DST decision-support tool EASEE Environmental Awareness of

  7. Lattice QCD exploration of parton pseudo-distribution functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Orginos, Kostas; Radyushkin, Anatoly; Karpie, Joseph

    Here, we demonstrate a new method of extracting parton distributions from lattice calculations. The starting idea is to treat the generic equal-time matrix element $${\cal M}(Pz_3, z_3^2)$$ as a function of the Ioffe time $$\

  8. Lattice QCD exploration of parton pseudo-distribution functions

    DOE PAGES

    Orginos, Kostas; Radyushkin, Anatoly; Karpie, Joseph; ...

    2017-11-08

    Here, we demonstrate a new method of extracting parton distributions from lattice calculations. The starting idea is to treat the generic equal-time matrix element $${\cal M}(Pz_3, z_3^2)$$ as a function of the Ioffe time $$\

  9. Characterizing short-term stability for Boolean networks over any distribution of transfer functions

    DOE PAGES

    Seshadhri, C.; Smith, Andrew M.; Vorobeychik, Yevgeniy; ...

    2016-07-05

    Here we present a characterization of the short-term stability of random Boolean networks under arbitrary distributions of transfer functions. Given any distribution of transfer functions for a random Boolean network, we present a formula that decides whether short-term chaos (damage spreading) will happen. We provide a formal proof for this formula and empirically show that its predictions are accurate. Previous work applies only to special cases of balanced families; it had been observed that those characterizations fail for unbalanced families, yet such families are widespread in real biological networks.
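    As a hedged illustration of the damage-spreading notion studied here (this sketch uses the classical homogeneous-bias Kauffman setup, where the annealed approximation predicts chaos when 2Kp(1-p) > 1, not the paper's general formula for arbitrary transfer-function distributions):

```python
import numpy as np

# Random Boolean network: N nodes, each with K random inputs and a random
# truth table with bias p (probability of output 1). We flip a few bits in
# one copy of the state and watch the Hamming distance ("damage") evolve.
rng = np.random.default_rng(1)
N, K, p = 2000, 4, 0.5                    # 2*4*0.5*0.5 = 2 > 1: chaotic regime

inputs = rng.integers(0, N, size=(N, K))               # wiring diagram
tables = (rng.random((N, 2**K)) < p).astype(np.int8)   # transfer functions

def step(state):
    # Index each node's truth table by the bit pattern of its K inputs.
    idx = np.zeros(N, dtype=np.int64)
    for k in range(K):
        idx = (idx << 1) | state[inputs[:, k]]
    return tables[np.arange(N), idx]

a = rng.integers(0, 2, size=N).astype(np.int8)
b = a.copy()
b[:10] ^= 1                               # initial damage: 10 flipped bits
for _ in range(10):
    a, b = step(a), step(b)
damage = int(np.sum(a != b))
print(damage)                             # damage spreads far beyond 10 nodes
```

In the ordered regime (e.g. K = 1 with the same bias), the same experiment typically shows the damage shrinking instead of spreading.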

  10. Calibration of high-dynamic-range, finite-resolution x-ray pulse-height spectrometers for extracting electron energy distribution data from the PFRC-2 device

    NASA Astrophysics Data System (ADS)

    Swanson, C.; Jandovitz, P.; Cohen, S. A.

    2017-10-01

    Knowledge of the full x-ray energy distribution function (XEDF) emitted from a plasma over a large dynamic range of energies can yield valuable insights into the electron energy distribution function (EEDF) of that plasma and the dynamic processes that create it. X-ray pulse-height detectors such as Amptek's X-123 Fast SDD with a silicon nitride window can detect x-rays in the range of 200 eV to 100s of keV. However, extracting the EEDF from this measurement requires precise knowledge of the detector's response function. This response function, including the energy scale calibration, the window transmission function, and the resolution function, can be measured directly. We describe measurements of this function using x-rays from a mono-energetic electron beam in a purpose-built gas-target x-ray tube. Large-Z effects such as line radiation, nuclear charge screening, and polarizational Bremsstrahlung are discussed.

  11. Quantum Hash function and its application to privacy amplification in quantum key distribution, pseudo-random number generation and image encryption

    NASA Astrophysics Data System (ADS)

    Yang, Yu-Guang; Xu, Peng; Yang, Rui; Zhou, Yi-Hua; Shi, Wei-Min

    2016-01-01

    Quantum information and quantum computation have achieved huge success in recent years. In this paper, we investigate the capability of the quantum Hash function, which can be constructed by subtly modifying quantum walks, a famous quantum computation model. It is found that the quantum Hash function can act as a hash function for the privacy amplification process of quantum key distribution systems, with higher security. As a byproduct, the quantum Hash function can also be used for pseudo-random number generation due to its inherent chaotic dynamics. We further discuss the application of the quantum Hash function to image encryption and propose a novel image encryption algorithm. Numerical simulations and performance comparisons show that the quantum Hash function is suitable for privacy amplification in quantum key distribution, pseudo-random number generation and image encryption in terms of various hash tests and randomness tests. It extends the scope of application of quantum computation and quantum information.

  12. Quantum Hash function and its application to privacy amplification in quantum key distribution, pseudo-random number generation and image encryption

    PubMed Central

    Yang, Yu-Guang; Xu, Peng; Yang, Rui; Zhou, Yi-Hua; Shi, Wei-Min

    2016-01-01

    Quantum information and quantum computation have achieved huge success in recent years. In this paper, we investigate the capability of the quantum Hash function, which can be constructed by subtly modifying quantum walks, a famous quantum computation model. It is found that the quantum Hash function can act as a hash function for the privacy amplification process of quantum key distribution systems, with higher security. As a byproduct, the quantum Hash function can also be used for pseudo-random number generation due to its inherent chaotic dynamics. We further discuss the application of the quantum Hash function to image encryption and propose a novel image encryption algorithm. Numerical simulations and performance comparisons show that the quantum Hash function is suitable for privacy amplification in quantum key distribution, pseudo-random number generation and image encryption in terms of various hash tests and randomness tests. It extends the scope of application of quantum computation and quantum information. PMID:26823196

  13. Quantum Hash function and its application to privacy amplification in quantum key distribution, pseudo-random number generation and image encryption.

    PubMed

    Yang, Yu-Guang; Xu, Peng; Yang, Rui; Zhou, Yi-Hua; Shi, Wei-Min

    2016-01-29

    Quantum information and quantum computation have achieved huge success in recent years. In this paper, we investigate the capability of the quantum Hash function, which can be constructed by subtly modifying quantum walks, a famous quantum computation model. It is found that the quantum Hash function can act as a hash function for the privacy amplification process of quantum key distribution systems, with higher security. As a byproduct, the quantum Hash function can also be used for pseudo-random number generation due to its inherent chaotic dynamics. We further discuss the application of the quantum Hash function to image encryption and propose a novel image encryption algorithm. Numerical simulations and performance comparisons show that the quantum Hash function is suitable for privacy amplification in quantum key distribution, pseudo-random number generation and image encryption in terms of various hash tests and randomness tests. It extends the scope of application of quantum computation and quantum information.

  14. Characteristics of Ion Distribution Functions in Dipolarizing Flux Bundles: THEMIS Event Studies

    NASA Astrophysics Data System (ADS)

    Runov, A.; Artemyev, A.; Birn, J.; Pritchett, P. L.; Zhou, X.

    2016-12-01

    Taking advantage of multi-point observations from a repeating configuration of the Time History of Events and Macroscale Interactions during Substorms (THEMIS) fleet, with probe separations of 1 to 2 Earth radii (RE) along X, Y, and Z in the geocentric solar magnetospheric (GSM) system, we study ion distribution functions observed by the probes during three transient dipolarization events. Comparing observations by the multiple probes, we characterize changes in the ion distribution functions with respect to geocentric distance (X), cross-tail probe separation (Y), and levels of |Bx|, which characterize the distance from the neutral sheet. We examined 2-D and 1-D cuts of the 3-D velocity distribution functions in the (Vb, Vb×v) plane. The results indicate that the velocity distribution functions observed inside dipolarizing flux bundles (DFBs) close to the magnetic equator are often perpendicularly anisotropic for velocities Vth ≤ v ≤ 2Vth, where Vth is the ion thermal velocity. Ions of higher energies (v > 2Vth) are isotropic. Hence, the interaction of DFBs with ambient ions may result in perpendicular anisotropy of the injected energetic ions, which is an important factor for the excitation of plasma waves and instabilities and for further particle acceleration in the inner magnetosphere. We also compare the observations with the results of test-particle and PIC simulations.

  15. Electron energy distribution function in the divertor region of the COMPASS tokamak during neutral beam injection heating

    NASA Astrophysics Data System (ADS)

    Hasan, E.; Dimitrova, M.; Havlicek, J.; Mitošinková, K.; Stöckel, J.; Varju, J.; Popov, Tsv K.; Komm, M.; Dejarnac, R.; Hacek, P.; Panek, R.; the COMPASS Team

    2018-02-01

    This paper presents the results from swept probe measurements in the divertor region of the COMPASS tokamak in D-shaped, L-mode discharges, with toroidal magnetic field BT = 1.15 T, plasma current Ip = 180 kA and line-average electron densities varying from 2 to 8×1019 m-3. Using neutral beam injection heating, the electron energy distribution function is studied before and during the application of the beam. The current-voltage characteristics data are processed using the first-derivative probe technique. This technique allows one to evaluate the plasma potential and the real electron energy distribution function (respectively, the electron temperatures and densities). At the low average electron density of 2×1019 m-3, the electron energy distribution function is bi-Maxwellian with a low-energy electron population with temperatures 4-6 eV and a high-energy electron group 12-25 eV. As the line-average electron density is increased, the electron temperatures decrease. At line-average electron densities above 7×1019 m-3, the electron energy distribution function is found to be Maxwellian with a temperature of 6-8.5 eV. The effect of the neutral beam injection heating power in the divertor region is also studied.

  16. Dependence of Microlensing on Source Size and Lens Mass

    NASA Astrophysics Data System (ADS)

    Congdon, A. B.; Keeton, C. R.

    2007-11-01

    In gravitationally lensed quasars, the magnification of an image depends on the configuration of stars in the lensing galaxy. We study the statistics of the magnification distribution for random star fields. The width of the distribution characterizes the amount by which the observed magnification is likely to differ from models in which the mass is smoothly distributed. We use numerical simulations to explore how the width of the magnification distribution depends on the mass function of stars, and on the size of the source quasar. We then propose a semi-analytic model to describe the distribution width for different source sizes and stellar mass functions.

  17. Wigner functions for evanescent waves.

    PubMed

    Petruccelli, Jonathan C; Tian, Lei; Oh, Se Baek; Barbastathis, George

    2012-09-01

    We propose phase space distributions, based on an extension of the Wigner distribution function, to describe fields of any state of coherence that contain evanescent components emitted into a half-space. The evanescent components of the field are described in an optical phase space of spatial position and complex-valued angle. Behavior of these distributions upon propagation is also considered, where the rapid decay of the evanescent components is associated with the exponential decay of the associated phase space distributions. To demonstrate the structure and behavior of these distributions, we consider the fields generated from total internal reflection of a Gaussian Schell-model beam at a planar interface.

  18. Models of multidimensional discrete distribution of probabilities of random variables in information systems

    NASA Astrophysics Data System (ADS)

    Gromov, Yu Yu; Minin, Yu V.; Ivanova, O. G.; Morozova, O. N.

    2018-03-01

    Multidimensional discrete probability distributions of independent random variables are obtained; their one-dimensional counterparts are widely used in probability theory. Generating functions of these multidimensional distributions are also obtained.

  19. The best thermoelectric.

    PubMed Central

    Mahan, G D; Sofo, J O

    1996-01-01

    What electronic structure provides the largest figure of merit for thermoelectric materials? To answer that question, we write the electrical conductivity, thermopower, and thermal conductivity as integrals of a single function, the transport distribution. Then we derive the mathematical function for the transport distribution, which gives the largest figure of merit. A delta-shaped transport distribution is found to maximize the thermoelectric properties. This result indicates that a narrow distribution of the energy of the electrons participating in the transport process is needed for maximum thermoelectric efficiency. Some possible realizations of this idea are discussed. PMID:11607692

  20. Faà di Bruno's formula and the distributions of random partitions in population genetics and physics.

    PubMed

    Hoppe, Fred M

    2008-06-01

    We show that the formula of Faà di Bruno for the derivative of a composite function gives, in special cases, the sampling distributions in population genetics that are due to Ewens and to Pitman. The composite function is the same in each case. Other sampling distributions also arise in this way, such as those arising from Dirichlet, multivariate hypergeometric, and multinomial models, special cases of which correspond to Bose-Einstein, Fermi-Dirac, and Maxwell-Boltzmann distributions in physics. Connections are made to compound sampling models.

  1. Fuzzy-Neural Controller in Service Requests Distribution Broker for SOA-Based Systems

    NASA Astrophysics Data System (ADS)

    Fras, Mariusz; Zatwarnicka, Anna; Zatwarnicki, Krzysztof

    The evolution of software architectures has led to the rising importance of the Service Oriented Architecture (SOA) concept. This architectural paradigm supports building flexible distributed service systems. In this paper, the architecture of a service request distribution broker designed for use in SOA-based systems is proposed. The broker is built on the idea of fuzzy control. Functional and non-functional request requirements, in conjunction with monitoring of execution and communication links, are used to distribute requests. Decisions are made using a fuzzy-neural network.

  2. AN EMPIRICAL FORMULA FOR THE DISTRIBUTION FUNCTION OF A THIN EXPONENTIAL DISC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Sanjib; Bland-Hawthorn, Joss

    2013-08-20

    An empirical formula for a Shu distribution function that reproduces a thin disc with exponential surface density to good accuracy is presented. The formula has two free parameters that specify the functional form of the velocity dispersion. Conventionally, this requires the use of an iterative algorithm to produce the correct solution, which is computationally taxing for applications like Markov Chain Monte Carlo model fitting. The formula has been shown to work for flat, rising, and falling rotation curves. Application of this methodology to one of the Dehnen distribution functions is also shown. Finally, an extension of this formula to reproduce velocity dispersion profiles that are an exponential function of radius is also presented. Our empirical formula should greatly aid the efficient comparison of disc models with large stellar surveys or N-body simulations.

  3. On the distribution of a product of N Gaussian random variables

    NASA Astrophysics Data System (ADS)

    Stojanac, Željka; Suess, Daniel; Kliesch, Martin

    2017-08-01

    The product of Gaussian random variables appears naturally in many applications in probability theory and statistics. It has been known that the distribution of a product of N such variables can be expressed in terms of a Meijer G-function. Here, we compute a similar representation for the corresponding cumulative distribution function (CDF) and provide a power-log series expansion of the CDF based on the theory of the more general Fox H-functions. Numerical computations show that for small values of the argument the CDF of products of Gaussians is well approximated by the lowest orders of this expansion. Analogous results are also shown for the absolute value as well as the square of such products of N Gaussian random variables. For the latter two settings, we also compute the moment generating functions in terms of Meijer G-functions.
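    A quick Monte Carlo cross-check of the classical N = 2 special case (an illustration, not the paper's Fox H-function expansion): the product of two independent standard normals has density K0(|x|)/π, so the CDF obtained by quadrature should match the empirical CDF of simulated products.

```python
import numpy as np
from scipy.special import k0
from scipy.integrate import quad

# Simulate products of two independent standard normals.
rng = np.random.default_rng(0)
prod = rng.standard_normal(200_000) * rng.standard_normal(200_000)

def cdf_exact(x):
    # The density K0(|t|)/pi is symmetric, so
    # P(X <= x) = 1/2 + sign(x) * integral_0^|x| K0(t)/pi dt.
    val, _ = quad(lambda t: k0(t) / np.pi, 0.0, abs(x))
    return 0.5 + np.sign(x) * val

for x in (-1.0, 0.5, 2.0):
    print(x, np.mean(prod <= x), cdf_exact(x))   # empirical vs. exact CDF
```

The logarithmic singularity of K0 at the origin is integrable, so adaptive quadrature handles the CDF integral without special treatment.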

  4. Probability distribution for the Gaussian curvature of the zero level surface of a random function

    NASA Astrophysics Data System (ADS)

    Hannay, J. H.

    2018-04-01

    A rather natural construction for a smooth random surface in space is the level surface of value zero, or ‘nodal’ surface f(x,y,z)  =  0, of a (real) random function f; the interface between positive and negative regions of the function. A physically significant local attribute at a point of a curved surface is its Gaussian curvature (the product of its principal curvatures) because, when integrated over the surface it gives the Euler characteristic. Here the probability distribution for the Gaussian curvature at a random point on the nodal surface f  =  0 is calculated for a statistically homogeneous (‘stationary’) and isotropic zero mean Gaussian random function f. Capitalizing on the isotropy, a ‘fixer’ device for axes supplies the probability distribution directly as a multiple integral. Its evaluation yields an explicit algebraic function with a simple average. Indeed, this average Gaussian curvature has long been known. For a non-zero level surface instead of the nodal one, the probability distribution is not fully tractable, but is supplied as an integral expression.

  5. Force Density Function Relationships in 2-D Granular Media

    NASA Technical Reports Server (NTRS)

    Youngquist, Robert C.; Metzger, Philip T.; Kilts, Kelly N.

    2004-01-01

    An integral transform relationship is developed to convert between two important probability density functions (distributions) used in the study of contact forces in granular physics. Developing this transform has now made it possible to compare and relate various theoretical approaches with one another and with the experimental data, despite the fact that one may predict the Cartesian probability density and another the force magnitude probability density. The transforms also identify which functional forms are relevant to describe the probability density observed in nature, and so the modified Bessel function of the second kind has been identified as the relevant form for the Cartesian probability density corresponding to exponential forms in the force magnitude distribution. Furthermore, it is shown that this transform pair supplies a sufficient mathematical framework to describe the evolution of the force magnitude distribution under shearing. Apart from the choice of several coefficients, whose evolving values must be explained by the physics, this framework successfully reproduces the features of the distribution that are taken to be an indicator of jamming and unjamming in a granular packing. Key words. Granular Physics, Probability Density Functions, Fourier Transforms
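    The identification of the modified Bessel function of the second kind as the Cartesian density matching an exponential force-magnitude distribution can be checked numerically; a sketch (assuming unit mean force and uniformly distributed force directions in 2-D, which are illustrative choices, not the paper's data):

```python
import numpy as np
from scipy.special import k0

# If force magnitudes F are exponential (unit mean) and directions uniform
# in 2-D, the Cartesian component f_x = F*cos(theta) has density K0(|f_x|)/pi,
# a modified Bessel function of the second kind.
rng = np.random.default_rng(2)
n = 500_000
F = rng.exponential(1.0, n)
theta = rng.uniform(0, 2 * np.pi, n)
fx = F * np.cos(theta)

edges = np.linspace(-4, 4, 81)
hist, _ = np.histogram(fx, bins=edges, density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
predicted = k0(np.abs(centers)) / np.pi
print(np.max(np.abs(hist - predicted)))   # small, apart from the log peak at 0
```

The only visible discrepancy sits in the two central bins, where K0 has an (integrable) logarithmic peak that a finite-width histogram necessarily smooths.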

  6. Networked buffering: a basic mechanism for distributed robustness in complex adaptive systems.

    PubMed

    Whitacre, James M; Bender, Axel

    2010-06-15

    A generic mechanism--networked buffering--is proposed for the generation of robust traits in complex systems. It requires two basic conditions to be satisfied: 1) agents are versatile enough to perform more than one single functional role within a system and 2) agents are degenerate, i.e. there exists partial overlap in the functional capabilities of agents. Given these prerequisites, degenerate systems can readily produce a distributed systemic response to local perturbations. Reciprocally, excess resources related to a single function can indirectly support multiple unrelated functions within a degenerate system. In models of genome:proteome mappings for which localized decision-making and modularity of genetic functions are assumed, we verify that such distributed compensatory effects cause enhanced robustness of system traits. The conditions needed for networked buffering to occur are neither demanding nor rare, supporting the conjecture that degeneracy may fundamentally underpin distributed robustness within several biotic and abiotic systems. For instance, networked buffering offers new insights into systems engineering and planning activities that occur under high uncertainty. It may also help explain recent developments in understanding the origins of resilience within complex ecosystems.

  7. Distribution of functional traits in subtropical trees across environmental and forest use gradients

    NASA Astrophysics Data System (ADS)

    Blundo, Cecilia; Malizia, Lucio R.; González-Espinosa, Mario

    2015-11-01

    The relationship between functional traits and environmental factors contributes to understanding community structure and predicting which species will be able to elude environmental filters in different habitats. We selected 10 functional traits related to morphology, demography and regeneration niche in 54 subtropical premontane tree species to describe their main axes of functional differentiation. We derived species traits, environmental variables and species abundance data from 20 1-ha permanent plots established in a seasonal subtropical premontane forest in northwestern Argentina. We analyzed the relationship between species functional traits and environmental factors through RLQ and fourth-corner analyses. We found an axis of structural differentiation that segregates understory from canopy species, and an axis of functional differentiation that segregates species that maximize resource acquisition from those that promote resource conservation. Environmental and forest use gradients operate hierarchically over subtropical premontane tree species, influencing the distribution of demographic and morphological traits. The interaction between climatic and topographic factors influences the distribution of species functional traits at the regional scale. In addition, the history of forest use seems to operate at the landscape scale and explains the distribution of species traits, reflecting a trade-off between resource acquisition and resource conservation strategies in secondary forests across different successional stages. Our results support the idea that functional traits may be used to analyze community structure and dynamics through niche differentiation and environmental filtering processes.

  8. The function of the earth observing system - Data information system Distributed Active Archive Centers

    NASA Technical Reports Server (NTRS)

    Lapenta, C. C.

    1992-01-01

The functionality of the Distributed Active Archive Centers (DAACs), which are significant elements of the Earth Observing System Data and Information System (EOSDIS), is discussed. Each DAAC encompasses the information management system, the data archival and distribution system, and the product generation system. The EOSDIS DAACs are expected to improve access to the earth science data sets needed for global change research.

  9. MaxEnt, second variation, and generalized statistics

    NASA Astrophysics Data System (ADS)

    Plastino, A.; Rocca, M. C.

    2015-10-01

There are two kinds of Tsallis probability distributions: heavy-tailed ones and compact-support distributions. We show here, by appeal to tools of functional analysis, that for Hamiltonians bounded from below, the second-variation analysis of the entropic functional guarantees that the heavy-tailed q-distribution constitutes a maximum of Tsallis' entropy. In the compact-support instance, on the other hand, a case-by-case analysis is necessary in order to tackle the issue.
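The distinction between the two Tsallis families can be made concrete with the q-exponential function. A minimal numerical sketch, with illustrative values of beta and q that are not tied to any particular Hamiltonian:

```python
import numpy as np

def q_exponential(x, q):
    """Tsallis q-exponential: [1 + (1-q)x]^(1/(1-q)), zero outside its support."""
    x = np.asarray(x, dtype=float)
    if q == 1.0:
        return np.exp(x)  # ordinary Boltzmann limit
    base = 1.0 + (1.0 - q) * x
    out = np.zeros_like(base)
    mask = base > 0.0
    out[mask] = base[mask] ** (1.0 / (1.0 - q))
    return out

beta = 1.0
energies = np.linspace(0.0, 10.0, 1001)

# q > 1: heavy-tailed distribution, positive at every energy
p_heavy = q_exponential(-beta * energies, 1.5)

# q < 1: compact support, vanishing for energies above 1 / (beta * (1 - q))
p_compact = q_exponential(-beta * energies, 0.5)

print(p_heavy[-1])    # still positive far in the tail
print(p_compact[-1])  # exactly zero beyond the cutoff energy 2.0
```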

  10. Probability Density Functions of Observed Rainfall in Montana

    NASA Technical Reports Server (NTRS)

    Larsen, Scott D.; Johnson, L. Ronald; Smith, Paul L.

    1995-01-01

The question of whether a rain rate probability density function (PDF) can vary uniformly between precipitation events is examined. Advances in technology make image analysis on large samples of radar echoes possible. The data provided by such an analysis readily allow development of distributions of radar reflectivity factors (and, by extension, rain rates). Finding a PDF becomes a matter of finding a function that describes the curve approximating the resulting distributions. Ideally, one PDF would exist for all cases, or many PDFs would share the same functional form with only systematic variations in parameters (such as size or shape). Satisfying either of these cases would validate the theoretical basis of the Area Time Integral (ATI). Using the method of moments and Elderton's curve selection criteria, the Pearson Type 1 equation was identified as a potential fit for 89 percent of the observed distributions. Further analysis indicates that the Type 1 curve does approximate the shape of the distributions but quantitatively does not produce a great fit.
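As a sketch of the curve-selection step, the code below computes Elderton's kappa criterion from sample moments (kappa < 0 selects the Pearson Type 1, i.e. beta, family) and performs a simple method-of-moments beta fit. The synthetic sample and all parameter values are assumptions for illustration, not the Montana radar data:

```python
import numpy as np
from scipy import stats

def pearson_criterion(x):
    """Elderton's kappa criterion; kappa < 0 selects Pearson Type 1 (a beta form)."""
    b1 = stats.skew(x) ** 2                # Pearson's beta_1
    b2 = stats.kurtosis(x, fisher=False)   # beta_2 (non-excess kurtosis)
    return b1 * (b2 + 3.0) ** 2 / (4.0 * (4.0 * b2 - 3.0 * b1)
                                   * (2.0 * b2 - 3.0 * b1 - 6.0))

def beta_moment_fit(x):
    """Method-of-moments beta shape parameters after rescaling the sample to (0, 1)."""
    lo, hi = x.min(), x.max()
    u = (x - lo) / (hi - lo)
    m, v = u.mean(), u.var()
    common = m * (1.0 - m) / v - 1.0
    return m * common, (1.0 - m) * common  # shape parameters a, b

rng = np.random.default_rng(0)
sample = rng.beta(2.0, 5.0, size=5000)     # synthetic right-skewed "rain rate" data
print(pearson_criterion(sample))           # negative, i.e. in the Type 1 region
print(beta_moment_fit(sample))
```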

  11. Confined active Brownian particles: theoretical description of propulsion-induced accumulation

    NASA Astrophysics Data System (ADS)

    Das, Shibananda; Gompper, Gerhard; Winkler, Roland G.

    2018-01-01

The stationary-state distribution function of confined active Brownian particles (ABPs) is analyzed by computer simulations and analytical calculations. We consider both a radial harmonic and an anharmonic confinement potential. In the simulations, the ABP is propelled with a prescribed velocity along a body-fixed direction, which changes in a diffusive manner. For the analytical approach, the Cartesian components of the propulsion velocity are assumed to change independently; this is the active Ornstein-Uhlenbeck particle (AOUP). The two descriptions result in very different velocity distribution functions. The analytical solution of the Fokker-Planck equation for an AOUP in a harmonic potential is presented, and a conditional distribution function is provided for the radial particle distribution at a given magnitude of the propulsion velocity. This conditional probability distribution facilitates the description of the coupling of the spatial coordinate and propulsion, which yields activity-induced accumulation of particles. For the anharmonic potential, a probability distribution function is derived within the unified colored noise approximation. The comparison of the simulation results with theoretical predictions yields good agreement for large rotational diffusion coefficients, e.g. due to tumbling, even for large propulsion velocities (Péclet numbers). However, we find significant deviations already for moderate Péclet numbers when the rotational diffusion coefficient is on the order of the thermal one.
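The simulation side of such a comparison can be sketched with a few lines of Euler-Maruyama integration for an AOUP in a two-dimensional harmonic trap. All parameter values below are illustrative, not those of the study:

```python
import numpy as np

# Minimal Euler-Maruyama sketch of an active Ornstein-Uhlenbeck particle (AOUP)
# in a 2D harmonic trap U(r) = 0.5 * k * r^2 (athermal, overdamped).
rng = np.random.default_rng(1)
k, gamma = 1.0, 1.0        # trap stiffness, friction coefficient
tau, v0 = 1.0, 2.0         # persistence time, propulsion speed scale
dt, n_steps = 1e-3, 200_000

x = np.zeros(2)            # position
v = np.zeros(2)            # propulsion velocity: two independent OU components
radii = np.empty(n_steps)

for i in range(n_steps):
    # overdamped position update: trap force plus propulsion
    x += (-(k / gamma) * x + v) * dt
    # each propulsion component relaxes on time tau with OU noise
    # (stationary variance v0^2 per component)
    v += (-v / tau) * dt + v0 * np.sqrt(2.0 * dt / tau) * rng.standard_normal(2)
    radii[i] = np.linalg.norm(x)

# sample mean radial distance over the second half of the run (stationary regime);
# for these linear dynamics the analytical value is sqrt(pi/2) * sigma with
# sigma^2 = v0^2 * tau / ((k/gamma) * (1 + (k/gamma) * tau)) per component
print(radii[n_steps // 2:].mean())
```

For a harmonic trap the AOUP stationary distribution stays Gaussian; the activity-induced accumulation discussed in the abstract shows up in the conditional distribution at a fixed propulsion speed and, more strongly, for anharmonic confinement.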

  12. GridPV Toolbox

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Broderick, Robert; Quiroz, Jimmy; Grijalva, Santiago

    2014-07-15

A MATLAB toolbox for simulating the impact of solar energy on the distribution grid. The majority of the functions are useful for interfacing OpenDSS and MATLAB, and they are of generic use for commanding OpenDSS from MATLAB and retrieving information from simulations. A set of functions is also included for modeling PV plant output and setting up the PV plant in the OpenDSS simulation. The toolbox contains functions for plotting the OpenDSS distribution feeder on satellite images with GPS coordinates. Finally, example simulation functions are included to show potential uses of the toolbox.

  13. ProbOnto: ontology and knowledge base of probability distributions.

    PubMed

    Swat, Maciej J; Grenon, Pierre; Wimalaratne, Sarala

    2016-09-01

Probability distributions play a central role in mathematical and statistical modelling. The encoding, annotation and exchange of such models could be greatly simplified by a resource providing a common reference for the definition of probability distributions. Although some resources exist, there is no suitably detailed ontology, nor any database allowing programmatic access. ProbOnto is an ontology-based knowledge base of probability distributions, featuring more than 80 uni- and multivariate distributions with their defining functions, characteristics, relationships and re-parameterization formulas. It can be used for model annotation and facilitates the encoding of distribution-based models, related functions and quantities. Availability: http://probonto.org. Contact: mjswat@ebi.ac.uk. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  14. Electron Distribution Functions in the Diffusion Region of Asymmetric Magnetic Reconnection

    NASA Technical Reports Server (NTRS)

    Bessho, N.; Chen, L.-J.; Hesse, M.

    2016-01-01

We study electron distribution functions in the diffusion region of antiparallel asymmetric reconnection by means of particle-in-cell simulations and analytical theory. At the electron stagnation point, the electron distribution comprises a crescent-shaped population and a core component. The crescent-shaped distribution is due to electrons coming from the magnetosheath toward the stagnation point, accelerated mainly by the electric field normal to the current sheet. Only a part of the magnetosheath electrons can reach the stagnation point and form the crescent-shaped distribution, which has a boundary given by a parabolic curve. The penetration length of magnetosheath electrons into the magnetosphere is derived. We expect that satellite observations can detect crescent-shaped electron distributions during magnetopause reconnection.

  15. An understanding of human dynamics in urban subway traffic from the Maximum Entropy Principle

    NASA Astrophysics Data System (ADS)

    Yong, Nuo; Ni, Shunjiang; Shen, Shifei; Ji, Xuewei

    2016-08-01

We studied the distribution of entry time intervals in Beijing subway traffic by analyzing smart card transaction data, and then deduced the probability distribution function of the entry time interval based on the Maximum Entropy Principle. Both theoretical derivation and data statistics indicated that the entry time interval obeys a power-law distribution with an exponential cutoff. In addition, we pointed out the constraint conditions for this distribution form and discussed how the constraints affect the distribution function. This suggests that for bursts and heavy tails in human dynamics, when the fitted power exponent is less than 1.0, the distribution cannot be a pure power law but must carry an exponential cutoff, a feature that may have been overlooked in previous studies.
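When the exponent alpha is below 1.0, the form t^(-alpha) * exp(-t/t0) is exactly a gamma density with shape 1 - alpha, which makes the claim easy to sketch numerically. The data here are synthetic with illustrative parameters, not the Beijing smart card records:

```python
import numpy as np
from scipy import stats

# For alpha < 1, t^(-alpha) * exp(-t/t0) is a gamma density with shape 1 - alpha,
# the MaxEnt solution under fixed <t> and <ln t>. Values below are illustrative.
alpha_true, t0 = 0.7, 120.0           # power exponent and cutoff scale (seconds)
rng = np.random.default_rng(42)
intervals = rng.gamma(shape=1.0 - alpha_true, scale=t0, size=20_000)

# a maximum-likelihood gamma fit recovers the exponent and the cutoff scale
shape, loc, scale = stats.gamma.fit(intervals, floc=0.0)
print(1.0 - shape, scale)   # estimated (alpha, t0), close to (0.7, 120)
```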

  16. Performance of mixed RF/FSO systems in exponentiated Weibull distributed channels

    NASA Astrophysics Data System (ADS)

    Zhao, Jing; Zhao, Shang-Hong; Zhao, Wei-Hu; Liu, Yun; Li, Xuan

    2017-12-01

This paper presents the performance of an asymmetric mixed radio frequency (RF)/free-space optical (FSO) system with an amplify-and-forward relaying scheme. The RF link undergoes Nakagami-m fading, and the exponentiated Weibull distribution is adopted for the FSO component. Mathematical formulas for the cumulative distribution function (CDF), probability density function (PDF) and moment generating function (MGF) of the equivalent signal-to-noise ratio (SNR) are derived. From these end-to-end statistics, new analytical expressions for the outage probability are obtained. For various modulation techniques, we derive the average bit error rate (BER) in terms of the Meijer G-function. Evaluations and simulations of the system performance are provided, and the aperture averaging effect is discussed as well.
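The exponentiated Weibull distribution used for the FSO channel is available in SciPy as `exponweib`, which allows a quick sketch of the outage computation. The shape and scale values below are illustrative, not fitted link parameters:

```python
import numpy as np
from scipy import stats

# Exponentiated Weibull CDF for the FSO irradiance:
# F(I) = (1 - exp(-(I/eta)^beta))^alpha. Parameter values are illustrative.
alpha, beta_, eta = 2.5, 1.8, 1.0
ew = stats.exponweib(alpha, beta_, scale=eta)

I = np.linspace(0.01, 3.0, 300)
pdf, cdf = ew.pdf(I), ew.cdf(I)

# the outage probability at an irradiance (SNR) threshold is just the CDF there
I_th = 0.3
print(ew.cdf(I_th))

# sanity check against the closed form
closed_form = (1.0 - np.exp(-(I_th / eta) ** beta_)) ** alpha
print(np.isclose(ew.cdf(I_th), closed_form))
```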

  17. Interval Estimation of Seismic Hazard Parameters

    NASA Astrophysics Data System (ADS)

    Orlecka-Sikora, Beata; Lasocki, Stanislaw

    2017-03-01

The paper considers Poisson temporal occurrence of earthquakes and presents a way to integrate uncertainties of the estimates of mean activity rate and magnitude cumulative distribution function in the interval estimation of the most widely used seismic hazard functions, such as the exceedance probability and the mean return period. The proposed algorithm can be used either when the Gutenberg-Richter model of magnitude distribution is accepted or when the nonparametric estimation is in use. When the Gutenberg-Richter model of magnitude distribution is used, the interval estimation of its parameters is based on the asymptotic normality of the maximum likelihood estimator. When the nonparametric kernel estimation of magnitude distribution is used, we propose the iterated bias corrected and accelerated method for interval estimation based on the smoothed bootstrap and second-order bootstrap samples. The changes resulting from the integrated approach to the interval estimation of the seismic hazard functions, with respect to the approach that neglects the uncertainty of the mean activity rate estimates, have been studied using Monte Carlo simulations and two real dataset examples. The results indicate that the uncertainty of the mean activity rate significantly affects the interval estimates of the hazard functions only when the product of the activity rate and the time period for which the hazard is estimated is no more than 5.0. When this product becomes greater than 5.0, the impact of the uncertainty of the cumulative distribution function of magnitude dominates the impact of the uncertainty of the mean activity rate in the aggregated uncertainty of the hazard functions; accordingly, the interval estimates with and without inclusion of the uncertainty of the mean activity rate converge. The presented algorithm is generic and can be applied also to capture the propagation of uncertainty of estimates that are parameters of a multiparameter function onto this function.
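The two hazard functions themselves are straightforward under the Poisson assumption. The sketch below evaluates them for an unbounded Gutenberg-Richter magnitude CDF with illustrative parameter values; the paper's actual contribution, the interval estimation around such point values, is not reproduced here:

```python
import numpy as np

def gr_cdf(m, m_min, b):
    """Unbounded Gutenberg-Richter CDF for magnitudes above m_min."""
    return 1.0 - 10.0 ** (-b * (m - m_min))

def exceedance_probability(m, t, rate, m_min, b):
    """P(at least one event with magnitude >= m within time t), Poisson occurrence."""
    rate_above = rate * (1.0 - gr_cdf(m, m_min, b))
    return 1.0 - np.exp(-rate_above * t)

def mean_return_period(m, rate, m_min, b):
    """Mean time between events with magnitude >= m."""
    return 1.0 / (rate * (1.0 - gr_cdf(m, m_min, b)))

# illustrative values: 4 events/year above magnitude 2.0, b-value of 1.0
rate, m_min, b = 4.0, 2.0, 1.0
print(exceedance_probability(5.0, 50.0, rate, m_min, b))  # 50-year exceedance of M5
print(mean_return_period(5.0, rate, m_min, b))            # 250-year return period
```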

  18. TESTING FOR DIFFERENCES BETWEEN CUMULATIVE DISTRIBUTION FUNCTIONS FROM COMPLEX ENVIRONMENTAL SAMPLING SURVEYS

    EPA Science Inventory

    The U.S. Environmental Protection Agency's Environmental Monitoring and Assessment Program (EMAP) employs the cumulative distribution function (cdf) to measure the status of quantitative variables for resources of interest. The ability to compare cdf's for a resource from, say,...

  19. THE EPANET PROGRAMMER'S TOOLKIT FOR ANALYSIS OF WATER DISTRIBUTION SYSTEMS

    EPA Science Inventory

The EPANET Programmer's Toolkit is a collection of functions that helps simplify computer programming of water distribution network analyses. The functions can be used to read in a pipe network description file, modify selected component properties, run multiple hydraulic and wa...

  20. Closed-form solution for the Wigner phase-space distribution function for diffuse reflection and small-angle scattering in a random medium.

    PubMed

    Yura, H T; Thrane, L; Andersen, P E

    2000-12-01

Within the paraxial approximation, a closed-form solution for the Wigner phase-space distribution function is derived for diffuse reflection and small-angle scattering in a random medium. This solution is based on the extended Huygens-Fresnel principle for the optical field, which is widely used in studies of wave propagation through random media. The results are general in that they apply to both an arbitrary small-angle volume scattering function and arbitrary (real) ABCD optical systems. Furthermore, they are valid in both the single- and multiple-scattering regimes. Some general features of the Wigner phase-space distribution function are discussed, and analytic results are obtained for various types of scattering functions in the asymptotic limit s ≫ 1, where s is the optical depth. In particular, explicit results are presented for optical coherence tomography (OCT) systems. On this basis, a novel way of creating OCT images based on measurements of the momentum width of the Wigner phase-space distribution is suggested, and the advantage over conventional OCT images is discussed. Because all previously published studies regarding the Wigner function are carried out in the transmission geometry, it is important to note that the extended Huygens-Fresnel principle and the ABCD matrix formalism may be used successfully to describe this geometry (within the paraxial approximation). Therefore, for completeness, we present in an appendix the general closed-form solution for the Wigner phase-space distribution function in ABCD paraxial optical systems for direct propagation through random media, and in a second appendix absorption effects are included.

  1. Disappearance of Anisotropic Intermittency in Large-amplitude MHD Turbulence and Its Comparison with Small-amplitude MHD Turbulence

    NASA Astrophysics Data System (ADS)

    Yang, Liping; Zhang, Lei; He, Jiansen; Tu, Chuanyi; Li, Shengtai; Wang, Xin; Wang, Linghua

    2018-03-01

    Multi-order structure functions in the solar wind are reported to display a monofractal scaling when sampled parallel to the local magnetic field and a multifractal scaling when measured perpendicularly. Whether and to what extent will the scaling anisotropy be weakened by the enhancement of turbulence amplitude relative to the background magnetic strength? In this study, based on two runs of the magnetohydrodynamic (MHD) turbulence simulation with different relative levels of turbulence amplitude, we investigate and compare the scaling of multi-order magnetic structure functions and magnetic probability distribution functions (PDFs) as well as their dependence on the direction of the local field. The numerical results show that for the case of large-amplitude MHD turbulence, the multi-order structure functions display a multifractal scaling at all angles to the local magnetic field, with PDFs deviating significantly from the Gaussian distribution and a flatness larger than 3 at all angles. In contrast, for the case of small-amplitude MHD turbulence, the multi-order structure functions and PDFs have different features in the quasi-parallel and quasi-perpendicular directions: a monofractal scaling and Gaussian-like distribution in the former, and a conversion of a monofractal scaling and Gaussian-like distribution into a multifractal scaling and non-Gaussian tail distribution in the latter. These results hint that when intermittencies are abundant and intense, the multifractal scaling in the structure functions can appear even if it is in the quasi-parallel direction; otherwise, the monofractal scaling in the structure functions remains even if it is in the quasi-perpendicular direction.
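The multi-order structure function analysis can be sketched on a synthetic monofractal signal: a Brownian random walk has exactly linear scaling exponents, zeta(p) = p/2, which the estimate below recovers. The MHD simulation data themselves are not reproduced here:

```python
import numpy as np

def structure_functions(signal, lags, orders):
    """S_p(l) = <|f(x + l) - f(x)|^p> for each lag l and order p."""
    return np.array([[np.mean(np.abs(signal[l:] - signal[:-l]) ** p)
                      for l in lags] for p in orders])

# Monofractal test signal: a Brownian random walk, for which S_p ~ l^(p/2)
rng = np.random.default_rng(7)
walk = np.cumsum(rng.standard_normal(2 ** 18))

lags = np.array([4, 8, 16, 32, 64, 128])
orders = [1, 2, 3, 4]
S = structure_functions(walk, lags, orders)

# zeta(p) is the slope of log S_p versus log l; linear in p means monofractal,
# while a concave zeta(p) curve would signal multifractality/intermittency
zetas = [np.polyfit(np.log(lags), np.log(S[i]), 1)[0] for i in range(len(orders))]
print(zetas)   # close to [0.5, 1.0, 1.5, 2.0]
```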

  2. Analysis of current distribution in a large superconductor

    NASA Astrophysics Data System (ADS)

    Hamajima, Takataro; Alamgir, A. K. M.; Harada, Naoyuki; Tsuda, Makoto; Ono, Michitaka; Takano, Hirohisa

    An imbalanced current distribution which is often observed in cable-in-conduit (CIC) superconductors composed of multistaged, triplet type sub-cables, can deteriorate the performance of the coils. It is, hence very important to analyze the current distribution in a superconductor and find out methods to realize a homogeneous current distribution in the conductor. We apply magnetic flux conservation in a loop contoured by electric center lines of filaments in two arbitrary strands located on adjacent layers in a coaxial multilayer superconductor, and thereby analyze the current distribution in the conductor. A generalized formula governing the current distribution can be described as explicit functions of the superconductor construction parameters, such as twist pitch, twist direction and radius of individual layer. We numerically analyze a homogeneous current distribution as a function of the twist pitches of layers, using the fundamental formula. Moreover, it is demonstrated that we can control current distribution in the coaxial superconductor.

  3. Conservative algorithms for non-Maxwellian plasma kinetics

    DOE PAGES

    Le, Hai P.; Cambier, Jean -Luc

    2017-12-08

Here, we present a numerical model and a set of conservative algorithms for non-Maxwellian plasma kinetics with inelastic collisions. These algorithms self-consistently solve for the time evolution of an isotropic electron energy distribution function interacting with an atomic state distribution function of an arbitrary number of levels through collisional excitation, deexcitation, as well as ionization and recombination. Electron-electron collisions, responsible for thermalization of the electron distribution, are also included in the model. The proposed algorithms guarantee mass/charge and energy conservation in a single step, and are applied to the case of non-uniform gridding of the energy axis in the phase space of the electron distribution function. Numerical test cases are shown to demonstrate the accuracy of the method and its conservation properties.
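A much-simplified sketch of the conservation idea, for a single excitation channel on a uniform electron energy grid where the threshold happens to be a multiple of the bin width (the paper's algorithms handle non-uniform grids and many channels; all rates and energies below are illustrative):

```python
import numpy as np

de = 1.0                                  # uniform bin width (eV)
energy = np.arange(0.0, 20.0, de)         # bin centers
f = np.exp(-energy / 4.0)
f /= f.sum()                              # electron energy distribution (fractions)
levels = np.array([1.0, 0.0])             # atomic populations: ground, excited
E_exc = 5.0                               # excitation threshold, a multiple of de
shift = int(round(E_exc / de))

rate_dt = 0.01                            # (rate coefficient * density * dt), illustrative
flux = rate_dt * levels[0] * f            # excitation events per electron bin this step
flux[energy < E_exc] = 0.0                # sub-threshold electrons cannot excite

f_new = f - flux                          # electrons removed at their old energy...
f_new[:len(f) - shift] += flux[shift:]    # ...reappear exactly E_exc lower
n_events = flux.sum()                     # one atom promoted per scattered electron
levels_new = levels + np.array([-n_events, n_events])

def kinetic(g):
    return (g * energy).sum()

# electron number and total (kinetic + internal) energy are conserved exactly
total_before = kinetic(f) + E_exc * levels[1]
total_after = kinetic(f_new) + E_exc * levels_new[1]
print(f_new.sum(), total_before, total_after)
```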

  4. Environmental niche models for riverine desert fishes and their similarity according to phylogeny and functionality

    USGS Publications Warehouse

    Whitney, James E.; Whittier, Joanna B.; Paukert, Craig P.

    2017-01-01

    Environmental filtering and competitive exclusion are hypotheses frequently invoked in explaining species' environmental niches (i.e., geographic distributions). A key assumption in both hypotheses is that the functional niche (i.e., species traits) governs the environmental niche, but few studies have rigorously evaluated this assumption. Furthermore, phylogeny could be associated with these hypotheses if it is predictive of functional niche similarity via phylogenetic signal or convergent evolution, or of environmental niche similarity through phylogenetic attraction or repulsion. The objectives of this study were to investigate relationships between environmental niches, functional niches, and phylogenies of fishes of the Upper (UCRB) and Lower (LCRB) Colorado River Basins of southwestern North America. We predicted that functionally similar species would have similar environmental niches (i.e., environmental filtering) and that closely related species would be functionally similar (i.e., phylogenetic signal) and possess similar environmental niches (i.e., phylogenetic attraction). Environmental niches were quantified using environmental niche modeling, and functional similarity was determined using functional trait data. Nonnatives in the UCRB provided the only support for environmental filtering, which resulted from several warmwater nonnatives having dam number as a common predictor of their distributions, whereas several cool- and coldwater nonnatives shared mean annual air temperature as an important distributional predictor. Phylogenetic signal was supported for both natives and nonnatives in both basins. Lastly, phylogenetic attraction was only supported for native fishes in the LCRB and for nonnative fishes in the UCRB. Our results indicated that functional similarity was heavily influenced by evolutionary history, but that phylogenetic relationships and functional traits may not always predict the environmental distribution of species. 
However, the similarity of environmental niches among warmwater centrarchids, ictalurids, fundulids, and poeciliids in the UCRB indicated that dam removals could influence the distribution of these nonnatives simultaneously, thus providing greater conservation benefits. In contrast, this management strategy would have more limited effects on nonnative salmonids, catostomids, and percids with colder temperature preferences, thus necessitating other management strategies to control these species.

  5. New multidimensional functional diversity indices for a multifaceted framework in functional ecology.

    PubMed

    Villéger, Sébastien; Mason, Norman W H; Mouillot, David

    2008-08-01

    Functional diversity is increasingly identified as an important driver of ecosystem functioning. Various indices have been proposed to measure the functional diversity of a community, but there is still no consensus on which are most suitable. Indeed, none of the existing indices meets all the criteria required for general use. The main criteria are that they must be designed to deal with several traits, take into account abundances, and measure all the facets of functional diversity. Here we propose three indices to quantify each facet of functional diversity for a community with species distributed in a multidimensional functional space: functional richness (volume of the functional space occupied by the community), functional evenness (regularity of the distribution of abundance in this volume), and functional divergence (divergence in the distribution of abundance in this volume). Functional richness is estimated using the existing convex hull volume index. The new functional evenness index is based on the minimum spanning tree which links all the species in the multidimensional functional space. Then this new index quantifies the regularity with which species abundances are distributed along the spanning tree. Functional divergence is measured using a novel index which quantifies how species diverge in their distances (weighted by their abundance) from the center of gravity in the functional space. We show that none of the indices meets all the criteria required for a functional diversity index, but instead we show that the set of three complementary indices meets these criteria. Through simulations of artificial data sets, we demonstrate that functional divergence and functional evenness are independent of species richness and that the three functional diversity indices are independent of each other. 
Overall, our study suggests that decomposition of functional diversity into its three primary components provides a meaningful framework for its quantification and for the classification of existing functional diversity indices. This decomposition has the potential to shed light on the role of biodiversity on ecosystem functioning and on the influence of biotic and abiotic filters on the structure of species communities. Finally, we propose a general framework for applying these three functional diversity indices.
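Two of the three building blocks are easy to sketch with SciPy: the convex hull volume behind functional richness, and the minimum spanning tree behind functional evenness. The evenness score below is a simplified regularity measure, not the published abundance-weighted FEve index, and the trait data are synthetic:

```python
import numpy as np
from scipy.spatial import ConvexHull
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(3)
traits = rng.random((12, 2))              # 12 species in a 2D standardized trait space

# functional richness: volume (area, in 2D) of the occupied trait space
fric = ConvexHull(traits).volume

# minimum spanning tree linking all species in trait space
dist = squareform(pdist(traits))
mst = minimum_spanning_tree(dist).toarray()
branches = mst[mst > 0.0]                 # the S - 1 branch lengths

# simplified evenness: 1 when all MST links are equal in length
# (the published FEve additionally weights links by species abundances)
evenness = branches.min() / branches.max()

print(fric, evenness)
```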

  6. Performance of different theories for the angular distribution of bremsstrahlung produced by keV electrons incident upon a target

    NASA Astrophysics Data System (ADS)

    Omar, Artur; Andreo, Pedro; Poludniowski, Gavin

    2018-07-01

    Different theories of the intrinsic bremsstrahlung angular distribution (i.e., the shape function) have been evaluated using Monte Carlo calculations for various target materials and incident electron energies between 20 keV and 300 keV. The shape functions considered were the plane-wave first Born approximation cross sections (i) 2BS [high-energy result, screened nucleus], (ii) 2BN [general result, bare nucleus], (iii) KM [2BS modified to emulate 2BN], and (iv) SIM [leading term of 2BN]; (v) expression based on partial-waves expansion, KQP; and (vi) a uniform spherical distribution, UNI [a common approximation in certain analytical models]. The shape function was found to have an important impact on the bremsstrahlung emerging from thin foil targets in which the incident electrons undergo few elastic scatterings before exiting the target material. For thick transmission and reflection targets the type of shape function had less importance, as the intrinsic bremsstrahlung angular distribution was masked by the diffuse directional distribution of multiple scattered electrons. Predictions made using the 2BN and KQP theories were generally in good agreement, suggesting that the effect of screening and the constraints of the Born approximation on the intrinsic angular distribution may be acceptable. The KM and SIM shape functions deviated notably from KQP for low electron energies (< 50 keV), while 2BS and UNI performed poorly over most of the energy range considered; the 2BS shape function was found to be too forward-focused in emission, while UNI was not forward-focused enough. The results obtained emphasize the importance of the intrinsic bremsstrahlung angular distribution for theoretical predictions of x-ray emission, which is relevant in various applied disciplines, including x-ray crystallography, electron-probe microanalysis, security and industrial inspection, medical imaging, as well as low- and medium (orthovoltage) energy radiotherapy.

  7. Maximum entropy approach to H -theory: Statistical mechanics of hierarchical systems

    NASA Astrophysics Data System (ADS)

    Vasconcelos, Giovani L.; Salazar, Domingos S. P.; Macêdo, A. M. S.

    2018-02-01

    A formalism, called H-theory, is applied to the problem of statistical equilibrium of a hierarchical complex system with multiple time and length scales. In this approach, the system is formally treated as being composed of a small subsystem—representing the region where the measurements are made—in contact with a set of "nested heat reservoirs" corresponding to the hierarchical structure of the system, where the temperatures of the reservoirs are allowed to fluctuate owing to the complex interactions between degrees of freedom at different scales. The probability distribution function (pdf) of the temperature of the reservoir at a given scale, conditioned on the temperature of the reservoir at the next largest scale in the hierarchy, is determined from a maximum entropy principle subject to appropriate constraints that describe the thermal equilibrium properties of the system. The marginal temperature distribution of the innermost reservoir is obtained by integrating over the conditional distributions of all larger scales, and the resulting pdf is written in analytical form in terms of certain special transcendental functions, known as the Fox H functions. The distribution of states of the small subsystem is then computed by averaging the quasiequilibrium Boltzmann distribution over the temperature of the innermost reservoir. This distribution can also be written in terms of H functions. The general family of distributions reported here recovers, as particular cases, the stationary distributions recently obtained by Macêdo et al. [Phys. Rev. E 95, 032315 (2017), 10.1103/PhysRevE.95.032315] from a stochastic dynamical approach to the problem.

  8. Distributed health data networks: a practical and preferred approach to multi-institutional evaluations of comparative effectiveness, safety, and quality of care.

    PubMed

    Brown, Jeffrey S; Holmes, John H; Shah, Kiran; Hall, Ken; Lazarus, Ross; Platt, Richard

    2010-06-01

Comparative effectiveness research, medical product safety evaluation, and quality measurement will require the ability to use electronic health data held by multiple organizations. There is no consensus about whether to create regional or national combined (eg, "all payer") databases for these purposes, or distributed data networks that leave most Protected Health Information and proprietary data in the possession of the original data holders. Demonstrate functions of a distributed research network that supports research needs and also addresses data holders' concerns about participation. Key design functions included strong local control of data uses and a centralized web-based querying interface. We implemented a pilot distributed research network and evaluated the design considerations, utility for research, and the acceptability to data holders of methods for menu-driven querying. We developed and tested a central, web-based interface with supporting network software. Specific functions assessed include query formation and distribution, query execution and review, and aggregation of results. This pilot successfully evaluated temporal trends in medication use and diagnoses at 5 separate sites, demonstrating some of the possibilities of using a distributed research network. The pilot demonstrated the potential utility of the design, which addressed the major concerns of both users and data holders. No serious obstacles were identified that would prevent development of a fully functional, scalable network. Distributed networks are capable of addressing nearly all anticipated uses of routinely collected electronic healthcare data. Distributed networks would obviate the need for centralized databases, thus avoiding numerous obstacles.

  9. Maximum entropy approach to H-theory: Statistical mechanics of hierarchical systems.

    PubMed

    Vasconcelos, Giovani L; Salazar, Domingos S P; Macêdo, A M S

    2018-02-01

    A formalism, called H-theory, is applied to the problem of statistical equilibrium of a hierarchical complex system with multiple time and length scales. In this approach, the system is formally treated as being composed of a small subsystem-representing the region where the measurements are made-in contact with a set of "nested heat reservoirs" corresponding to the hierarchical structure of the system, where the temperatures of the reservoirs are allowed to fluctuate owing to the complex interactions between degrees of freedom at different scales. The probability distribution function (pdf) of the temperature of the reservoir at a given scale, conditioned on the temperature of the reservoir at the next largest scale in the hierarchy, is determined from a maximum entropy principle subject to appropriate constraints that describe the thermal equilibrium properties of the system. The marginal temperature distribution of the innermost reservoir is obtained by integrating over the conditional distributions of all larger scales, and the resulting pdf is written in analytical form in terms of certain special transcendental functions, known as the Fox H functions. The distribution of states of the small subsystem is then computed by averaging the quasiequilibrium Boltzmann distribution over the temperature of the innermost reservoir. This distribution can also be written in terms of H functions. The general family of distributions reported here recovers, as particular cases, the stationary distributions recently obtained by Macêdo et al. [Phys. Rev. E 95, 032315 (2017)10.1103/PhysRevE.95.032315] from a stochastic dynamical approach to the problem.

  10. The beta Burr type X distribution properties with application.

    PubMed

    Merovci, Faton; Khaleel, Mundher Abdullah; Ibrahim, Noor Akma; Shitan, Mahendran

    2016-01-01

    We develop a new continuous distribution called the beta-Burr type X distribution that extends the Burr type X distribution. We provide a comprehensive mathematical treatment of this distribution. Furthermore, various structural properties of the new distribution are derived, including the moment generating function and the rth moment, thus generalizing some results in the literature. We also obtain expressions for the density, moment generating function, and rth moment of the order statistics. We consider maximum likelihood estimation of the parameters. Additionally, asymptotic confidence intervals for the parameters are derived from the Fisher information matrix. Finally, a simulation study is carried out under varying sample sizes to assess the performance of this model. Illustration with a real dataset indicates that this new distribution can serve as a good alternative for modeling positive real data in many areas.
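    The beta-G construction behind this family can be sketched directly: the beta-Burr X CDF is the regularized incomplete beta function evaluated at the Burr X CDF. A minimal sketch, assuming the standard two-parameter Burr X form G(x) = (1 − exp(−(λx)²))^θ; the parameter names are illustrative:

```python
import numpy as np
from scipy.special import betainc

def burr_x_cdf(x, theta, lam):
    """Burr type X (generalized Rayleigh) CDF for x >= 0."""
    return (1.0 - np.exp(-(lam * x) ** 2)) ** theta

def beta_burr_x_cdf(x, a, b, theta, lam):
    """Beta-G construction: F(x) = I_{G(x)}(a, b), with I the regularized
    incomplete beta function and G the Burr X CDF."""
    return betainc(a, b, burr_x_cdf(x, theta, lam))

print(beta_burr_x_cdf(1.0, 2.0, 3.0, 1.5, 1.0))
```

    Setting a = b = 1 recovers the plain Burr X distribution, since I_x(1, 1) = x.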

  11. Stochastic analysis of particle movement over a dune bed

    USGS Publications Warehouse

    Lee, Baum K.; Jobson, Harvey E.

    1977-01-01

    Stochastic models are available that can be used to predict the transport and dispersion of bed-material sediment particles in an alluvial channel. These models are based on the proposition that the movement of a single bed-material sediment particle consists of a series of steps of random length separated by rest periods of random duration and, therefore, application of the models requires a knowledge of the probability distributions of the step lengths, the rest periods, the elevation of particle deposition, and the elevation of particle erosion. The procedure was tested by determining distributions from bed profiles formed in a large laboratory flume with a coarse sand as the bed material. The elevation of particle deposition and the elevation of particle erosion can be considered to be identically distributed, and their distribution can be described by either a 'truncated Gaussian' or a 'triangular' density function. The conditional probability distribution of the rest period given the elevation of particle deposition closely followed the two-parameter gamma distribution. The conditional probability distribution of the step length given the elevation of particle erosion and the elevation of particle deposition also closely followed the two-parameter gamma density function. For a given flow, the scale and shape parameters describing the gamma probability distributions can be expressed as functions of bed elevation. (Woodard-USGS)
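    The step/rest model described above is easy to simulate: alternate gamma-distributed step lengths with gamma-distributed rest periods. The shape and scale values below are illustrative, not the flume-calibrated parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_particle(n_steps, step_shape, step_scale, rest_shape, rest_scale):
    """Alternate random step lengths (m) and rest periods (s), both gamma
    distributed as in the flume findings; parameter values are illustrative."""
    steps = rng.gamma(step_shape, step_scale, n_steps)
    rests = rng.gamma(rest_shape, rest_scale, n_steps)
    return steps.sum(), rests.sum()  # total distance, total rest time

dist, t_rest = simulate_particle(10_000, 2.0, 0.05, 1.5, 30.0)
virtual_velocity = dist / t_rest  # crude transport rate, ignoring step durations
print(virtual_velocity)
```

    Because rest periods dominate the particle's history, the effective transport velocity is orders of magnitude below the flow velocity.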

  12. Wigner distribution function of Hermite-cosine-Gaussian beams through an apertured optical system.

    PubMed

    Sun, Dong; Zhao, Daomu

    2005-08-01

    By introducing the hard-aperture function into a finite sum of complex Gaussian functions, the approximate analytical expressions of the Wigner distribution function for Hermite-cosine-Gaussian beams passing through an apertured paraxial ABCD optical system are obtained. The analytical results are compared with the numerically integrated ones, and the absolute errors are also given. It is shown that the analytical results are proper and that the calculation speed for them is much faster than for the numerical results.

  13. Time-Frequency Based Instantaneous Frequency Estimation of Sparse Signals from an Incomplete Set of Samples

    DTIC Science & Technology

    2014-06-17

    [Figure residue: panels showing the Wigner distribution, the L-Wigner distribution, and the corresponding auto-correlation functions.] ...bilinear or higher-order autocorrelation functions will increase the number of missing samples, the analysis shows that accurate instantaneous frequency estimation can be achieved even if we deal with only a few samples, as long as the auto-correlation function is properly chosen to coincide with

  14. Measurements of the u valence quark distribution function in the proton and u quark fragmentation functions

    NASA Astrophysics Data System (ADS)

    Arneodo, M.; Arvidson, A.; Aubert, J. J.; Badelek, B.; Beaufays, J.; Bee, C. P.; Benchouk, C.; Berghoff, G.; Bird, I. G.; Blum, D.; Böhm, E.; De Bouard, X.; Brasse, F. W.; Braun, H.; Broll, C.; Brown, S. C.; Brück, H.; Calen, H.; Chima, J. S.; Ciborowski, J.; Clifft, R.; Coignet, G.; Combley, F.; Coughlan, J.; D'Agostini, G.; Dahlgren, S.; Dengler, F.; Derado, I.; Dreyer, T.; Drees, J.; Düren, M.; Eckardt, V.; Edwards, A.; Edwards, M.; Ernst, T.; Eszes, G.; Favier, J.; Ferrero, M. I.; Figiel, J.; Flauger, W.; Foster, J.; Gabathuler, E.; Gajewski, J.; Gamet, R.; Gayler, J.; Geddes, N.; Grafström, P.; Grard, F.; Haas, J.; Hagberg, E.; Hasert, F. J.; Hayman, P.; Heusse, P.; Jaffre, M.; Jacholkowska, A.; Janata, F.; Jancso, G.; Johnson, A. S.; Kabuss, E. M.; Kellner, G.; Korbel, V.; Krüger, A.; Krüger, J.; Kullander, S.; Landgraf, U.; Lanske, D.; Loken, J.; Long, K.; Maire, M.; Malecki, P.; Manz, A.; Maselli, S.; Mohr, W.; Montanet, F.; Montgomery, H. E.; Nagy, E.; Nassalski, J.; Norton, P. R.; Oakham, F. G.; Osborne, A. M.; Pascaud, C.; Pawlik, B.; Payre, P.; Peroni, C.; Peschel, H.; Pessard, H.; Pettingale, J.; Pietrzyk, B.; Poensgen, B.; Pötsch, M.; Renton, P.; Ribarics, P.; Rith, K.; Rondio, E.; Sandacz, A.; Scheer, M.; Schlagböhmer, A.; Schiemann, H.; Schmitz, N.; Schneegans, M.; Scholz, M.; Schouten, M.; Schröder, T.; Schultze, K.; Sloan, T.; Stier, H. E.; Studt, M.; Taylor, G. N.; Thenard, J. M.; Thompson, J. C.; De la Torre, A.; Toth, J.; Urban, L.; Urban, L.; Wallucks, W.; Whalley, M.; Wheeler, S.; Williams, W. S. C.; Wimpenny, S. J.; Windmolders, R.; Wolf, G.; European Muon Collaboration

    1989-07-01

    A new determination of the u valence quark distribution function in the proton is obtained from the analysis of identified charged pions, kaons, protons and antiprotons produced in muon-proton and muon-deuteron scattering. The comparison with results obtained in inclusive deep inelastic lepton-nucleon scattering provides a further test of the quark-parton model. The u quark fragmentation functions into positive and negative pions, kaons, protons and antiprotons are also measured.

  15. SimBOX: a scalable architecture for aggregate distributed command and control of spaceport and service constellation

    NASA Astrophysics Data System (ADS)

    Prasad, Guru; Jayaram, Sanjay; Ward, Jami; Gupta, Pankaj

    2004-08-01

    In this paper, Aximetric proposes a decentralized Command and Control (C2) architecture for distributed control of a cluster of on-board health monitoring and software-enabled control systems, called SimBOX, that will use some of the real-time infrastructure (RTI) functionality from the current military real-time simulation architecture. The uniqueness of the approach is to provide a "plug and play environment" for various system components that run at various data rates (Hz) and the ability to replicate or transfer C2 operations to various subsystems in a scalable manner. This is made possible by a communication bus called the "Distributed Shared Data Bus" and a distributed computing environment used to scale the control needs by providing a self-contained computing, data logging, and control function module that can be rapidly reconfigured to perform different functions. This kind of software-enabled control is very much needed to meet the needs of future aerospace command and control functions.

  16. SimBox: a simulation-based scalable architecture for distributed command and control of spaceport and service constellations

    NASA Astrophysics Data System (ADS)

    Prasad, Guru; Jayaram, Sanjay; Ward, Jami; Gupta, Pankaj

    2004-09-01

    In this paper, Aximetric proposes a decentralized Command and Control (C2) architecture for distributed control of a cluster of on-board health monitoring and software-enabled control systems, called SimBOX, that will use some of the real-time infrastructure (RTI) functionality from the current military real-time simulation architecture. The uniqueness of the approach is to provide a "plug and play environment" for various system components that run at various data rates (Hz) and the ability to replicate or transfer C2 operations to various subsystems in a scalable manner. This is made possible by a communication bus called the "Distributed Shared Data Bus" and a distributed computing environment used to scale the control needs by providing a self-contained computing, data logging, and control function module that can be rapidly reconfigured to perform different functions. This kind of software-enabled control is very much needed to meet the needs of future aerospace command and control functions.

  17. Generalized quantum Fokker-Planck, diffusion, and Smoluchowski equations with true probability distribution functions.

    PubMed

    Banik, Suman Kumar; Bag, Bidhan Chandra; Ray, Deb Shankar

    2002-05-01

    Traditionally, quantum Brownian motion is described by Fokker-Planck or diffusion equations in terms of quasiprobability distribution functions, e.g., Wigner functions. These often become singular or negative in the full quantum regime. In this paper a simple approach to non-Markovian theory of quantum Brownian motion using true probability distribution functions is presented. Based on an initial coherent state representation of the bath oscillators and an equilibrium canonical distribution of the quantum mechanical mean values of their coordinates and momenta, we derive a generalized quantum Langevin equation in c numbers and show that the latter is amenable to a theoretical analysis in terms of the classical theory of non-Markovian dynamics. The corresponding Fokker-Planck, diffusion, and Smoluchowski equations are the exact quantum analogs of their classical counterparts. The present work is independent of path integral techniques. The theory as developed here is a natural extension of its classical version and is valid for arbitrary temperature and friction (the Smoluchowski equation being considered in the overdamped limit).

  18. A theory of local and global processes which affect solar wind electrons. 1: The origin of typical 1 AU velocity distribution functions: Steady state theory

    NASA Technical Reports Server (NTRS)

    Scudder, J. D.

    1978-01-01

    A detailed first principle kinetic theory for electrons which is neither a classical fluid treatment nor an exospheric calculation is presented. This theory illustrates the global and local properties of the solar wind expansion that shape the observed features of the electron distribution function, such as its bifurcation, its skewness and the differential temperatures of the thermal and suprathermal subpopulations. Coulomb collisions are substantial mediators of the interplanetary electron velocity distribution function and they place a zone for a bifurcation of the electron distribution function deep in the corona. The local cause and effect precept which permeates the physics of denser media is modified for electrons in the solar wind. The local form of transport laws and equations of state which apply to collision dominated plasmas are replaced with global relations that explicitly depend on the relative position of the observer to the boundaries of the system.

  19. Extended bidirectional reflectance distribution function for polarized light scattering from subsurface defects under a smooth surface.

    PubMed

    Shen, Jian; Deng, Degang; Kong, Weijin; Liu, Shijie; Shen, Zicai; Wei, Chaoyang; He, Hongbo; Shao, Jianda; Fan, Zhengxiu

    2006-11-01

    By introducing the scattering probability of a subsurface defect (SSD) and statistical distribution functions of the SSD radius, refractive index, and position, we derive an extended bidirectional reflectance distribution function (BRDF) from the Jones scattering matrix. This function is applicable to calculations for comparison with measurements of polarized light scattering from an SSD. A numerical calculation of the extended BRDF for the case of p-polarized incident light was performed by means of the Monte Carlo method. Our numerical results indicate that the extended BRDF strongly depends on the light incidence angle, the light scattering angle, and the out-of-plane azimuth angle. We observe a 180° symmetry with respect to the azimuth angle. We further investigate the influence of the SSD density, the substrate refractive index, and the statistical distributions of the SSD radius and refractive index on the extended BRDF. For transparent substrates, we also find a dependence of the extended BRDF on the SSD positions.

  20. Analytic modeling of aerosol size distributions

    NASA Technical Reports Server (NTRS)

    Deepak, A.; Box, G. P.

    1979-01-01

    Mathematical functions commonly used for representing aerosol size distributions are studied parametrically. Methods for obtaining best fit estimates of the parameters are described. A catalog of graphical plots depicting the parametric behavior of the functions is presented along with procedures for obtaining analytical representations of size distribution data by visual matching of the data with one of the plots. Examples of fitting the same data with equal accuracy by more than one analytic model are also given.
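    Obtaining best-fit parameter estimates for one commonly used analytic model, the lognormal size distribution, can be sketched with a least-squares fit. The functional form and parameter names below are a standard choice (total number N0, median radius r_m, geometric standard deviation σ_g), not necessarily the catalog's exact parameterization:

```python
import numpy as np
from scipy.optimize import curve_fit

def lognormal_size_dist(r, N0, r_m, sigma_g):
    """Lognormal aerosol size distribution dN/dln r; parameters illustrative."""
    return (N0 / (np.sqrt(2 * np.pi) * np.log(sigma_g))
            * np.exp(-0.5 * (np.log(r / r_m) / np.log(sigma_g)) ** 2))

# Synthetic "measured" data generated from known parameters.
r = np.logspace(-2, 1, 40)          # radius grid, micrometres
true = (1000.0, 0.1, 2.0)
y = lognormal_size_dist(r, *true)

popt, _ = curve_fit(lognormal_size_dist, r, y, p0=(500.0, 0.05, 1.5))
print(np.round(popt, 3))
```

    With noiseless data the fit recovers the generating parameters; with real size-distribution data the same call yields the best-fit estimates discussed in the abstract.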

  1. Determination of Ionospheric Electron Density Profiles from Satellite UV (Ultraviolet) Emission Measurements, Fiscal Year 1984.

    DTIC Science & Technology

    1985-04-26

    distribution function. It is from the calculated distribution function that the photoelectron flux can be derived. The VUV daytime emissions that we are... [report documentation page residue removed; distribution unlimited] ...EDP for systems users. This report considers the following ionospheric subregions: (a) the daytime mid-latitude ionosphere from 90 to 1000 km, (b

  2. Electrostatic field and charge distribution in small charged dielectric droplets

    NASA Astrophysics Data System (ADS)

    Storozhev, V. B.

    2004-08-01

    The charge distribution in small dielectric droplets is calculated on the basis of a continuum medium approximation. Charged spherical liquid droplets of methanol in the nanometer size range are considered. The problem is solved in the following way. We find the free energy of an ion in the dielectric droplet, which is a function of the distribution of the other ions in the droplet. The probability of locating the ion in some element of volume in the droplet is a function of its free energy in that element of volume. The same approach can be applied to the other ions in the droplet. The obtained charge distribution differs considerably from a surface distribution. The curve of the charge distribution in the droplet as a function of radius has a maximum near the surface. The relative concentration of charges in the vicinity of the center of the droplet is not zero, and it is higher the smaller the total charge of the droplet. According to the estimates, the model is applicable if the droplet radius is larger than 10 nm.

  3. Parameter estimation of multivariate multiple regression model using bayesian with non-informative Jeffreys’ prior distribution

    NASA Astrophysics Data System (ADS)

    Saputro, D. R. S.; Amalia, F.; Widyaningsih, P.; Affan, R. C.

    2018-05-01

    The Bayesian method can be used to estimate the parameters of the multivariate multiple regression model. It involves two distributions, the prior and the posterior; the posterior distribution is influenced by the choice of prior. Jeffreys' prior is a kind of non-informative prior distribution, used when information about the parameters is not available. The non-informative Jeffreys' prior is combined with the sample information to produce the posterior distribution, which is then used to estimate the parameters. The purpose of this research is to estimate the parameters of the multivariate regression model using the Bayesian method with the non-informative Jeffreys' prior. Based on the results and discussion, the estimates of β and Σ are obtained from the expected values of the marginal posterior distributions, which are multivariate normal and inverse Wishart, respectively. However, calculating these expected values involves integrals that are difficult to evaluate analytically. Therefore, an approach is needed that generates random samples according to the posterior distribution of each parameter, using the Markov chain Monte Carlo (MCMC) Gibbs sampling algorithm.
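    The Gibbs sampling step described above can be sketched for the Jeffreys-prior posterior, whose full conditionals are matrix normal (for the coefficient matrix B) and inverse Wishart (for Σ). This is a minimal illustration on synthetic data under those standard conditionals, not the paper's implementation; all sizes and values are illustrative.

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(42)

# Synthetic data: n observations, k predictors, p responses (illustrative).
n, k, p = 200, 2, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
B_true = np.array([[1.0, -0.5], [2.0, 0.3]])
Sigma_true = np.array([[1.0, 0.3], [0.3, 0.5]])
Y = X @ B_true + rng.multivariate_normal(np.zeros(p), Sigma_true, size=n)

XtX_inv = np.linalg.inv(X.T @ X)
B_hat = XtX_inv @ X.T @ Y                 # OLS estimate = conditional mean of B
A = np.linalg.cholesky(XtX_inv)           # row-covariance factor for B

def gibbs(n_iter=2000, burn=500):
    """Gibbs sampler for (B, Sigma) under the Jeffreys prior |Sigma|^-(p+1)/2:
    Sigma | B ~ InvWishart(n, S) with S the residual cross-product, and
    B | Sigma matrix normal with mean B_hat."""
    B, draws = B_hat.copy(), []
    for it in range(n_iter):
        resid = Y - X @ B
        Sigma = invwishart.rvs(df=n, scale=resid.T @ resid, random_state=rng)
        L = np.linalg.cholesky(Sigma)
        B = B_hat + A @ rng.normal(size=(k, p)) @ L.T
        if it >= burn:
            draws.append((B, Sigma))
    return draws

draws = gibbs()
B_mean = np.mean([d[0] for d in draws], axis=0)
print(np.round(B_mean, 2))
```

    The posterior mean of B concentrates on the OLS estimate, as expected under a non-informative prior.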

  4. Effects of the reconnection electric field on crescent electron distribution functions in asymmetric guide field reconnection

    NASA Astrophysics Data System (ADS)

    Bessho, N.; Chen, L. J.; Hesse, M.; Wang, S.

    2017-12-01

    In asymmetric reconnection with a guide field in the Earth's magnetopause, electron motion in the electron diffusion region (EDR) is largely affected by the guide field, the Hall electric field, and the reconnection electric field. The electron motion in the EDR is neither simple gyration around the guide field nor simple meandering motion across the current sheet. The combined meandering motion and gyration has essential effects on particle acceleration by the in-plane Hall electric field (existing only in the magnetospheric side) and the out-of-plane reconnection electric field. We analyze electron motion and crescent-shaped electron distribution functions in the EDR in asymmetric guide field reconnection, and perform 2-D particle-in-cell (PIC) simulations to elucidate the effect of reconnection electric field on electron distribution functions. Recently, we have analytically expressed the acceleration effect due to the reconnection electric field on electron crescent distribution functions in asymmetric reconnection without a guide field (Bessho et al., Phys. Plasmas, 24, 072903, 2017). We extend the theory to asymmetric guide field reconnection, and predict the crescent bulge in distribution functions. Assuming 1D approximation of field variations in the EDR, we derive the time period of oscillatory electron motion (meandering + gyration) in the EDR. The time period is expressed as a hybrid of the meandering period and the gyro period. Due to the guide field, electrons not only oscillate along crescent-shaped trajectories in the velocity plane perpendicular to the antiparallel magnetic fields, but also move along parabolic trajectories in the velocity plane coplanar with magnetic field. The trajectory in the velocity space gradually shifts to the acceleration direction by the reconnection electric field as multiple bounces continue. 
Due to the guide field, electron distributions for meandering particles are bounded by two paraboloids (or hyperboloids) in the velocity space. We compare theory and PIC simulation results of the velocity shift of crescent distribution functions based on the derived time period of bounce motion in a guide field. Theoretical predictions are applied to electron distributions observed by MMS in magnetopause reconnection to estimate the reconnection electric field.
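    Electron trajectories that combine gyration about a guide field with acceleration by a reconnection-like electric field are commonly integrated with the standard Boris scheme. The sketch below uses illustrative uniform fields and normalized units, not the PIC setup of the study, and simply recovers the expected E×B drift of the guiding center:

```python
import numpy as np

def boris_push(v, E, B, qm, dt):
    """One Boris step: half electric kick, magnetic rotation, half kick."""
    v_minus = v + 0.5 * qm * E * dt
    t = 0.5 * qm * B * dt
    s = 2.0 * t / (1.0 + t @ t)
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)
    return v_plus + 0.5 * qm * E * dt

# Illustrative uniform fields: guide field along z, reconnection-like E along y.
qm = -1.0                       # normalized charge-to-mass ratio of an electron
B = np.array([0.0, 0.0, 1.0])
E = np.array([0.0, 0.1, 0.0])
v = np.array([1.0, 0.0, 0.0])
x = np.zeros(3)
dt = 0.05
for _ in range(2000):
    v = boris_push(v, E, B, qm, dt)
    x = x + v * dt
print(np.round(x[:2], 2))       # E x B drift displaces the guiding center in x
```

    Over 100 normalized time units the guiding center drifts by roughly E/B per unit time in x while the gyration stays bounded, the basic ingredient of the combined meandering-plus-gyration motion analyzed in the abstract.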

  5. Photoelectric dust levitation around airless bodies revised using realistic photoelectron velocity distributions

    NASA Astrophysics Data System (ADS)

    Senshu, H.; Kimura, H.; Yamamoto, T.; Wada, K.; Kobayashi, M.; Namiki, N.; Matsui, T.

    2015-10-01

    The velocity distribution function of photoelectrons from a surface exposed to solar UV radiation is fundamental to the electrostatic state of the surface. There is one and only one laboratory measurement of photoelectron emission from astronomically relevant material, and its energy distribution function was measured only for emission angles from the surface normal of 0 to about π/4. Therefore, the measured distribution is not directly usable to estimate the vertical structure of a photoelectric sheath above the surface. In this study, we develop a new analytical method to calculate an angle-resolved velocity distribution function of photoelectrons from the laboratory measurement data. We find that the photoelectric current and yield for lunar surface fines measured in a laboratory have been underestimated by a factor of two. We apply our new energy distribution function of photoelectrons to model the formation of a photoelectric sheath above the surface of asteroid 433 Eros. Our model shows that a 0.1 μm-radius dust grain can librate above the surface of asteroid 433 Eros regardless of its launching velocity. In addition, a 0.5 μm grain can hover over the surface if the grain was launched at a velocity slower than 0.4 m/s, which is a more stringent condition for levitation than in previous studies. However, a lack of high-energy data on the photoelectron energy distribution above 6 eV prevents us from firmly placing a constraint on the levitation condition.

  6. Estimating Age Distributions of Base Flow in Watersheds Underlain by Single and Dual Porosity Formations Using Groundwater Transport Simulation and Weighted Weibull Functions

    NASA Astrophysics Data System (ADS)

    Sanford, W. E.

    2015-12-01

    Age distributions of base flow to streams are important to estimate for predicting the timing of water-quality responses to changes in distributed inputs of nutrients or pollutants at the land surface. Simple models of shallow aquifers will predict exponential age distributions, but more realistic 3-D stream-aquifer geometries will cause deviations from an exponential curve. In addition, in fractured rock terrains the dual nature of the effective and total porosity of the system complicates the age distribution further. In this study shallow groundwater flow and advective transport were simulated in two regions in the Eastern United States—the Delmarva Peninsula and the upper Potomac River basin. The former is underlain by layers of unconsolidated sediment, while the latter consists of folded and fractured sedimentary rocks. Transport of groundwater to streams was simulated using the USGS code MODPATH within 175 and 275 watersheds, respectively. For the fractured rock terrain, calculations were also performed along flow pathlines to account for exchange between mobile and immobile flow zones. Porosities at both sites were calibrated using environmental tracer data (3H, 3He, CFCs and SF6) in wells and springs, and with a 30-year tritium record from the Potomac River. Carbonate and siliciclastic rocks were calibrated to have mobile porosity values of one and six percent, and immobile porosity values of 18 and 12 percent, respectively. The age distributions were fitted to Weibull functions. Whereas an exponential function has one parameter that controls the median age of the distribution, a Weibull function has an extra parameter that controls the slope of the curve. A weighted Weibull function was also developed that potentially allows for four parameters, two that control the median age and two that control the slope, one of each weighted toward early or late arrival times. 
For both systems the two-parameter Weibull function nearly always produced a substantially better fit to the data than the one-parameter exponential function. For the single porosity system it was found that the use of three parameters was often optimal for accurately describing the base-flow age distribution, whereas for the dual porosity system the fourth parameter was often required to fit the more complicated response curves.
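    The fitting comparison can be sketched by pitting a one-parameter exponential against a two-parameter Weibull on a synthetic age sample. The parameter values are illustrative, and the weighted three- and four-parameter Weibull variants developed in the study are omitted:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Synthetic base-flow age sample (years) drawn from a Weibull whose shape
# parameter differs from 1, i.e. a shape an exponential cannot reproduce.
ages = stats.weibull_min.rvs(c=0.6, scale=20.0, size=5000, random_state=rng)

c, loc, scale = stats.weibull_min.fit(ages, floc=0)   # two-parameter Weibull
ll_weibull = stats.weibull_min.logpdf(ages, c, loc, scale).sum()

loc_e, scale_e = stats.expon.fit(ages, floc=0)        # one-parameter exponential
ll_expon = stats.expon.logpdf(ages, loc_e, scale_e).sum()

print(round(c, 2), ll_weibull > ll_expon)
```

    The extra Weibull shape parameter controls the slope of the age distribution, which is exactly the freedom the exponential model lacks.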

  7. Distributed Constrained Optimization with Semicoordinate Transformations

    NASA Technical Reports Server (NTRS)

    Macready, William; Wolpert, David

    2006-01-01

    Recent work has shown how information theory extends conventional full-rationality game theory to allow for bounded rational agents. The associated mathematical framework can be used to solve constrained optimization problems. This is done by translating the problem into an iterated game, where each agent controls a different variable of the problem, so that the joint probability distribution across the agents' moves gives an expected value of the objective function. The dynamics of the agents is designed to minimize a Lagrangian function of that joint distribution. Here we illustrate how the updating of the Lagrange parameters in the Lagrangian is a form of automated annealing, which focuses the joint distribution more and more tightly about the joint moves that optimize the objective function. We then investigate the use of "semicoordinate" variable transformations. These separate the joint state of the agents from the variables of the optimization problem, with the two connected by an onto mapping. We present experiments illustrating the ability of such transformations to facilitate optimization. We focus on the special kind of transformation in which the statistically independent states of the agents induce a mixture distribution over the optimization variables. Computer experiments illustrate this for k-sat constraint satisfaction problems and for unconstrained minimization of NK functions.

  8. Vacuum quantum stress tensor fluctuations: A diagonalization approach

    NASA Astrophysics Data System (ADS)

    Schiappacasse, Enrico D.; Fewster, Christopher J.; Ford, L. H.

    2018-01-01

    Large vacuum fluctuations of a quantum stress tensor can be described by the asymptotic behavior of its probability distribution. Here we focus on stress tensor operators which have been averaged with a sampling function in time. The Minkowski vacuum state is not an eigenstate of the time-averaged operator, but can be expanded in terms of its eigenstates. We calculate the probability distribution and the cumulative probability distribution for obtaining a given value in a measurement of the time-averaged operator taken in the vacuum state. In these calculations, we study a specific operator that contributes to the stress-energy tensor of a massless scalar field in Minkowski spacetime, namely, the normal ordered square of the time derivative of the field. We analyze the rate of decrease of the tail of the probability distribution for different temporal sampling functions, such as compactly supported functions and the Lorentzian function. We find that the tails decrease relatively slowly, as exponentials of fractional powers, in agreement with previous work using the moments of the distribution. Our results lend additional support to the conclusion that large vacuum stress tensor fluctuations are more probable than large thermal fluctuations, and may have observable effects.

  9. Diameter distribution in a Brazilian tropical dry forest domain: predictions for the stand and species.

    PubMed

    Lima, Robson B DE; Bufalino, Lina; Alves, Francisco T; Silva, José A A DA; Ferreira, Rinaldo L C

    2017-01-01

    Currently, there is a lack of studies on the correct utilization of continuous distributions for dry tropical forests. Therefore, this work aims to investigate the diameter structure of a Brazilian tropical dry forest and to select suitable continuous distributions by means of statistical tools for the stand and the main species. Two subsets were randomly selected from 40 plots. Diameter at base height was obtained. The following functions were tested: log-normal, gamma, Weibull 2P, and Burr. The best fits were selected by Akaike's information criterion. Overall, the diameter distribution of the dry tropical forest was better described by negative exponential curves and positive skewness. The forest studied showed diameter distributions with decreasing probability for larger trees. This behavior was observed for both the main species and the stand. The generalization of the function fitted for the main species shows that the development of individual models is needed. The Burr function showed good flexibility in describing the diameter structure of the stand and the behavior of the Mimosa ophthalmocentra and Bauhinia cheilantha species. For Poincianella bracteosa, Aspidosperma pyrifolium and Myracrodum urundeuva, better fitting was obtained with the log-normal function.
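    The selection procedure can be sketched with maximum likelihood fits and AIC on a synthetic diameter sample. The Burr candidate is omitted here for brevity, and the sample is generated from a lognormal by construction, so the resulting ranking only demonstrates the mechanics, not the paper's findings:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Synthetic diameter sample (cm), lognormal by construction.
d = stats.lognorm.rvs(s=0.5, scale=8.0, size=2000, random_state=rng)

candidates = {
    "lognorm": stats.lognorm,
    "gamma": stats.gamma,
    "weibull": stats.weibull_min,
}

aic = {}
for name, dist in candidates.items():
    params = dist.fit(d, floc=0)          # fix location at zero for diameters
    ll = dist.logpdf(d, *params).sum()
    aic[name] = 2 * len(params) - 2 * ll  # same parameter count across models

best = min(aic, key=aic.get)
print(best)
```

    With a real stand table, the same loop (plus a Burr candidate) reproduces the AIC-based selection used in the study.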

  10. Measurements of neutral and ion velocity distribution functions in a Hall thruster

    NASA Astrophysics Data System (ADS)

    Svarnas, Panagiotis; Romadanov, Iavn; Diallo, Ahmed; Raitses, Yevgeny

    2015-11-01

    A Hall thruster is a plasma device for space propulsion. It utilizes a cross-field discharge to generate a partially ionized, weakly collisional plasma with magnetized electrons and non-magnetized ions. The ions are accelerated by the electric field to produce the thrust. There is a relatively large number of studies devoted to the characterization of accelerated ions, including measurements of the ion velocity distribution function using laser-induced fluorescence diagnostics. The interaction of these accelerated ions with neutral atoms in the thruster and the thruster plume is a subject of ongoing studies, which require combined monitoring of ion and neutral velocity distributions. Herein, the laser-induced fluorescence technique has been employed to study the velocity distribution functions of neutrals and singly charged ions in a 200 W cylindrical Hall thruster operating with xenon propellant. An optical system is installed in the vacuum chamber enabling spatially resolved axial velocity measurements. The fluorescence signals are well separated from the plasma background emission by modulating the laser beam and using lock-in detectors. Measured velocity distribution functions of neutral atoms and ions at different operating parameters of the thruster are reported and analyzed. This work was supported by DOE contract DE-AC02-09CH11466.

  11. Lindley frailty model for a class of compound Poisson processes

    NASA Astrophysics Data System (ADS)

    Kadilar, Gamze Özel; Ata, Nihal

    2013-10-01

    The Lindley distribution has gained importance in survival analysis for its similarity to the exponential distribution and its allowance for different shapes of the hazard function. Frailty models provide an alternative to the proportional hazards model, where misspecified or omitted covariates are described by an unobservable random variable. Although the frailty distribution is generally assumed to be continuous, it is appropriate to consider discrete frailty distributions in some circumstances. In this paper, frailty models with a discrete compound Poisson process for Lindley-distributed failure times are introduced. Survival functions are derived and maximum likelihood estimation procedures for the parameters are studied. Then, the fit of the models to an earthquake data set from Turkey is examined.
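    The Lindley distribution itself is easy to simulate via its well-known mixture representation: an Exp(θ) component with probability θ/(1+θ) and a Gamma(2, rate θ) component otherwise. A minimal sketch (independent of the paper's frailty construction), checking the sample mean against the closed-form mean (θ+2)/(θ(θ+1)):

```python
import numpy as np

rng = np.random.default_rng(11)

def lindley_rvs(theta, size):
    """Sample Lindley(theta) via its mixture representation:
    Exp(theta) with prob theta/(1+theta), else Gamma(2, rate=theta)."""
    p = theta / (1.0 + theta)
    exp_part = rng.exponential(1.0 / theta, size)
    gamma_part = rng.gamma(2.0, 1.0 / theta, size)
    pick = rng.random(size) < p
    return np.where(pick, exp_part, gamma_part)

theta = 1.5
x = lindley_rvs(theta, 200_000)
mean_exact = (theta + 2.0) / (theta * (theta + 1.0))
print(round(x.mean(), 3), round(mean_exact, 3))
```

    The gamma component is what lets the Lindley hazard depart from the constant hazard of the pure exponential.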

  12. Unified halo-independent formalism from convex hulls for direct dark matter searches

    NASA Astrophysics Data System (ADS)

    Gelmini, Graciela B.; Huh, Ji-Haeng; Witte, Samuel J.

    2017-12-01

    Using the Fenchel-Eggleston theorem for convex hulls (an extension of the Caratheodory theorem), we prove that any likelihood can be maximized by either (1) a dark matter speed distribution F(v) in Earth's frame or (2) a Galactic velocity distribution f_gal(u), consisting of a sum of delta functions. The former case applies only to time-averaged rate measurements, and the maximum number of delta functions is (N − 1), where N is the total number of data entries. The second case applies to any harmonic expansion coefficient of the time-dependent rate, and the maximum number of terms is N. Using time-averaged rates, the aforementioned form of F(v) results in a piecewise constant unmodulated halo function η̃⁰_BF(v_min) (an integral of the speed distribution) with at most (N − 1) downward steps. We had previously proven this result for likelihoods comprised of at least one extended likelihood, and found the best-fit halo function to be unique. This uniqueness, however, cannot be guaranteed in the more general analysis applied to arbitrary likelihoods. Thus we introduce a method for determining whether there exists a unique best-fit halo function, and provide a procedure for constructing either a pointwise confidence band, if the best-fit halo function is unique, or a degeneracy band, if it is not. Using measurements of modulation amplitudes, the aforementioned form of f_gal(u), which is a sum of Galactic streams, yields a periodic time-dependent halo function η̃_BF(v_min, t) which at any fixed time is a piecewise constant function of v_min with at most N downward steps. In this case, we explain how to construct pointwise confidence and degeneracy bands from the time-averaged halo function. Finally, we show that requiring an isotropic Galactic velocity distribution leads to a Galactic speed distribution F(u) that is once again a sum of delta functions, and produces a time-dependent η̃_BF(v_min, t) function (and a time-averaged η̃⁰_BF(v_min)) that is piecewise linear, differing significantly from best-fit halo functions obtained without the assumption of isotropy.
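    The piecewise-constant structure of the best-fit halo function can be illustrated with a small sketch (arbitrary units; the stream speeds and weights below are hypothetical, not fitted values): for a speed distribution F(v) = Σₐ cₐ δ(v − vₐ), the halo function η(v_min) = ∫_{v_min}^∞ F(v)/v dv steps down at each stream speed:

```python
import numpy as np

def eta_from_streams(vmin, speeds, weights):
    # For F(v) = sum_a c_a * delta(v - v_a), the halo function
    # eta(vmin) = integral_{vmin}^inf F(v) / v dv picks up a contribution
    # c_a / v_a from every stream faster than vmin, so it is piecewise
    # constant with one downward step per stream.
    vmin = np.asarray(vmin, dtype=float)
    eta = np.zeros_like(vmin)
    for v_a, c_a in zip(speeds, weights):
        eta += np.where(vmin < v_a, c_a / v_a, 0.0)
    return eta

speeds = [220.0, 340.0, 500.0]    # hypothetical stream speeds (km/s)
weights = [0.5, 0.3, 0.2]         # normalized stream weights
grid = np.linspace(0.0, 600.0, 601)
eta = eta_from_streams(grid, speeds, weights)
```

    With three streams the resulting η(v_min) takes exactly four values: one plateau per interval between stream speeds, and zero above the fastest stream.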

  13. Spatiotemporal reconstruction of list-mode PET data.

    PubMed

    Nichols, Thomas E; Qi, Jinyi; Asma, Evren; Leahy, Richard M

    2002-04-01

    We describe a method for computing a continuous time estimate of tracer density using list-mode positron emission tomography data. The rate function in each voxel is modeled as an inhomogeneous Poisson process whose rate function can be represented using a cubic B-spline basis. The rate functions are estimated by maximizing the likelihood of the arrival times of detected photon pairs over the control vertices of the spline, modified by quadratic spatial and temporal smoothness penalties and a penalty term to enforce nonnegativity. Randoms rate functions are estimated by assuming independence between the spatial and temporal randoms distributions. Similarly, scatter rate functions are estimated by assuming spatiotemporal independence and that the temporal distribution of the scatter is proportional to the temporal distribution of the trues. A quantitative evaluation was performed using simulated data and the method is also demonstrated in a human study using 11C-raclopride.
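    The core of the model can be sketched as follows (a minimal illustration with hypothetical knots and control vertices, not the paper's penalized estimator): the voxel rate λ(t) is a cubic B-spline, and the inhomogeneous-Poisson log-likelihood of the arrival times is Σᵢ log λ(tᵢ) − ∫₀ᵀ λ(t) dt:

```python
import numpy as np
from scipy.interpolate import BSpline

T = 60.0                                   # scan duration (arbitrary units)
k = 3                                      # cubic B-splines, as in the paper
interior = np.linspace(0.0, T, 6)[1:-1]    # hypothetical interior knots
knots = np.concatenate(([0.0] * (k + 1), interior, [T] * (k + 1)))
coef = np.array([5.0, 8.0, 12.0, 9.0, 6.0, 4.0, 3.0, 2.0])  # control vertices
rate = BSpline(knots, coef, k)             # nonnegative rate lambda(t)

def log_likelihood(arrival_times):
    # Inhomogeneous-Poisson log-likelihood of the observed arrivals:
    #   sum_i log lambda(t_i) - integral_0^T lambda(t) dt
    return np.sum(np.log(rate(arrival_times))) - float(rate.integrate(0.0, T))
```

    The full method maximizes this over the control vertices of every voxel, with spatial/temporal smoothness and nonnegativity penalties added to the objective.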

  14. The force distribution probability function for simple fluids by density functional theory.

    PubMed

    Rickayzen, G; Heyes, D M

    2013-02-28

    Classical density functional theory (DFT) is used to derive a formula for the probability density distribution function, P(F), and probability distribution function, W(F), for simple fluids, where F is the net force on a particle. The final formula is P(F) ∝ exp(−AF²), where A depends on the fluid density, the temperature, and the Fourier transform of the pair potential. The form of the DFT theory used is only applicable to bounded-potential fluids. When combined with the hypernetted chain closure of the Ornstein-Zernike equation, the DFT theory for W(F) agrees with molecular dynamics computer simulations for the Gaussian and bounded soft-sphere fluids at high density. The Gaussian form for P(F) is still accurate at lower (but not too low) densities for the two potentials, although with a smaller value for the constant A than that predicted by the DFT theory.
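    The Gaussian form can be checked numerically: normalizing P(F) ∝ exp(−AF²) over three-dimensional force space gives P(F) = (A/π)^{3/2} exp(−AF²), hence W(F) = 4πF² P(F) for the force magnitude, with mean-square force ⟨F²⟩ = 3/(2A). A minimal sketch in arbitrary units:

```python
import numpy as np

A = 2.0  # illustrative value; in the paper A follows from density, T, and the pair potential

def P(F):
    # Normalized 3D Gaussian force density: integral of P over d^3F equals 1
    return (A / np.pi) ** 1.5 * np.exp(-A * F**2)

def W(F):
    # Distribution of the force magnitude: W(F) = 4 pi F^2 P(F)
    return 4.0 * np.pi * F**2 * P(F)

# Midpoint-rule checks of the normalization and of <F^2> = 3 / (2A)
dF = 1e-4
F = (np.arange(100000) + 0.5) * dF        # grid on [0, 10]
norm = np.sum(W(F)) * dF
mean_sq = np.sum(F**2 * W(F)) * dF
```
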

  15. Effects of the gap slope on the distribution of removal rate in Belt-MRF.

    PubMed

    Wang, Dekang; Hu, Haixiang; Li, Longxiang; Bai, Yang; Luo, Xiao; Xue, Donglin; Zhang, Xuejun

    2017-10-30

    Belt magnetorheological finishing (Belt-MRF) is a promising tool for large-optics processing. However, before a polishing spot is used, its shape should be designed and controlled via the polishing gap. Previous research revealed a remarkably nonlinear relationship between the removal function and the normal pressure distribution. The pressure is nonlinearly related to the gap geometry, precluding prediction of the removal function from the polishing gap alone. Here, we used the concepts of gap slope and virtual ribbon to develop a model of removal profiles in Belt-MRF. Between the belt and the workpiece in the main polishing area, a gap which changes linearly along the flow direction was created using a flat-bottom magnet box. The pressure distribution and removal function were calculated, and simulations were consistent with experiments. Different removal functions, consistent with theoretical calculations, were obtained by adjusting the gap slope. This approach allows prediction of removal functions in Belt-MRF.

  16. Modelling population distribution using remote sensing imagery and location-based data

    NASA Astrophysics Data System (ADS)

    Song, J.; Prishchepov, A. V.

    2017-12-01

    A detailed spatial distribution of population density is essential for urban studies such as planning, environmental pollution assessment, and city emergency response, and even for estimating pressure on the environment and human exposure and health risks. However, most studies have relied on census data, as detailed dynamic population distributions are difficult to acquire, especially in microscale research. This research describes a method using remote sensing imagery and location-based data to model population distribution at the functional-zone level. Firstly, urban functional zones within a city were mapped from high-resolution remote sensing images and POIs. The workflow of functional zone extraction includes five parts: (1) urban land use classification; (2) segmenting images in the built-up area; (3) identification of functional segments by POIs; (4) identification of functional blocks by functional segmentation and weight coefficients; (5) assessing accuracy using validation points. The result is shown in Fig. 1. Secondly, we applied ordinary least squares (OLS) and geographically weighted regression (GWR) to assess the spatially nonstationary relationship between light digital number (DN) and population density at sampling points. The two methods were employed to predict the population distribution over the research area. The R² of the GWR model was on the order of 0.7, and the model captured significant variation over the region that the traditional OLS model missed (Fig. 2). Validation with sampling points of population density demonstrated that the result predicted by the GWR model correlated well with light value (Fig. 3). Results showed: (1) population density is not linearly correlated with light brightness in a global model; (2) VIIRS night-time light data can estimate population density when integrated with functional zones at the city level; (3) GWR is a robust model for mapping population distribution: the adjusted R² of the corresponding GWR models was higher than that of the optimal OLS models, confirming better prediction accuracy. This method thus provides detailed population density information for microscale citizen studies.
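    The contrast between a global OLS fit and GWR can be sketched on synthetic data (all names and parameters below are illustrative, not the study's pipeline): GWR solves a weighted least-squares problem at each target location with a Gaussian distance kernel, recovering a locally varying slope that a single global OLS model averages away:

```python
import numpy as np

# Synthetic illustration: the "true" slope linking predictor x (think
# night-light DN) to y (think population density) varies with location.
rng = np.random.default_rng(0)
n = 500
coords = rng.uniform(0.0, 1.0, size=(n, 2))   # hypothetical sample locations
x = rng.normal(size=n)
y = (1.0 + 2.0 * coords[:, 0]) * x            # locally varying slope

X = np.column_stack([np.ones(n), x])

def local_beta(target, bandwidth=0.15):
    # GWR at one target point: weighted least squares with a Gaussian
    # distance kernel, so nearby samples dominate the local fit.
    d2 = np.sum((coords - target) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    Xw = X * w[:, None]
    return np.linalg.solve(X.T @ Xw, Xw.T @ y)

beta_ols = np.linalg.solve(X.T @ X, X.T @ y)  # one global slope, near 2
beta_gwr = local_beta(np.array([0.0, 0.0]))   # local slope near the origin, closer to 1
```

    Refitting `local_beta` over a grid of targets yields the spatially varying coefficient surface that GWR studies map.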

  17. POLYANA-A tool for the calculation of molecular radial distribution functions based on Molecular Dynamics trajectories

    NASA Astrophysics Data System (ADS)

    Dimitroulis, Christos; Raptis, Theophanes; Raptis, Vasilios

    2015-12-01

    We present an application for the calculation of radial distribution functions for molecular centres of mass, based on trajectories generated by molecular simulation methods (Molecular Dynamics, Monte Carlo). When designing this application, the emphasis was placed on ease of use as well as ease of further development. In its current version, the program can read trajectories generated by the well-known DL_POLY package, but it can be easily extended to handle other formats. It is also very easy to 'hack' the program so it can compute intermolecular radial distribution functions for groups of interaction sites rather than whole molecules.
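    The core computation such a tool performs can be sketched in a few lines (a minimal, unoptimized version; POLYANA itself additionally parses DL_POLY trajectories and reduces each molecule to its centre of mass first):

```python
import numpy as np

def rdf(points, box, nbins=25, rmax=None):
    # g(r) for point centres in a cubic periodic box of side `box`,
    # using the minimum-image convention and ideal-gas normalization.
    n = len(points)
    rmax = box / 2.0 if rmax is None else rmax
    edges = np.linspace(0.0, rmax, nbins + 1)
    counts = np.zeros(nbins)
    for i in range(n - 1):
        d = points[i + 1:] - points[i]
        d -= box * np.round(d / box)          # minimum image
        r = np.sqrt((d * d).sum(axis=1))
        counts += np.histogram(r, bins=edges)[0]
    rho = n / box**3
    shells = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    ideal = 0.5 * n * rho * shells            # pairs expected for an ideal gas
    r_mid = 0.5 * (edges[1:] + edges[:-1])
    return r_mid, counts / ideal
```

    A standard sanity check: for uniformly random centres (an ideal gas), g(r) ≈ 1 at all separations.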

  18. The Reliability Estimation for the Open Function of Cabin Door Affected by the Imprecise Judgment Corresponding to Distribution Hypothesis

    NASA Astrophysics Data System (ADS)

    Yu, Z. P.; Yue, Z. F.; Liu, W.

    2018-05-01

    With the development of artificial intelligence, more and more reliability experts have noticed the role of subjective information in the reliability design of complex systems. Therefore, based on a limited number of experimental data and expert judgments, we divide reliability estimation under a distribution hypothesis into a cognition process and a reliability calculation. As an illustration of this modification, we take information fusion based on intuitionistic fuzzy belief functions as the diagnosis model of the cognition process, and carry out the reliability estimation for the open function of a cabin door affected by imprecise judgment corresponding to the distribution hypothesis.

  19. A descriptive model of resting-state networks using Markov chains.

    PubMed

    Xie, H; Pal, R; Mitra, S

    2016-08-01

    Resting-state functional connectivity (RSFC) studies considering pairwise linear correlations have attracted great interest, while the underlying functional network structure still remains poorly understood. To further our understanding of RSFC, this paper presents an analysis of resting-state networks (RSNs) based on steady-state distributions and provides a novel angle for investigating the RSFC of multiple functional nodes. The paper evaluates the consistency of two networks via the Hellinger distance between the steady-state distributions of the inferred Markov chain models. The results show that the steady-state distributions of the default mode network have higher consistency across subjects than random nodes drawn from various RSNs.
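    The two ingredients of this comparison, the steady-state distribution of an inferred chain and the Hellinger distance between two such distributions, can be sketched as follows (a generic illustration, not the paper's fitted chains):

```python
import numpy as np

def stationary(P):
    # Left eigenvector of the row-stochastic matrix P for eigenvalue 1,
    # normalized to a probability vector: pi P = pi.
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return v / v.sum()

def hellinger(p, q):
    # H(p, q) = sqrt(0.5 * sum_i (sqrt(p_i) - sqrt(q_i))^2), bounded in [0, 1]
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))
```

    For a two-state chain with transition matrix [[0.9, 0.1], [0.2, 0.8]], the stationary distribution is (2/3, 1/3); comparing two networks then reduces to a single Hellinger distance between their stationary vectors.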

  20. Space Station environmental control and life support system distribution and loop closure studies

    NASA Technical Reports Server (NTRS)

    Humphries, William R.; Reuter, James L.; Schunk, Richard G.

    1986-01-01

    The NASA Space Station's environmental control and life support system (ECLSS) encompasses functional elements concerned with temperature and humidity control, atmosphere control and supply, atmosphere revitalization, fire detection and suppression, water recovery and management, waste management, and EVA support. Attention is presently given to functional and physical module distributions of the ECLSS among these elements, with a view to resource requirements and safety implications. A strategy of physical distribution coupled with functional centralization is proposed for the air revitalization and water reclamation systems. Also discussed is the degree of loop closure desirable in the oxygen and water reclamation loops of the initial-operational-capability Space Station.

  1. Grassmann phase space theory and the Jaynes–Cummings model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dalton, B.J., E-mail: bdalton@swin.edu.au; Centre for Atom Optics and Ultrafast Spectroscopy, Swinburne University of Technology, Melbourne, Victoria 3122; Garraway, B.M.

    2013-07-15

    The Jaynes–Cummings model of a two-level atom in a single mode cavity is of fundamental importance both in quantum optics and in quantum physics generally, involving the interaction of two simple quantum systems: one fermionic (the two-level atom, TLA), the other bosonic (the cavity mode). Depending on the initial conditions a variety of interesting effects occur, ranging from ongoing oscillations of the atomic population difference at the Rabi frequency when the atom is excited and the cavity is in an n-photon Fock state, to collapses and revivals of these oscillations starting with the atom unexcited and the cavity mode in a coherent state. The observation of revivals for Rydberg atoms in a high-Q microwave cavity is key experimental evidence for quantisation of the EM field. Theoretical treatments of the Jaynes–Cummings model based on expanding the state vector in terms of products of atomic and n-photon states and deriving coupled equations for the amplitudes are a well-known and simple method for determining the effects. In quantum optics, however, the behaviour of the bosonic quantum EM field is often treated using phase space methods, where the bosonic mode annihilation and creation operators are represented by c-number phase space variables, with the density operator represented by a distribution function of these variables. Fokker–Planck equations for the distribution function are obtained, and either used directly to determine quantities of experimental interest or used to develop c-number Langevin equations for stochastic versions of the phase space variables, from which experimental quantities are obtained as stochastic averages.
Phase space methods have also been developed to include atomic systems, with the atomic spin operators being represented by c-number phase space variables, and distribution functions involving these variables and those for any bosonic modes being shown to satisfy Fokker–Planck equations from which c-number Langevin equations are often developed. However, atomic spin operators satisfy the standard angular momentum commutation rules rather than the commutation rules for bosonic annihilation and creation operators, and are in fact second order combinations of fermionic annihilation and creation operators. Though phase space methods in which the fermionic operators are represented directly by c-number phase space variables have not been successful, the anti-commutation rules for these operators suggest the possibility of using Grassmann variables—which have similar anti-commutation properties. However, in spite of the seminal work by Cahill and Glauber and a few applications, the use of phase space methods in quantum optics to treat fermionic systems by representing fermionic annihilation and creation operators directly by Grassmann phase space variables is rather rare. This paper shows that phase space methods using a positive P type distribution function involving both c-number variables (for the cavity mode) and Grassmann variables (for the TLA) can be used to treat the Jaynes–Cummings model. Although it is a Grassmann function, the distribution function is equivalent to six c-number functions of the two bosonic variables. Experimental quantities are given as bosonic phase space integrals involving the six functions. A Fokker–Planck equation involving both left and right Grassmann differentiations can be obtained for the distribution function, and is equivalent to six coupled equations for the six c-number functions. 
    The approach used involves choosing the canonical form of the (non-unique) positive P distribution function, in which the correspondence rules for the bosonic operators are non-standard and hence the Fokker–Planck equation is also unusual. Initial conditions, such as those above for initially uncorrelated states, are discussed and used to determine the initial distribution function. Transformations to new bosonic variables rotating at the cavity frequency enable the six coupled equations for the new c-number functions (which are also equivalent to the canonical Grassmann distribution function) to be solved analytically, based on an ansatz from an earlier paper by Stenholm. It is then shown that the distribution function is exactly the same as that determined from the well-known solution based on coupled amplitude equations. In quantum–atom optics, theories for many-atom bosonic and fermionic systems are needed. With large atom numbers, treatments must often take into account many quantum modes, especially for fermions. Generalisations of phase space distribution functions of phase space variables for a few modes to phase space distribution functionals of field functions (which represent the field operators: c-number fields for bosons, Grassmann fields for fermions) are now being developed for large systems. For the fermionic case, the treatment of the simple two mode problem represented by the Jaynes–Cummings model is a useful test case for the future development of phase space Grassmann distribution functional methods for fermionic applications in quantum–atom optics. Highlights: • Novel phase space theory of the Jaynes–Cummings model using Grassmann variables. • Fokker–Planck equations solved analytically. • Results agree with the standard quantum optics treatment. • Grassmann phase space theory applicable to fermion many-body problems.

  2. A class of Fourier integrals based on the electric potential of an elongated dipole.

    PubMed

    Skianis, Georgios Aim

    2014-01-01

    In the present paper the closed expressions of a class of non-tabulated Fourier integrals are derived. These integrals are associated with a group of functions in the space domain, which represent the electric potential of a distribution of elongated dipoles perpendicular to a flat surface. It is shown that the Fourier integrals are produced by the Fourier transform of the Green's function of the potential of the dipole distribution, times a definite integral in which the distribution of the polarization is involved. The form of this distribution therefore controls the expression of the Fourier integral. Introducing various dipole distributions, the respective Fourier integrals are derived. These integrals may be useful in the quantitative interpretation, in the spatial frequency domain, of electric potential anomalies produced by elongated dipole distributions.

  3. Effect of quantum dispersion on the radial distribution function of a one-component sticky-hard-sphere fluid

    NASA Astrophysics Data System (ADS)

    Fantoni, Riccardo

    2018-04-01

    In this short communication we present a possible scheme to study the radial distribution function of the quantum slightly polydisperse Baxter sticky-hard-sphere liquid at finite temperature through a semi-analytical method devised by Chandler and Wolynes.

  4. Estimation of Reliability Coefficients Using the Test Information Function and Its Modifications.

    ERIC Educational Resources Information Center

    Samejima, Fumiko

    1994-01-01

    The reliability coefficient is predicted from the test information function (TIF) or two modified TIF formulas and a specific trait distribution. Examples illustrate the variability of the reliability coefficient across different trait distributions, and results are compared with empirical reliability coefficients. (SLD)

  5. Final state interactions and inclusive nuclear collisions

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.; Dubey, Rajendra R.

    1993-01-01

    A scattering formalism is developed in a multiple scattering model to describe inclusive momentum distributions for high-energy projectiles. The effects of final state interactions on response functions and momentum distributions are investigated. Calculations for high-energy protons that include shell model response functions are compared with experiments.

  6. BINARY CORRELATIONS IN IONIZED GASES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balescu, R.; Taylor, H.S.

    1961-01-01

    An equation of evolution for the binary distribution function in a classical homogeneous, nonequilibrium plasma was derived. It is shown that the asymptotic (long-time) solution of this equation is the Debye distribution, thus providing a rigorous dynamical derivation of the equilibrium distribution. This proof is free from the fundamental conceptual difficulties of conventional equilibrium derivations. Out of equilibrium, a closed formula was obtained for the long living correlations, in terms of the momentum distribution function. These results should form an appropriate starting point for a rigorous theory of transport phenomena in plasmas, including the effect of molecular correlations. (auth)

  7. Investigation of pore size and energy distributions by statistical physics formalism applied to agriculture products

    NASA Astrophysics Data System (ADS)

    Aouaini, Fatma; Knani, Salah; Yahia, Manel Ben; Bahloul, Neila; Ben Lamine, Abdelmottaleb; Kechaou, Nabil

    2015-12-01

    In this paper, we present a new investigation that allows determining the pore size distribution (PSD) in a porous medium. This PSD is achieved by using the desorption isotherms of four varieties of olive leaves. This is by the means of statistical physics formalism and Kelvin's law. The results are compared with those obtained with scanning electron microscopy. The effect of temperature on the distribution function of pores has been studied. The influence of each parameter on the PSD is interpreted. A similar function of adsorption energy distribution, AED, is deduced from the PSD.
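    Kelvin's law maps each relative pressure on the desorption branch to a pore radius; a minimal sketch for water at room temperature (the constants are standard water properties, not the paper's fitted parameters):

```python
import numpy as np

GAMMA = 0.072     # N/m, surface tension of water near 25 C
V_M = 1.8e-5      # m^3/mol, molar volume of liquid water
R = 8.314         # J/(mol K), gas constant

def kelvin_radius(p_rel, T=298.15):
    # Kelvin equation: ln(P/P0) = -2 gamma V_m / (r R T)
    # => r = -2 gamma V_m / (R T ln(P/P0)), valid for P/P0 < 1
    p_rel = np.asarray(p_rel, dtype=float)
    return -2.0 * GAMMA * V_M / (R * T * np.log(p_rel))
```

    Lower relative pressures empty smaller pores, so a measured desorption isotherm w(P/P0) converts directly into a pore-size axis for the PSD.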

  8. Bivariate sub-Gaussian model for stock index returns

    NASA Astrophysics Data System (ADS)

    Jabłońska-Sabuka, Matylda; Teuerle, Marek; Wyłomańska, Agnieszka

    2017-11-01

    Financial time series are commonly modeled with methods assuming data normality. However, the real distribution can be nontrivial, possibly lacking an explicitly formulated probability density function. In this work we introduce novel parameter estimation and high-powered distribution testing methods which do not rely on closed-form densities, but instead use characteristic functions for comparison. The approach, applied to a pair of stock index returns, demonstrates that such a bivariate vector can be a sample coming from a bivariate sub-Gaussian distribution. The methods presented here can be applied to any nontrivially distributed financial data.
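    The characteristic-function idea can be sketched for the simplest case (a hypothetical one-dimensional Gaussian scale fit; the paper's bivariate sub-Gaussian estimators are more involved): compare the empirical characteristic function with the model characteristic function on a frequency grid and pick the best-matching parameter, with no density ever evaluated:

```python
import numpy as np

def ecf(data, t):
    # Empirical characteristic function: sample mean of exp(i * t * X)
    return np.exp(1j * np.outer(t, data)).mean(axis=1)

def fit_gaussian_scale(data):
    # Grid search for sigma minimizing the L2 distance between the ECF
    # and the Gaussian model CF exp(-sigma^2 t^2 / 2).
    t = np.linspace(0.1, 1.0, 10)
    phi = ecf(data, t)
    sigmas = np.linspace(0.1, 5.0, 491)
    errs = [np.sum(np.abs(phi - np.exp(-0.5 * s**2 * t**2)) ** 2)
            for s in sigmas]
    return sigmas[int(np.argmin(errs))]
```

    The same minimum-distance scheme extends to any model whose characteristic function is known in closed form, which is exactly the situation for stable and sub-Gaussian laws.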

  9. Angular distribution of photoelectrons from atomic oxygen, nitrogen and carbon. [in upper atmosphere

    NASA Technical Reports Server (NTRS)

    Manson, S. J.; Kennedy, D. J.; Starace, A. F.; Dill, D.

    1974-01-01

    The angular distributions of photoelectrons from atomic oxygen, nitrogen, and carbon are calculated. Both Hartree-Fock and Hartree-Slater (Herman-Skillman) wave functions are used for oxygen, and the agreement is excellent; thus only Hartree-Slater functions are used for carbon and nitrogen. The pitch-angle distribution of photoelectrons is discussed, and it is shown that previous approximations of energy-independent isotropic or sin squared theta distributions are at odds with the authors' results, which vary with energy. This variation with energy is discussed, as is the reliability of these calculations.

  10. Normal families and value distribution in connection with composite functions

    NASA Astrophysics Data System (ADS)

    Clifford, E. F.

    2005-12-01

    We prove a value distribution result which has several interesting corollaries. Let , let , and let f be a transcendental entire function with order less than 1/2. Then for every nonconstant entire function g, we have that (f∘g)^(k) − α has infinitely many zeros. This result also holds when k = 1, for every transcendental entire function g. We also prove the following result for normal families. Let , let f be a transcendental entire function with ρ(f) < 1/k, and let a₀, ..., a_{k−1}, a be analytic functions in a domain Ω. Then the family of analytic functions g such that  in Ω, is a normal family.

  11. Methylmercury Formation in Marine and Freshwater Systems: Sediment Characteristics, Microbial Activity and SRB Phylogeny Control Formation Rates and Food-Chain Exposure

    NASA Astrophysics Data System (ADS)

    King, J. K.; Saunders, F. M.

    2004-05-01

    Mercury research in freshwater and marine systems suggests that sediment characteristics such as organic substrate, mercury speciation, and sulfate/sulfide concentrations influence the availability of inorganic mercury for methylation. Similarly, sediment characteristics also influence sulfate-reducing bacterial (SRB) respiration as well as the presence/distribution of the phylogenetic groups responsible for mercury methylation. Our work illustrates that the processes of methylmercury formation in freshwater and marine systems are not dissimilar. Rather, the same geochemical parameters and SRB phylogenetic groups determine the propensity for methylmercury formation and are applicable in both fresh- and marine-water systems. The presentation will include our integration of sediment geochemical and microbial parameters affecting mercury methylation in specific freshwater and marine systems. Constructed wetlands planted with Schoenoplectus californicus and amended with gypsum (CaSO4) have demonstrated a capacity to remove inorganic mercury from industrial outfalls. However, bioaccumulation studies of periphyton, eastern mosquitofish (Gambusia holbrooki) and lake chubsucker (Erimyzon sucetta) were conducted in order to ascertain the availability of wetland-generated methylmercury to biota. Total mercury concentrations in mosquitofish from non-sulfate-treated controls and the reference location were significantly lower than those from the low and high sulfate treatments, while mean total mercury concentrations in lake chubsuckers were also significantly elevated in the high sulfate treatment compared to the low sulfate, control and reference populations. Methylmercury concentrations in periphyton also corresponded with mercury levels found in the tissue of the lake chubsuckers, and these findings fit well given the trophic levels identified for both species of fish.
Overall, data from this study suggest that the initial use of gypsum to accelerate the maturity of a constructed wetland may not prove beneficial with respect to the ultimate objective of mercury sequestration. Current regulations place strict requirements on dredge material placed in confined disposal facilities (CDF) as well as associated effluent waters. Although regulatory guidelines typically address total mercury concentrations, historical data specific to bioaccumulation of mercury suggest that methylmercury concentrations found in sediments and water require attention. Resource agencies are now interested in knowing the likelihood of methylmercury formation in dredge spoil since birds and fish are frequently found feeding in CDFs and the associated mixing zones. Mechanisms that influence methylmercury formation in sediments dictate that dredging of mercury-containing sediments will result in an increased availability of inorganic mercury for methylation. Prior to dredging, the undisturbed sediment contains inorganic mercury complexed to sulfide in an insoluble, unavailable form. However, hydraulic or clamshell dredging can result in an oxidation of sediments and remobilization of mercury-sulfide species thus increasing its availability for methylation. Once sediments are disposed in a CDF, sulfate-reducing bacteria profiles are re-established vertically in dredge spoil and methylmercury synthesis can readily occur.

  12. Supernova rates from the SUDARE VST-OmegaCAM search. I. Rates per unit volume

    NASA Astrophysics Data System (ADS)

    Cappellaro, E.; Botticella, M. T.; Pignata, G.; Grado, A.; Greggio, L.; Limatola, L.; Vaccari, M.; Baruffolo, A.; Benetti, S.; Bufano, F.; Capaccioli, M.; Cascone, E.; Covone, G.; De Cicco, D.; Falocco, S.; Della Valle, M.; Jarvis, M.; Marchetti, L.; Napolitano, N. R.; Paolillo, M.; Pastorello, A.; Radovich, M.; Schipani, P.; Spiro, S.; Tomasella, L.; Turatto, M.

    2015-12-01

    Aims: We describe the observing strategy, data reduction tools, and early results of a supernova (SN) search project, named SUDARE, conducted with the ESO VST telescope, which is aimed at measuring the rate of the different types of SNe in the redshift range 0.2 < z < 0.8. Methods: The search was performed in two of the best-studied extragalactic fields, CDFS and COSMOS, for which a wealth of ancillary data are available in the literature or in public archives. We developed a pipeline for the data reduction and rapid identification of transients. As a result of the frequent monitoring of the two selected fields, we obtained light curve and colour information for the transient sources, which were used to select and classify SNe by means of a specially developed tool. To accurately characterise the surveyed stellar population, we exploit public data and our own observations to measure the galaxy photometric redshifts and rest-frame colours. Results: We obtained a final sample of 117 SNe, most of which are SN Ia (57%), with the remaining ones being core collapse events, of which 44% are type II, 22% type IIn and 34% type Ib/c. To link the transients, we built a catalogue of ~1.3 × 10⁵ galaxies in the redshift range 0 < z ≤ 1, with a limiting magnitude K_AB = 23.5 mag. We measured the SN rate per unit volume for SN Ia and core collapse SNe in different redshift bins. The values are consistent with other measurements from the literature. Conclusions: The dispersion of the rate measurements for SNe Ia is comparable to the scatter of the theoretical tracks for single degenerate (SD) and double degenerate (DD) binary system models, so it is not possible to disentangle between the two progenitor scenarios.
However, among the three tested models (SD and the two flavours of DD that either have a steep DDC or a wide DDW delay time distribution), the SD appears to give a better fit across the whole redshift range, whereas the DDC better matches the steep rise up to redshift ~1.2. The DDW instead appears to be less favoured. Unlike recent claims, the core collapse SN rate is fully consistent with the prediction that is based on recent estimates of star formation history and standard progenitor mass range. Based on observations made with ESO telescopes at the Paranal Observatory under programme ID 088.D-4006, 088.D-4007, 089.D-0244, 089.D-0248, 090.D-0078, 090.D-0079, 088.D-4013, 089.D-0250, 090.D-0081.Appendix A is available in electronic form at http://www.aanda.org

  13. GOODS-Herschel: dust attenuation properties of UV selected high redshift galaxies

    NASA Astrophysics Data System (ADS)

    Buat, V.; Noll, S.; Burgarella, D.; Giovannoli, E.; Charmandaris, V.; Pannella, M.; Hwang, H. S.; Elbaz, D.; Dickinson, M.; Magdis, G.; Reddy, N.; Murphy, E. J.

    2012-09-01

    Context. Dust attenuation in galaxies is poorly known, especially at high redshift. And yet the amount of dust attenuation is a key parameter to deduce accurate star formation rates from ultraviolet (UV) rest-frame measurements. The wavelength dependence of the dust attenuation is also of fundamental importance to interpret the observed spectral energy distributions (SEDs) and to derive photometric redshifts or physical properties of galaxies. Aims: We want to study dust attenuation at UV wavelengths at high redshift, where the UV is redshifted to the observed visible light wavelength range. In particular, we search for a UV bump and related implications for dust attenuation determinations. Methods: We use photometric data in the Chandra Deep Field South (CDFS), obtained in intermediate and broad band filters by the MUSYC project, to sample the UV rest-frame of 751 galaxies with 0.95 < z < 2.2. When available, infrared (IR) Herschel/PACS data from the GOODS-Herschel project, coupled with Spitzer/MIPS measurements, are used to estimate the dust emission and to constrain dust attenuation. The SED of each source is fit using the CIGALE code. The amount of dust attenuation and the characteristics of the dust attenuation curve are obtained as outputs of the SED fitting process, together with other physical parameters linked to the star formation history. Results: The global amount of dust attenuation at UV wavelengths is found to increase with stellar mass and to decrease as UV luminosity increases. A UV bump at 2175 Å is securely detected in 20% of the galaxies, and the mean amplitude of the bump for the sample is similar to that observed in the extinction curve of the LMC supershell region. This amplitude is found to be lower in galaxies with very high specific star formation rates, and 90% of the galaxies exhibiting a secure bump are at z < 1.5. The attenuation curve is confirmed to be steeper than that of local starburst galaxies for 20% of the galaxies. 
The large dispersion found for these two parameters describing the attenuation law is likely to reflect a wide diversity of attenuation laws among galaxies. The relations between dust attenuation, IR-to-UV flux ratio, and the slope of the UV continuum are derived for the mean attenuation curve found for our sample. Deviations from the average trends are found to correlate with the age of the young stellar population and the shape of the attenuation curve. Table of multi-colour photometry for the 751 galaxies is only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/545/A141

  14. Self Spectrum Window method in Wigner-Ville distribution.

    PubMed

    Liu, Zhongguo; Liu, Changchun; Liu, Boqiang; Lv, Yangsheng; Lei, Yinsheng; Yu, Mengsun

    2005-01-01

    Wigner-Ville distribution (WVD) is an important type of time-frequency analysis in biomedical signal processing. The cross-term interference in WVD has a disadvantageous influence on its application. In this research, the Self Spectrum Window (SSW) method was put forward to suppress the cross-term interference, based on the fact that the cross-term and auto-WVD terms in the integral kernel function are orthogonal. With the Self Spectrum Window (SSW) algorithm, a real auto-WVD function was used as a template to cross-correlate with the integral kernel function, and the Short Time Fourier Transform (STFT) spectrum of the signal was used as a window function to process the WVD in the time-frequency plane. The SSW method was confirmed by computer simulation with good analysis results. A satisfactory time-frequency distribution was obtained.
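    For readers unfamiliar with the cross-term problem, a minimal discrete WVD (an illustrative sketch, not the SSW algorithm itself) makes it visible: for a sum of two tones, a spurious component appears midway between the true frequencies. In this discretization, bin k corresponds to half the usual FFT frequency because the lag step is effectively doubled:

```python
import numpy as np

def wigner_ville(x):
    """Discrete Wigner-Ville distribution of a complex (analytic) signal.

    Row t is the FFT over lag tau of the instantaneous autocorrelation
    x[t+tau] * conj(x[t-tau]); a pure tone at normalized frequency f
    therefore peaks in bin round(2 * f * n).
    """
    x = np.asarray(x, dtype=complex)
    n = len(x)
    w = np.zeros((n, n))
    for t in range(n):
        taumax = min(t, n - 1 - t)             # lags available at this time sample
        tau = np.arange(-taumax, taumax + 1)
        kernel = np.zeros(n, dtype=complex)
        kernel[tau % n] = x[t + tau] * np.conj(x[t - tau])
        w[t] = np.fft.fft(kernel).real
    return w
```

For two tones at normalized frequencies 1/16 and 3/16, the auto terms land in bins 8 and 24 (for n = 64) while the cross-term dominates bin 16, exactly between them.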

  15. Continuum-kinetic approach to sheath simulations

    NASA Astrophysics Data System (ADS)

    Cagas, Petr; Hakim, Ammar; Srinivasan, Bhuvana

    2016-10-01

    Simulations of sheaths are performed using a novel continuum-kinetic model with collisions including ionization/recombination. A discontinuous Galerkin method is used to directly solve the Boltzmann-Poisson system to obtain a particle distribution function. Direct discretization of the distribution function has advantages of being noise-free compared to particle-in-cell methods. The distribution function, which is available at each node of the configuration space, can be readily used to calculate the collision integrals in order to get ionization and recombination operators. Analytical models are used to obtain the cross-sections as a function of energy. Results will be presented incorporating surface physics with a classical sheath in Hall thruster-relevant geometry. This work was sponsored by the Air Force Office of Scientific Research under Grant Number FA9550-15-1-0193.

  16. Behavior of Triple Langmuir Probes in Non-Equilibrium Plasmas

    NASA Technical Reports Server (NTRS)

    Polzin, Kurt A.; Ratcliffe, Alicia C.

    2018-01-01

    The triple Langmuir probe is an electrostatic probe in which three probe tips collect current when inserted into a plasma. The triple probe differs from a simple single Langmuir probe in the nature of the voltage applied to the probe tips. In the single probe, a swept voltage is applied to the probe tip to acquire a waveform showing the collected current as a function of applied voltage (I-V curve). In a triple probe, three probe tips are electrically coupled to each other with constant voltages applied between each of the tips. The voltages are selected such that they would represent three points on the single Langmuir probe I-V curve. Elimination of the voltage sweep makes it possible to measure time-varying plasma properties in transient plasmas. Under the assumption of a Maxwellian plasma, one can determine the time-varying plasma temperature T(sub e)(t) and number density n(sub e)(t) from the applied voltage levels and the time-histories of the collected currents. In the present paper we examine the theory of triple probe operation, specifically focusing on the assumption of a Maxwellian plasma. Triple probe measurements have been widely employed for a number of pulsed and time-varying plasmas, including pulsed plasma thrusters (PPTs), dense plasma focus devices, plasma flows, and fusion experiments. While the equilibrium assumption may be justified for some applications, it is unlikely that it is fully justifiable for all pulsed and time-varying plasmas or for all times during the pulse of a plasma device. To examine a simple non-equilibrium plasma case, we return to basic governing equations of probe current collection and compute the current to the probes for a distribution function consisting of two Maxwellian distributions with different temperatures (the two-temperature Maxwellian).
A variation of this method is also employed, where one of the Maxwellians is offset from zero (in velocity space) to add a suprathermal beam of electrons to the tail of the main Maxwellian distribution (the bump-on-the-tail distribution function). For a range of parameters in these non-Maxwellian distributions, we compute the current collection to the probes. We compare the distribution function that was assumed a priori with the distribution function one would infer when applying standard triple probe theory to analyze the collected currents. For the assumed class of non-Maxwellian distribution functions this serves to illustrate the effect a non-Maxwellian plasma would have on results interpreted using the equilibrium triple probe current collection theory, allowing us to state the magnitudes of these deviations as a function of the assumed distribution function properties.
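    The kind of deviation described above can be hedged-sketched by writing the retarding-region electron current for a sum of Maxwellian components and inverting it with the single-Maxwellian two-bias formula. The planar flux form n*sqrt(T)*exp(V/T) (temperatures in eV, retarding bias V < 0) and all densities and temperatures below are illustrative assumptions, not the paper's governing equations:

```python
import numpy as np

def retarded_current(v_bias, populations):
    """Relative electron current to a retarding probe (v_bias < 0, in volts).

    populations: list of (density, temperature_eV) Maxwellian components.
    Each component contributes n * sqrt(T) * exp(V/T): thermal flux times
    the Boltzmann factor for the retarding bias.
    """
    return sum(n * np.sqrt(t) * np.exp(v_bias / t) for n, t in populations)

def inferred_te(v1, v2, populations):
    # Temperature (eV) a single-Maxwellian analysis would report from
    # currents measured at two retarding biases v1 < v2 < 0
    i1 = retarded_current(v1, populations)
    i2 = retarded_current(v2, populations)
    return (v2 - v1) / np.log(i2 / i1)
```

For a single Maxwellian the inversion is exact; adding a 10% hot tail pulls the inferred temperature to an intermediate value that depends on the chosen bias levels, which is the sort of systematic deviation the paper quantifies.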

  17. Distributed Compression in Camera Sensor Networks

    DTIC Science & Technology

    2006-02-13

    complicated in this context. This effort will make use of the correlation structure of the data given by the plenoptic function in the case of multi-camera systems. In many cases the structure of the plenoptic function can be estimated without requiring inter-sensor communications, but by using some a priori global geometrical information. Once the structure of the plenoptic function has been predicted, it is possible to develop specific distributed

  18. Ray tracing the Wigner distribution function for optical simulations

    NASA Astrophysics Data System (ADS)

    Mout, Marco; Wick, Michael; Bociort, Florian; Petschulat, Joerg; Urbach, Paul

    2018-01-01

    We study a simulation method that uses the Wigner distribution function to incorporate wave optical effects in an established framework based on geometrical optics, i.e., a ray tracing engine. We use the method to calculate point spread functions and show that it is accurate for paraxial systems but produces unphysical results in the presence of aberrations. The cause of these anomalies is explained using an analytical model.

  19. On push-forward representations in the standard gyrokinetic model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miyato, N., E-mail: miyato.naoaki@jaea.go.jp; Yagi, M.; Scott, B. D.

    2015-01-15

    Two representations of fluid moments in terms of a gyro-center distribution function and gyro-center coordinates, which are called push-forward representations, are compared in the standard electrostatic gyrokinetic model. In the representation conventionally used to derive the gyrokinetic Poisson equation, the pull-back transformation of the gyro-center distribution function contains effects of the gyro-center transformation and therefore electrostatic potential fluctuations, which is described by the Poisson brackets between the distribution function and scalar functions generating the gyro-center transformation. Usually, only the lowest order solution of the generating function at first order is considered to explicitly derive the gyrokinetic Poisson equation. This is true in explicitly deriving representations of scalar fluid moments with polarization terms. One also recovers the particle diamagnetic flux at this order because it is associated with the guiding-center transformation. However, higher-order solutions are needed to derive finite Larmor radius terms of particle flux including the polarization drift flux from the conventional representation. On the other hand, the lowest order solution is sufficient for the other representation, in which the gyro-center transformation part is combined with the guiding-center one and the pull-back transformation of the distribution function does not appear.

  20. Equilibration in the time-dependent Hartree-Fock approach probed with the Wigner distribution function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loebl, N.; Maruhn, J. A.; Reinhard, P.-G.

    2011-09-15

    By calculating the Wigner distribution function in the reaction plane, we are able to probe the phase-space behavior in the time-dependent Hartree-Fock scheme during a heavy-ion collision in a consistent framework. Various expectation values of operators are calculated by evaluating the corresponding integrals over the Wigner function. In this approach, it is straightforward to define and analyze quantities even locally. We compare the Wigner distribution function with the smoothed Husimi distribution function. Different reaction scenarios are presented by analyzing central and noncentral ^16O + ^16O and ^96Zr + ^132Sn collisions. Although we observe strong dissipation in the time evolution of global observables, there is no evidence for complete equilibration in the local analysis of the Wigner function. Because the initial phase-space volumes of the fragments barely merge and mean values of the observables are conserved in fusion reactions over thousands of fm/c, we conclude that the time-dependent Hartree-Fock method provides a good description of the early stage of a heavy-ion collision but does not provide a mechanism to change the phase-space structure in a dramatic way necessary to obtain complete equilibration.

  1. Ecological variation in South American geophagine cichlids arose during an early burst of adaptive morphological and functional evolution

    PubMed Central

    Arbour, Jessica Hilary; López-Fernández, Hernán

    2013-01-01

    Diversity and disparity are unequally distributed both phylogenetically and geographically. This uneven distribution may be owing to differences in diversification rates between clades resulting from processes such as adaptive radiation. We examined the rate and distribution of evolution in feeding biomechanics in the extremely diverse and continentally distributed South American geophagine cichlids. Evolutionary patterns in multivariate functional morphospace were examined using a phylomorphospace approach, disparity-through-time analyses and by comparing Brownian motion (BM) and adaptive peak evolutionary models using maximum likelihood. The most species-rich and functionally disparate clade (CAS) expanded more efficiently in morphospace and evolved more rapidly compared with both BM expectations and its sister clade (GGD). Members of the CAS clade also exhibited an early burst in functional evolution that corresponds to the development of modern ecological roles and may have been related to the colonization of a novel adaptive peak characterized by fast oral jaw mechanics. Furthermore, reduced ecological opportunity following this early burst may have restricted functional evolution in the GGD clade, which is less species-rich and more ecologically specialized. Patterns of evolution in ecologically important functional traits are consistent with a pattern of adaptive radiation within the most diverse clade of Geophagini. PMID:23740780

  2. A non-stationary cost-benefit based bivariate extreme flood estimation approach

    NASA Astrophysics Data System (ADS)

    Qi, Wei; Liu, Junguo

    2018-02-01

    Cost-benefit analysis and flood frequency analysis have been integrated into a comprehensive framework to estimate cost effective design values. However, previous cost-benefit based extreme flood estimation relies on stationary assumptions and analyzes dependent flood variables separately. A Non-Stationary Cost-Benefit based bivariate design flood estimation (NSCOBE) approach is developed in this study to investigate the influence of non-stationarities in both the dependence of flood variables and the marginal distributions on extreme flood estimation. The dependence is modeled utilizing copula functions. Previous design flood selection criteria are not suitable for NSCOBE since they ignore the time-varying dependence of flood variables. Therefore, a risk calculation approach is proposed based on non-stationarities in both marginal probability distributions and copula functions. A case study with 54-year observed data is utilized to illustrate the application of NSCOBE. Results show NSCOBE can effectively integrate non-stationarities in both copula functions and marginal distributions into cost-benefit based design flood estimation. It is also found that there is a trade-off between the maximum probability of exceedance calculated from copula functions and marginal distributions. This study for the first time provides a new approach towards a better understanding of the influence of non-stationarities in both copula functions and marginal distributions on extreme flood estimation, and could be beneficial to cost-benefit based non-stationary bivariate design flood estimation across the world.
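    The dependence-modeling step can be sketched with a Gumbel-Hougaard copula, a common choice for positively dependent flood peaks and volumes; letting theta (and the marginal parameters) vary with time is what introduces the non-stationarity. The functions below are a generic illustration, not the NSCOBE implementation:

```python
import numpy as np

def gumbel_copula(u, v, theta):
    # Gumbel-Hougaard copula C(u, v); theta >= 1, with theta = 1 giving independence
    # and larger theta giving stronger upper-tail dependence
    return np.exp(-(((-np.log(u))**theta + (-np.log(v))**theta))**(1.0 / theta))

def joint_exceedance(u, v, theta):
    # P(U > u, V > v) by inclusion-exclusion: the "AND" design event that both
    # flood variables exceed their marginal design quantiles u and v
    return 1.0 - u - v + gumbel_copula(u, v, theta)
```

With marginal non-exceedance probabilities u(t), v(t) and theta(t) all evaluated per year, the same two lines give the time-varying joint risk used in the cost-benefit trade-off.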

  3. Deployment strategy for battery energy storage system in distribution network based on voltage violation regulation

    NASA Astrophysics Data System (ADS)

    Wu, H.; Zhou, L.; Xu, T.; Fang, W. L.; He, W. G.; Liu, H. M.

    2017-11-01

    In order to improve the situation of voltage violation caused by the grid-connection of photovoltaic (PV) systems in a distribution network, a bi-level programming model is proposed for battery energy storage system (BESS) deployment. The objective function of the inner level programming is to minimize voltage violation, with the power of PV and BESS as the variables. The objective function of the outer level programming is to minimize the comprehensive function originated from the inner level programming and all the BESS operating parameters, with the capacity and rated power of BESS as the variables. The differential evolution (DE) algorithm is applied to solve the model. Based on distribution network operation scenarios with photovoltaic generation under multiple alternative output modes, the simulation results of the IEEE 33-bus system prove that the deployment strategy of BESS proposed in this paper is well adapted to voltage violation regulation in variable distribution network operation scenarios. It contributes to regulating voltage violation in the distribution network, as well as improving the utilization of PV systems.

  4. Two approximations of the present value distribution of a disability annuity

    NASA Astrophysics Data System (ADS)

    Spreeuw, Jaap

    2006-02-01

    The distribution function of the present value of a cash flow can be approximated by means of a distribution function of a random variable, which is also the present value of a sequence of payments, but with a simpler structure. The corresponding random variable has the same expectation as the random variable corresponding to the original distribution function and is a stochastic upper bound of convex order. A sharper upper bound can be obtained if more information about the risk is available. In this paper, it will be shown that such an approach can be adopted for disability annuities (also known as income protection policies) in a three state model under Markov assumptions. Benefits are payable during any spell of disability whilst premiums are only due whenever the insured is healthy. The quality of the two approximations is investigated by comparing the distributions obtained with the one derived from the algorithm presented in the paper by Hesselager and Norberg [Insurance Math. Econom. 18 (1996) 35-42].

  5. Voltage stress effects on microcircuit accelerated life test failure rates

    NASA Technical Reports Server (NTRS)

    Johnson, G. M.

    1976-01-01

    The applicability of Arrhenius and Eyring reaction rate models for describing microcircuit aging characteristics as a function of junction temperature and applied voltage was evaluated. The results of a matrix of accelerated life tests with a single metal oxide semiconductor microcircuit operated at six different combinations of temperature and voltage were used to evaluate the models. A total of 450 devices from two different lots were tested at ambient temperatures between 200 C and 250 C and applied voltages between 5 Vdc and 15 Vdc. A statistical analysis of the surface-related failure data resulted in bimodal failure distributions comprising two lognormal distributions: a 'freak' distribution observed early in time, and a 'main' distribution observed later in time. The Arrhenius model was shown to provide a good description of device aging as a function of temperature at a fixed voltage. The Eyring model also appeared to provide a reasonable description of main distribution device aging as a function of temperature and voltage. Circuit diagrams are shown.
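    The temperature and voltage acceleration described here can be illustrated with standard acceleration-factor formulas. The Arrhenius factor follows the usual form; the exponential voltage term in the Eyring-type variant is one common convention and is an assumption of this sketch, not necessarily the exact model fitted in the paper:

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_af(t_use_c, t_stress_c, ea_ev):
    # Acceleration factor between use and stress junction temperatures (Celsius)
    # for thermally activated failure with activation energy ea_ev (eV)
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp(ea_ev / K_B * (1.0 / t_use - 1.0 / t_stress))

def eyring_af(t_use_c, t_stress_c, ea_ev, v_use, v_stress, gamma):
    # Eyring-type generalization: Arrhenius temperature term times an
    # exponential voltage term with sensitivity gamma (illustrative form)
    return arrhenius_af(t_use_c, t_stress_c, ea_ev) * math.exp(gamma * (v_stress - v_use))
```

With these factors, a failure time observed at 250 C / 15 Vdc is divided by the factor to project a use-condition lifetime.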

  6. Observations of a free-energy source for intense electrostatic waves. [in upper atmosphere near upper hybrid resonance frequency

    NASA Technical Reports Server (NTRS)

    Kurth, W. S.; Frank, L. A.; Gurnett, D. A.; Burek, B. G.; Ashour-Abdalla, M.

    1980-01-01

    Significant progress has been made in understanding intense electrostatic waves near the upper hybrid resonance frequency in terms of the theory of multiharmonic cyclotron emission using a classical loss-cone distribution function as a model. Recent observations by Hawkeye 1 and GEOS 1 have verified the existence of loss-cone distributions in association with the intense electrostatic wave events; however, other observations by Hawkeye and ISEE have indicated that loss cones are not always observable during the wave events, and in fact other forms of free energy may also be responsible for the instability. Now, for the first time, a positively sloped feature in the perpendicular distribution function has been uniquely identified with intense electrostatic wave activity. Correspondingly, we suggest that the theory is flexible under substantial modifications of the model distribution function.

  7. Transmission of ˜ 10 keV electron beams through thin ceramic foils: Measurements and Monte Carlo simulations of electron energy distribution functions

    NASA Astrophysics Data System (ADS)

    Morozov, A.; Heindl, T.; Skrobol, C.; Wieser, J.; Krücken, R.; Ulrich, A.

    2008-07-01

    Electron beams with particle energy of ~10 keV were sent through 300 nm thick ceramic (Si3N4 + SiO2) foils and the resulting electron energy distribution functions were recorded using a retarding grid technique. The results are compared with Monte Carlo simulations performed with two publicly available packages, Geant4 and Casino v2.42. It is demonstrated that Geant4, unlike Casino, provides electron energy distribution functions very similar to the experimental distributions. Both simulation packages provide a quite precise average energy of transmitted electrons: we demonstrate that the maximum uncertainty of the calculated values of the average energy is 6% for Geant4 and 8% for Casino, taking into account all systematic uncertainties and the discrepancies in the experimental and simulated data.

  8. Analysis of scattering statistics and governing distribution functions in optical coherence tomography.

    PubMed

    Sugita, Mitsuro; Weatherbee, Andrew; Bizheva, Kostadinka; Popov, Ivan; Vitkin, Alex

    2016-07-01

    The probability density function (PDF) of light scattering intensity can be used to characterize the scattering medium. We have recently shown that in optical coherence tomography (OCT), a PDF formalism can be sensitive to the number of scatterers in the probed scattering volume and can be represented by the K-distribution, a functional descriptor for non-Gaussian scattering statistics. Expanding on this initial finding, here we examine polystyrene microsphere phantoms with different sphere sizes and concentrations, and also human skin and fingernail in vivo. It is demonstrated that the K-distribution offers an accurate representation for the measured OCT PDFs. The behavior of the shape parameter of K-distribution that best fits the OCT scattering results is investigated in detail, and the applicability of this methodology for biological tissue characterization is demonstrated and discussed.
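    A K-distributed intensity can be simulated as speckle (exponential) intensity whose local mean is itself gamma-distributed; the shape parameter m falls as the effective number of scatterers in the probed volume decreases. The sketch below illustrates this compound construction and the resulting normalized second moment <I^2>/<I>^2 = 2(1 + 1/m); it is a generic illustration of K-statistics, not this paper's fitting procedure:

```python
import numpy as np

def k_distributed_intensity(mu, m, size, rng=None):
    """Sample intensities whose statistics follow a K-distribution.

    Construction: exponential (fully developed speckle) intensity whose local
    mean is gamma-distributed with shape m and overall mean mu. Small m
    (few scatterers) gives strongly non-Gaussian statistics; m -> infinity
    recovers pure exponential (Rayleigh-amplitude) statistics.
    """
    rng = np.random.default_rng(rng)
    local_mean = rng.gamma(shape=m, scale=mu / m, size=size)
    return rng.exponential(local_mean)

def normalized_second_moment(i):
    # For a K-distribution, <I^2>/<I>^2 = 2 * (1 + 1/m); 2 in the Gaussian limit
    return np.mean(i**2) / np.mean(i)**2
```

Fitting the measured moment (or the full histogram) for m is one simple route to the shape-parameter behavior investigated in the paper.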

  9. Dynamics of modulated beams in spectral domain

    DOE PAGES

    Yampolsky, Nikolai A.

    2017-07-16

    General formalism for describing dynamics of modulated beams along linear beamlines is developed. We describe modulated beams with a spectral distribution function which represents the Fourier transform of the conventional beam distribution function in the 6-dimensional phase space. The introduced spectral distribution function is localized in some region of the spectral domain for nearly monochromatic modulations. It can be characterized with a small number of typical parameters such as the lowest order moments of the spectral distribution. We study evolution of the modulated beams in linear beamlines and find that characteristic spectral parameters transform linearly. The developed approach significantly simplifies analysis of various schemes proposed for seeding X-ray free electron lasers. We use this approach to study several recently proposed schemes and find the bandwidth of the output bunching in each case.

  10. An Estimation of the Gamma-Ray Burst Afterglow Apparent Optical Brightness Distribution Function

    NASA Astrophysics Data System (ADS)

    Akerlof, Carl W.; Swan, Heather F.

    2007-12-01

    By using recent publicly available observational data obtained in conjunction with the NASA Swift gamma-ray burst (GRB) mission and a novel data analysis technique, we have been able to make some rough estimates of the GRB afterglow apparent optical brightness distribution function. The results suggest that 71% of all burst afterglows have optical magnitudes with mR<22.1 at 1000 s after the burst onset, the dimmest detected object in the data sample. There is a strong indication that the apparent optical magnitude distribution function peaks at mR~19.5. Such estimates may prove useful in guiding future plans to improve GRB counterpart observation programs. The employed numerical techniques might find application in a variety of other data analysis problems in which the intrinsic distributions must be inferred from a heterogeneous sample.

  11. The use of discontinuities and functional groups to assess relative resilience in complex systems

    USGS Publications Warehouse

    Allen, Craig R.; Gunderson, Lance; Johnson, A.R.

    2005-01-01

    It is evident when the resilience of a system has been exceeded and the system qualitatively changed. However, it is not clear how to measure resilience in a system prior to the demonstration that the capacity for resilient response has been exceeded. We argue that self-organizing human and natural systems are structured by a relatively small set of processes operating across scales in time and space. These structuring processes should generate a discontinuous distribution of structures and frequencies, where discontinuities mark the transition from one scale to another. Resilience is not driven by the identity of elements of a system, but rather by the functions those elements provide, and their distribution within and across scales. A self-organizing system that is resilient should maintain patterns of function within and across scales despite the turnover of specific elements (for example, species, cities). However, the loss of functions, or a decrease in functional representation at certain scales will decrease system resilience. It follows that some distributions of function should be more resilient than others. We propose that the determination of discontinuities, and the quantification of function both within and across scales, produce relative measures of resilience in ecological and other systems. We describe a set of methods to assess the relative resilience of a system based upon the determination of discontinuities and the quantification of the distribution of functions in relation to those discontinuities. © 2005 Springer Science+Business Media, Inc.

  12. Intensive Versus Distributed Aphasia Therapy: A Nonrandomized, Parallel-Group, Dosage-Controlled Study.

    PubMed

    Dignam, Jade; Copland, David; McKinnon, Eril; Burfein, Penni; O'Brien, Kate; Farrell, Anna; Rodriguez, Amy D

    2015-08-01

    Most studies comparing different levels of aphasia treatment intensity have not controlled the dosage of therapy provided. Consequently, the true effect of treatment intensity in aphasia rehabilitation remains unknown. Aphasia Language Impairment and Functioning Therapy is an intensive, comprehensive aphasia program. We investigated the efficacy of a dosage-controlled trial of Aphasia Language Impairment and Functioning Therapy, when delivered in an intensive versus distributed therapy schedule, on communication outcomes in participants with chronic aphasia. Thirty-four adults with chronic, poststroke aphasia were recruited to participate in an intensive (n=16; 16 hours per week; 3 weeks) versus distributed (n=18; 6 hours per week; 8 weeks) therapy program. Treatment included 48 hours of impairment, functional, computer, and group-based aphasia therapy. Distributed therapy resulted in significantly greater improvements on the Boston Naming Test when compared with intensive therapy immediately post therapy (P=0.04) and at 1-month follow-up (P=0.002). We found comparable gains on measures of participants' communicative effectiveness, communication confidence, and communication-related quality of life for the intensive and distributed treatment conditions at post-therapy and 1-month follow-up. Aphasia Language Impairment and Functioning Therapy resulted in superior clinical outcomes on measures of language impairment when delivered in a distributed versus intensive schedule. The therapy program had a positive effect on participants' functional communication and communication-related quality of life, regardless of treatment intensity. These findings contribute to our understanding of the effect of treatment intensity in aphasia rehabilitation and have important clinical implications for service delivery models. © 2015 American Heart Association, Inc.

  13. Optimal startup control of a jacketed tubular reactor.

    NASA Technical Reports Server (NTRS)

    Hahn, D. R.; Fan, L. T.; Hwang, C. L.

    1971-01-01

    The optimal startup policy of a jacketed tubular reactor, in which a first-order, reversible, exothermic reaction takes place, is presented. A distributed maximum principle is presented for determining weak necessary conditions for optimality of a diffusional distributed parameter system. A numerical technique is developed for practical implementation of the distributed maximum principle. This involves the sequential solution of the state and adjoint equations, in conjunction with a functional gradient technique for iteratively improving the control function.

  14. Unbiased estimators for spatial distribution functions of classical fluids

    NASA Astrophysics Data System (ADS)

    Adib, Artur B.; Jarzynski, Christopher

    2005-01-01

    We use a statistical-mechanical identity closely related to the familiar virial theorem, to derive unbiased estimators for spatial distribution functions of classical fluids. In particular, we obtain estimators for both the fluid density ρ(r) in the vicinity of a fixed solute and the pair correlation g(r) of a homogeneous classical fluid. We illustrate the utility of our estimators with numerical examples, which reveal advantages over traditional histogram-based methods of computing such distributions.
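    For context, the traditional histogram estimator of g(r) that such unbiased estimators are compared against can be sketched as follows (a generic baseline, assuming a periodic cubic box, the minimum-image convention, and r_max no larger than half the box length):

```python
import numpy as np

def pair_correlation_histogram(positions, box, r_max, n_bins):
    """Histogram estimator of the pair correlation g(r) for a cubic periodic box.

    positions: (N, 3) array; box: edge length; requires r_max <= box / 2.
    Returns (g, r_mid): g(r) values and bin midpoints.
    """
    n = len(positions)
    rho = n / box**3
    edges = np.linspace(0.0, r_max, n_bins + 1)
    counts = np.zeros(n_bins)
    for i in range(n - 1):
        d = positions[i + 1:] - positions[i]
        d -= box * np.round(d / box)              # minimum-image convention
        r = np.sqrt((d**2).sum(axis=1))
        counts += np.histogram(r, bins=edges)[0]  # each pair counted once
    shell = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    g = counts * 2.0 / (n * rho * shell)          # normalize by ideal-gas pair count
    return g, 0.5 * (edges[1:] + edges[:-1])
```

The binning bias (g is averaged over finite shells) and the poor statistics in small-r bins are exactly what estimator-based alternatives aim to avoid.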

  15. A dynamic model of the marriage market-part 1: matching algorithm based on age preference and availability.

    PubMed

    Matthews, A P; Garenne, M L

    2013-09-01

    The matching algorithm in a dynamic marriage market model is described in this first of two companion papers. Iterative Proportional Fitting is used to find a marriage function (an age distribution of new marriages for both sexes), in a stable reference population, that is consistent with the one-sex age distributions of new marriages, and includes age preference. The one-sex age distributions (which are the marginals of the two-sex distribution) are based on the Picrate model, and age preference on a normal distribution, both of which may be adjusted by choice of parameter values. For a population that is perturbed from the reference state, the total number of new marriages is found as the harmonic mean of target totals for men and women obtained by applying reference population marriage rates to the perturbed population. The marriage function uses the age preference function, assumed to be the same for the reference and the perturbed populations, to distribute the total number of new marriages. The marriage function also has an availability factor that varies as the population changes with time, where availability depends on the supply of unmarried men and women. To simplify exposition, only first marriage is treated, and the algorithm is illustrated by application to Zambia. In the second paper, remarriage and dissolution are included. Copyright © 2013 Elsevier Inc. All rights reserved.
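    The core Iterative Proportional Fitting step, scaling a two-sex seed matrix until its margins match the one-sex totals of new marriages, can be sketched as follows (generic IPF, not the paper's full matching algorithm; the row and column targets must share the same grand total):

```python
import numpy as np

def ipf(seed, row_targets, col_targets, iters=100):
    """Iterative Proportional Fitting.

    seed: positive matrix of relative weights (e.g. an age-preference matrix,
    rows = male ages, columns = female ages). Alternately rescales rows and
    columns so the margins converge to the target one-sex age distributions.
    """
    m = np.asarray(seed, dtype=float).copy()
    for _ in range(iters):
        m *= (row_targets / m.sum(axis=1))[:, None]   # match male-age margin
        m *= (col_targets / m.sum(axis=0))[None, :]   # match female-age margin
    return m
```

In the model described above, the fitted matrix plays the role of the two-sex marriage function whose margins are the observed one-sex distributions.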

  16. Effect of the Yield Stress and r-value Distribution on the Earing Profile of Cup Drawing with Yld2000-2d Yield Function

    NASA Astrophysics Data System (ADS)

    Lou, Yanshan; Bae, Gihyun; Lee, Changsoo; Huh, Hoon

    2010-06-01

    This paper deals with the effect of the yield stress and r-value distribution on the earing in the cup drawing. The anisotropic yield function, Yld2000-2d yield function, is selected to describe the anisotropy of two metal sheets, 719B and AA5182-O. The tool dimensions are taken from the benchmark problem of NUMISHEET'2002. The Downhill Simplex method is applied to identify the anisotropic coefficients in the Yld2000-2d yield function. Simulations of the drawing process are performed to investigate the earing profile of the two materials. The earing profiles obtained from simulations are compared with the analytical model developed by Hosford and Caddell. Simulations are conducted with respect to the change of the yield stress and r-value distribution, respectively. The correlation between the anisotropy and the earing tendency is investigated based on simulation data. Finally, the earing mechanism is analyzed through the deformation process of the blank during the cup deep drawing. It can be concluded that ears are located at angular positions with lower yield stress and higher r-value, while valleys appear at angular positions with higher yield stress and lower r-value. The effect of the yield stress distribution on the cup height distribution is more important than that of the r-value distribution.

  17. Transverse momentum dependent (TMD) parton distribution functions generated in the modified DGLAP formalism based on the valence-like distributions

    NASA Astrophysics Data System (ADS)

    Hosseinkhani, H.; Modarres, M.; Olanj, N.

    2017-07-01

    Transverse momentum dependent (TMD) parton distributions, also referred to as unintegrated parton distribution functions (UPDFs), are produced via the Kimber-Martin-Ryskin (KMR) prescription. The GJR08 set of parton distribution functions (PDFs) which are based on the valence-like distributions is used, at the leading order (LO) and the next-to-leading order (NLO) approximations, as inputs of the KMR formalism. The general and the relative behaviors of the generated TMD PDFs at LO and NLO and their ratios in a wide range of the transverse momentum values, i.e. kt2 = 10, 102, 104 and 108GeV2 are investigated. It is shown that the properties of the parent valence-like PDFs are imprinted on the daughter TMD PDFs. Imposing the angular ordering constraint (AOC) leads to the dynamical variable limits on the integrals which in turn increase the contributions from the lower scales at lower kt2. The results are compared with our previous studies based on the MSTW2008 input PDFs and it is shown that the present calculation gives flatter TMD PDFs. Finally, a comparison of longitudinal structure function (FL) is made by using the produced TMD PDFs and those that were generated through the MSTW2008-LO PDF from our previous work and the corresponding data from H1 and ZEUS collaborations and a reasonable agreement is found.

  18. Optimum structural design based on reliability and proof-load testing

    NASA Technical Reports Server (NTRS)

    Shinozuka, M.; Yang, J. N.

    1969-01-01

    Proof-load testing eliminates structures with strength less than the proof load and improves the reliability value used in analysis. It truncates the distribution function of strength at the proof load, thereby alleviating the need to verify a fitted distribution function at the lower tail, where data are usually nonexistent.
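    The truncation described can be written as F_T(x) = (F(x) − F(p))/(1 − F(p)) for x ≥ p, and 0 below the proof load p. A minimal sketch in Python, assuming a Weibull strength distribution for illustration (the abstract does not commit to a particular form):

```python
import math

def weibull_cdf(x, k=2.0, lam=1.0):
    """CDF of an assumed Weibull strength distribution."""
    return 1.0 - math.exp(-((x / lam) ** k)) if x > 0 else 0.0

def truncated_cdf(x, proof_load, cdf=weibull_cdf):
    """CDF of strength conditional on surviving a proof-load test:
    F_T(x) = (F(x) - F(p)) / (1 - F(p)) for x >= p, else 0."""
    Fp = cdf(proof_load)
    if x < proof_load:
        return 0.0
    return (cdf(x) - Fp) / (1.0 - Fp)
```

Surviving units thus have zero probability mass below the proof load, which is exactly the lower-tail region the abstract notes is hard to verify from data.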

  19. Tsallis Entropy and the Transition to Scaling in Fragmentation

    NASA Astrophysics Data System (ADS)

    Sotolongo-Costa, Oscar; Rodriguez, Arezky H.; Rodgers, G. J.

    2000-12-01

    By using the maximum entropy principle with Tsallis entropy we obtain a fragment size distribution function which undergoes a transition to scaling. This distribution function reduces to those obtained by other authors using Shannon entropy. The treatment is easily generalisable to any process of fractioning with suitable constraints.
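    Maximizing the Tsallis entropy under standard constraints yields a q-exponential fragment size distribution, which recovers the ordinary exponential (Shannon) form as q → 1. A hedged sketch (β and the overall normalization are placeholders, not values from the paper):

```python
import math

def q_exponential(x, q, beta=1.0):
    """Unnormalized q-exponential, exp_q(-beta*x):
    [1 - (1-q)*beta*x]^(1/(1-q)), with a cutoff where the base is <= 0.
    Reduces to exp(-beta*x) in the limit q -> 1 (Shannon case)."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(-beta * x)
    base = 1.0 - (1.0 - q) * beta * x
    return base ** (1.0 / (1.0 - q)) if base > 0.0 else 0.0
```

For q > 1 the tail decays as a power law, which is the scaling regime the abstract refers to; for q < 1 the distribution has finite support.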

  20. Excitation functions of parameters extracted from three-source (net-)proton rapidity distributions in Au-Au and Pb-Pb collisions over an energy range from AGS to RHIC

    NASA Astrophysics Data System (ADS)

    Gao, Li-Na; Liu, Fu-Hu; Sun, Yan; Sun, Zhu; Lacey, Roy A.

    2017-03-01

    Experimental results of the rapidity spectra of protons and net-protons (protons minus antiprotons) emitted in gold-gold (Au-Au) and lead-lead (Pb-Pb) collisions, measured by several collaborations at the alternating gradient synchrotron (AGS), super proton synchrotron (SPS), and relativistic heavy ion collider (RHIC), are described by a three-source distribution. The values of the distribution width σC and fraction kC of the central rapidity region, and the distribution width σF and rapidity shift Δy of the forward/backward rapidity regions, are then obtained. The excitation function of σC generally increases with the center-of-mass energy per nucleon pair √s_NN. The excitation function of σF shows a saturation at √s_NN = 8.8 GeV. The excitation function of kC shows a minimum at √s_NN = 8.8 GeV and a saturation at √s_NN ≈ 17 GeV. The excitation function of Δy increases linearly with ln(√s_NN) over the considered energy range.
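    A three-source rapidity density of this kind can be sketched as a central source (weight kC) plus symmetric forward/backward sources shifted by ±Δy. Gaussian source shapes are an assumption here, chosen only to illustrate the role of the four fitted parameters; the paper's exact source forms may differ:

```python
import math

def three_source_dndy(y, k_c, sigma_c, sigma_f, dy):
    """Rapidity density: central Gaussian (weight k_c) plus two
    forward/backward Gaussians at +/- dy sharing the remaining weight."""
    def gauss(y, mu, s):
        return math.exp(-0.5 * ((y - mu) / s) ** 2) / (s * math.sqrt(2.0 * math.pi))
    k_f = (1.0 - k_c) / 2.0
    return (k_c * gauss(y, 0.0, sigma_c)
            + k_f * gauss(y, dy, sigma_f)
            + k_f * gauss(y, -dy, sigma_f))
```

By construction the density is symmetric in y and normalized to unity, matching the symmetric Au-Au and Pb-Pb systems considered.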

  1. Bridging Numerical and Analytical Models of Transient Travel Time Distributions: Challenges and Opportunities

    NASA Astrophysics Data System (ADS)

    Danesh Yazdi, M.; Klaus, J.; Condon, L. E.; Maxwell, R. M.

    2017-12-01

    Recent advancements in analytical solutions to quantify water and solute time-variant travel time distributions (TTDs) and the related StorAge Selection (SAS) functions synthesize catchment complexity into a simplified, lumped representation. While these analytical approaches are easy and efficient in application, they require high-frequency hydrochemical data for parameter estimation. Alternatively, integrated hydrologic models coupled to Lagrangian particle-tracking approaches can directly simulate age under different catchment geometries and complexity, at greater computational expense. Here, we compare and contrast the two approaches by exploring the influence of the spatial distribution of subsurface heterogeneity, interactions between distinct flow domains, diversity of flow pathways, and recharge rate on the shape of TTDs and the related SAS functions. To this end, we use a parallel three-dimensional variably saturated groundwater model, ParFlow, to solve for the velocity fields in the subsurface. A particle-tracking model, SLIM, is then implemented to determine the age distributions at every real time and domain location, facilitating a direct characterization of the SAS functions, as opposed to analytical approaches that require calibration of such functions. Steady-state results reveal that the assumption of a random age-sampling scheme might only hold in the saturated region of homogeneous catchments, resulting in an exponential TTD. This assumption is, however, violated when the vadose zone is included, as the underlying SAS function then gives a higher preference to older ages. The dynamical variability of the true SAS functions is also shown to be largely masked by the smooth analytical SAS functions. As the variability of subsurface spatial heterogeneity increases, the shape of the TTD approaches a power-law distribution function, with a broader range of both shorter and longer travel times. We further found that a larger (smaller) magnitude of effective precipitation shifts the scale of the TTD towards younger (older) travel times, while the shape of the TTD remains unchanged. This work constitutes a first step in linking a numerical transport model and analytical solutions of TTDs to study their assumptions and limitations, providing physical inferences for empirical parameters.
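    Under the random age-sampling (well-mixed) assumption mentioned above, the steady-state TTD is exponential with mean turnover time τ = storage/flow. A minimal sketch of that limiting case (variable names are illustrative, not from the paper):

```python
import math

def exponential_ttd(T, storage, flow):
    """Travel-time pdf under the random-sampling assumption:
    p(T) = (1/tau) * exp(-T/tau), with tau = storage / flow."""
    tau = storage / flow
    return math.exp(-T / tau) / tau
```

The paper's point is precisely that this simple form breaks down once the vadose zone or subsurface heterogeneity enters, pushing the TTD towards heavier, power-law-like tails.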

  2. Implications of Atmospheric Test Fallout Data for Nuclear Winter.

    NASA Astrophysics Data System (ADS)

    Baker, George Harold, III

    1987-09-01

    Atmospheric test fallout data have been used to determine admissible dust particle size distributions for nuclear winter studies. The research was originally motivated by extreme differences noted in the magnitude and longevity of dust effects predicted by particle size distributions routinely used in fallout predictions versus those used for nuclear winter studies. Three different sets of historical data have been analyzed: (1) stratospheric burden of Strontium-90 and Tungsten-185, 1954-1967 (92 contributing events); (2) continental U.S. Strontium-90 fallout through 1958 (75 contributing events); (3) local fallout from selected Nevada tests (16 events). The contribution of dust to possible long-term climate effects following a nuclear exchange depends strongly on the particle size distribution, which affects both the atmospheric residence time and the optical depth. One-dimensional models of stratospheric/tropospheric fallout removal were developed and used to identify optimum particle distributions. Results indicate that particle distributions which properly predict bulk stratospheric activity transfer tend to be somewhat smaller than the number size distributions used in initial nuclear winter studies. In addition, both 90Sr and 185W fallout behavior is better predicted by the lognormal distribution function than by the prevalent power-law hybrid function. It is shown that the power-law behavior of particle samples may well be an aberration of gravitational cloud stratification. Results support the possible existence of two independent particle size distributions in clouds generated by surface or near-surface bursts: one governs late-time stratospheric fallout, the other early-time fallout. A bimodal lognormal distribution is proposed to describe the cloud particle population. The distribution predicts higher initial sunlight attenuation and lower late-time attenuation than the power-law hybrid function used in initial nuclear winter studies.
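    A bimodal lognormal number distribution of the kind proposed can be sketched as a weighted mixture of two lognormal modes. The mode parameters below are hypothetical placeholders, not the fitted values from this work:

```python
import math

def lognormal_pdf(d, mu, sigma):
    """Lognormal pdf in particle diameter d (d > 0)."""
    return (math.exp(-((math.log(d) - mu) ** 2) / (2.0 * sigma ** 2))
            / (d * sigma * math.sqrt(2.0 * math.pi)))

def bimodal_lognormal(d, w, mu1, s1, mu2, s2):
    """Mixture of two lognormal modes: one for the fine (late-time,
    stratospheric) population, one for the coarse (early-fallout) one."""
    return w * lognormal_pdf(d, mu1, s1) + (1.0 - w) * lognormal_pdf(d, mu2, s2)
```

Each mode integrates to one, so the mixture weight w directly controls how much of the particle number sits in the fine, long-residence-time population.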

  3. Inner Magnetospheric Superthermal Electron Transport: Photoelectron and Plasma Sheet Electron Sources

    NASA Technical Reports Server (NTRS)

    Khazanov, G. V.; Liemohn, M. W.; Kozyra, J. U.; Moore, T. E.

    1998-01-01

    Two time-dependent kinetic models of superthermal electron transport are combined to conduct global calculations of the nonthermal electron distribution function throughout the inner magnetosphere. It is shown that the energy range of validity for this combined model extends down to the superthermal-thermal intersection at a few eV, allowing for the calculation of the entire distribution function and thus an accurate heating rate to the thermal plasma. Because of the linearity of the formulas, the source terms are separated to calculate the distributions from the various populations, namely photoelectrons (PEs) and plasma sheet electrons (PSEs). These distributions are discussed in detail, examining the processes responsible for their formation in the various regions of the inner magnetosphere. It is shown that convection, corotation, and Coulomb collisions are the dominant processes in the formation of the PE distribution function and that PSEs are dominated by the interplay between the drift terms. Of note is that the PEs propagate around the nightside in a narrow channel at the edge of the plasmasphere, as Coulomb collisions reduce the fluxes inside it and convection compresses the flux tubes inward. These distributions are then recombined to show the development of the total superthermal electron distribution function in the inner magnetosphere and its influence on the thermal plasma. PEs usually dominate the dayside heating, with integral energy fluxes to the ionosphere reaching 10(exp 10) eV/sq cm/s in the plasmasphere, while heating from the PSEs typically does not exceed 10(exp 8) eV/sq cm/s. On the nightside, the inner plasmasphere is usually unheated by superthermal electrons. A feature of these combined spectra is that the distribution often has upward slopes with energy, particularly at the crossover from PE to PSE dominance, indicating that instabilities are possible.

  4. Vector wind and vector wind shear models 0 to 27 km altitude for Cape Kennedy, Florida, and Vandenberg AFB, California

    NASA Technical Reports Server (NTRS)

    Smith, O. E.

    1976-01-01

    Techniques are presented to derive several statistical wind models; they follow from the properties of the multivariate normal probability function. Assuming that the winds are bivariate normally distributed, then (1) the wind components and conditional wind components are univariate normally distributed, (2) the wind speed is Rayleigh distributed, (3) the conditional distribution of wind speed given a wind direction is Rayleigh distributed, and (4) the frequency of wind direction can be derived. All of these distributions are derived from the five sample parameters of the bivariate normal wind distribution. By further assuming that the winds at two altitudes are quadrivariate normally distributed, the vector wind shear is bivariate normally distributed and the modulus of the vector wind shear is Rayleigh distributed. The conditional distribution of wind component shears given a wind component is normal. Examples of these and other properties of the multivariate normal probability distribution function as applied to wind data samples from Cape Kennedy, Florida, and Vandenberg AFB, California, are given. A technique to develop a synthetic vector wind profile model of interest for aerospace vehicle applications is presented.
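    The Rayleigh result for wind speed follows directly from the bivariate normal assumption: if the zonal and meridional components are independent N(0, σ) with zero mean, the modulus is Rayleigh(σ) with mean σ√(π/2). A quick Monte Carlo check of that property (a sketch, not the report's actual modeling code):

```python
import math
import random

def simulate_wind_speeds(sigma, n, seed=0):
    """Draw zero-mean zonal/meridional components as independent N(0, sigma);
    the resulting speed hypot(u, v) is then Rayleigh(sigma)."""
    rng = random.Random(seed)
    return [math.hypot(rng.gauss(0.0, sigma), rng.gauss(0.0, sigma))
            for _ in range(n)]
```

The sample mean of the simulated speeds should approach σ√(π/2) ≈ 1.253σ as the sample size grows.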

  5. Wigner functions for nonclassical states of a collection of two-level atoms

    NASA Technical Reports Server (NTRS)

    Agarwal, G. S.; Dowling, Jonathan P.; Schleich, Wolfgang P.

    1993-01-01

    The general theory of atomic angular momentum states is used to derive the Wigner distribution function for atomic angular momentum number states, coherent states, and squeezed states. These Wigner functions W(theta,phi) are represented as a pseudo-probability distribution in spherical coordinates theta and phi on the surface of a sphere of radius √(j(j+1)), where j is the total angular momentum.

  6. Soliton sustainable socio-economic distribution

    NASA Astrophysics Data System (ADS)

    Dresvyannikov, M. A.; Petrova, M. V.; Tshovrebov, A. M.

    2017-11-01

    In the work presented, we consider, from closely related standpoints: (1) the stability of socio-economic distributions; (2) a possible mechanism for the formation of fractional power-law dependences in the Cobb-Douglas production function; (3) the introduction of a fractional-order derivative for a general analysis of a fractional power function; and (4) bringing the interest rate and the Cobb-Douglas production function into mutual agreement.

  7. The Generation, Radiation and Prediction of Supersonic Jet Noise. Volume 1

    DTIC Science & Technology

    1978-10-01

    standard, Gaussian correlation function model can yield a good noise spectrum prediction (at 90°), but the corresponding axial source distributions do not...forms for the turbulence cross-correlation function. Good agreement was obtained between measured and calculated far-field noise spectra. However, the...complementary error function profile (3.63) was found to provide a good fit to the axial velocity distribution for a wide range of Mach numbers in the initial

  8. Estimating parameter of Rayleigh distribution by using Maximum Likelihood method and Bayes method

    NASA Astrophysics Data System (ADS)

    Ardianti, Fitri; Sutarman

    2018-01-01

    In this paper, we use maximum likelihood estimation and the Bayes method under several risk functions to estimate the parameter of the Rayleigh distribution, in order to determine which method is best. The prior used in the Bayes method is Jeffreys' non-informative prior. Maximum likelihood estimation and the Bayes method under the precautionary loss function, the entropy loss function, and the L1 loss function are compared in terms of bias and MSE values using the R program. The results are displayed in tables to facilitate the comparisons.
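    For the maximum-likelihood part, the Rayleigh scale parameter has a closed-form estimator, σ̂ = √(Σxᵢ²/(2n)). A minimal sketch of that estimator (the Bayes estimators under the various loss functions are not reproduced here):

```python
import math

def rayleigh_mle(data):
    """Closed-form maximum-likelihood estimate of the Rayleigh scale:
    sigma_hat = sqrt(sum(x_i^2) / (2 * n))."""
    n = len(data)
    return math.sqrt(sum(x * x for x in data) / (2.0 * n))
```

Because the estimator is closed-form, comparisons with Bayes estimators reduce to evaluating bias and MSE over repeated samples, which is what the paper does in R.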

  9. Single-diffractive production of dijets within the kt-factorization approach

    NASA Astrophysics Data System (ADS)

    Łuszczak, Marta; Maciuła, Rafał; Szczurek, Antoni; Babiarz, Izabela

    2017-09-01

    We discuss single-diffractive production of dijets. The cross section is calculated within the resolved Pomeron picture, for the first time in the kt-factorization approach, neglecting the transverse momentum of the Pomeron. We use Kimber-Martin-Ryskin unintegrated parton (gluon, quark, antiquark) distributions in both the proton and the Pomeron or subleading Reggeon. The unintegrated parton distributions are calculated based on conventional mmht2014nlo parton distribution functions in the proton and on the H1 Collaboration diffractive parton distribution functions used previously in the analysis of the diffractive structure function and dijets at HERA. For comparison, we present results of calculations performed within the collinear-factorization approach. Our results resemble those obtained in the next-to-leading-order approach. The calculation is (must be) supplemented by the so-called gap survival factor, which may, in general, depend on kinematical variables. We try to describe the existing data from the Tevatron and make detailed predictions for possible LHC measurements. Several differential distributions are calculated. The ĒT, η̄ and x̄p distributions are compared with the Tevatron data. A reasonable agreement is obtained for the first two distributions. The last one requires introducing a gap survival factor that depends on kinematical variables. We discuss how the phenomenological dependence on one kinematical variable may influence the dependence on other variables such as ĒT and η̄. Several distributions for the LHC are shown.

  10. Calculated spanwise lift distributions, influence functions, and influence coefficients for unswept wings in subsonic flow

    NASA Technical Reports Server (NTRS)

    Diederich, Franklin W; Zlotnick, Martin

    1955-01-01

    Spanwise lift distributions have been calculated for nineteen unswept wings with various aspect ratios and taper ratios and with a variety of angle-of-attack or twist distributions, including flap and aileron deflections, by means of the Weissinger method with eight control points on the semispan. Also calculated were aerodynamic influence coefficients which pertain to a certain definite set of stations along the span, and several methods are presented for calculating aerodynamic influence functions and coefficients for stations other than those stipulated. The information presented in this report can be used in the analysis of untwisted wings or wings with known twist distributions, as well as in aeroelastic calculations involving initially unknown twist distributions.

  11. Results of the Verification of the Statistical Distribution Model of Microseismicity Emission Characteristics

    NASA Astrophysics Data System (ADS)

    Cianciara, Aleksander

    2016-09-01

    The paper presents the results of research aimed at verifying the hypothesis that the Weibull distribution is an appropriate statistical distribution model of microseismicity emission characteristics, namely the energy of phenomena and the inter-event time. It is understood that the emission under consideration is induced by natural rock mass fracturing. Because the recorded emission contains noise, it is subjected to appropriate filtering. The study has been conducted using statistical verification of the null hypothesis that the Weibull distribution fits the empirical cumulative distribution function. As the model describing the cumulative distribution function is given in analytical form, its verification may be performed using the Kolmogorov-Smirnov goodness-of-fit test. Interpretation by means of probabilistic methods requires specifying a correct model of the statistical distribution of the data, because such methods, e.g. hazard analysis or methods based on maximum-value statistics, do not use the measurement data directly but rather their statistical distributions.
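    Because the model CDF is available in analytical form, the Kolmogorov-Smirnov statistic reduces to the largest gap between the empirical and model CDFs evaluated at the sorted data. A generic sketch (the Weibull parameters would come from a separate fitting step, not shown):

```python
import math

def ks_statistic(data, cdf):
    """Kolmogorov-Smirnov distance between the empirical CDF of `data`
    and a candidate analytical CDF, checked on both sides of each jump."""
    xs = sorted(data)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        d = max(d, abs((i + 1) / n - f), abs(i / n - f))
    return d

def weibull_cdf(x, k, lam):
    """Analytical Weibull CDF, as assumed in the verification."""
    return 1.0 - math.exp(-((x / lam) ** k)) if x > 0 else 0.0
```

In use, `ks_statistic(events, lambda x: weibull_cdf(x, k, lam))` would be compared against the Kolmogorov-Smirnov critical value for the sample size.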

  12. Three-dimensional analytical model for the spatial variation of the foreshock electron distribution function - Systematics and comparisons with ISEE observations

    NASA Technical Reports Server (NTRS)

    Fitzenreiter, R. J.; Scudder, J. D.; Klimas, A. J.

    1990-01-01

    A model which is consistent with the solar wind and shock surface boundary conditions for the foreshock electron distribution in the absence of wave-particle effects is formulated for an arbitrary location behind the magnetic tangent to the earth's bow shock. Variations of the gyrophase-averaged velocity distribution are compared and contrasted with in situ ISEE observations. It is found that magnetic mirroring of solar wind electrons is the most important process by which nonmonotonic reduced electron distributions in the foreshock are produced. Leakage of particles from the magnetosheath is shown to be relatively unimportant in determining reduced distributions that are nonmonotonic. The two-dimensional distribution function off the magnetic field direction is the crucial contribution in producing reduced distributions which have beams. The time scale for modification of the electron velocity distribution in velocity space can be significantly influenced by steady state spatial gradients in the background imposed by the curved shock geometry.

  13. A generalized statistical model for the size distribution of wealth

    NASA Astrophysics Data System (ADS)

    Clementi, F.; Gallegati, M.; Kaniadakis, G.

    2012-12-01

    In a recent paper in this journal (Clementi et al 2009 J. Stat. Mech. P02037), we proposed a new, physically motivated, distribution function for modeling individual incomes, having its roots in the framework of the κ-generalized statistical mechanics. The performance of the κ-generalized distribution was checked against real data on personal income for the United States in 2003. In this paper we extend our previous model so as to be able to account for the distribution of wealth. Probabilistic functions and inequality measures of this generalized model for wealth distribution are obtained in closed form. In order to check the validity of the proposed model, we analyze the US household wealth distributions from 1984 to 2009 and find an excellent agreement with the data, superior to that of any other model known in the literature.

  14. Product of Ginibre matrices: Fuss-Catalan and Raney distributions

    NASA Astrophysics Data System (ADS)

    Penson, Karol A.; Życzkowski, Karol

    2011-06-01

    Squared singular values of a product of s square random Ginibre matrices are asymptotically characterized by probability distributions Ps(x), such that their moments are equal to the Fuss-Catalan numbers of order s. We find a representation of the Fuss-Catalan distributions Ps(x) in terms of a combination of s hypergeometric functions of the type sFs-1. The explicit formula derived here is exact for an arbitrary positive integer s, and for s=1 it reduces to the Marchenko-Pastur distribution. Using similar techniques, involving the Mellin transform and the Meijer G function, we find exact expressions for the Raney probability distributions, the moments of which are given by a two-parameter generalization of the Fuss-Catalan numbers. These distributions can also be considered as a two-parameter generalization of the Wigner semicircle law.

  15. Product of Ginibre matrices: Fuss-Catalan and Raney distributions.

    PubMed

    Penson, Karol A; Zyczkowski, Karol

    2011-06-01

    Squared singular values of a product of s square random Ginibre matrices are asymptotically characterized by probability distributions P(s)(x), such that their moments are equal to the Fuss-Catalan numbers of order s. We find a representation of the Fuss-Catalan distributions P(s)(x) in terms of a combination of s hypergeometric functions of the type (s)F(s-1). The explicit formula derived here is exact for an arbitrary positive integer s, and for s=1 it reduces to the Marchenko-Pastur distribution. Using similar techniques, involving the Mellin transform and the Meijer G function, we find exact expressions for the Raney probability distributions, the moments of which are given by a two-parameter generalization of the Fuss-Catalan numbers. These distributions can also be considered as a two-parameter generalization of the Wigner semicircle law.

  16. Universal noise and Efimov physics

    NASA Astrophysics Data System (ADS)

    Nicholson, Amy N.

    2016-03-01

    Probability distributions for correlation functions of particles interacting via random-valued fields are discussed as a novel tool for determining the spectrum of a theory. In particular, this method is used to determine the energies of universal N-body clusters tied to Efimov trimers, for even N, by investigating the distribution of a correlation function of two particles at unitarity. Using numerical evidence that this distribution is log-normal, an analytical prediction for the N-dependence of the N-body binding energies is made.

  17. A Comparison of the Pencil-of-Function Method with Prony’s Method, Wiener Filters and Other Identification Techniques,

    DTIC Science & Technology

    1977-12-01

    exponentials encountered are complex and they are approximately at harmonic frequencies. Moreover, the real parts of the complex exponentials are much...functions as a basis for expanding the current distribution on an antenna by the method of moments results in a regularized ill-posed problem with respect...to the current distribution on the antenna structure. However, the problem is not regularized with respect to charge because the charge distribution

  18. Handling of computational in vitro/in vivo correlation problems by Microsoft Excel II. Distribution functions and moments.

    PubMed

    Langenbucher, Frieder

    2003-01-01

    MS Excel is a useful tool to handle in vitro/in vivo correlation (IVIVC) distribution functions, with emphasis on the Weibull and the biexponential distribution, which are most useful for the presentation of cumulative profiles, e.g. release in vitro or urinary excretion in vivo, and differential profiles such as the plasma response in vivo. The discussion includes moments (AUC and mean) as summarizing statistics, and data-fitting algorithms for parameter estimation.
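    The Weibull cumulative profile and its mean discussed here are F(t) = 1 − exp(−((t − tᵢ)/t_d)^b), with mean residence time tᵢ + t_d·Γ(1 + 1/b). A short sketch of both quantities (symbol names are generic; this is not the paper's Excel worksheet layout):

```python
import math

def weibull_release(t, td, b, ti=0.0):
    """Cumulative fraction released at time t:
    F(t) = 1 - exp(-((t - ti) / td)**b), zero before the lag time ti."""
    if t <= ti:
        return 0.0
    return 1.0 - math.exp(-((t - ti) / td) ** b)

def weibull_mean_time(td, b, ti=0.0):
    """First moment (mean time) of the Weibull profile: ti + td * Gamma(1 + 1/b)."""
    return ti + td * math.gamma(1.0 + 1.0 / b)
```

For the shape parameter b = 1 the profile is first-order (exponential) and the mean reduces to tᵢ + t_d, a convenient sanity check when fitting in a spreadsheet.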

  19. Time-Frequency Distribution Analyses of Ku-Band Radar Doppler Echo Signals

    NASA Astrophysics Data System (ADS)

    Bujaković, Dimitrije; Andrić, Milenko; Bondžulić, Boban; Mitrović, Srđan; Simić, Slobodan

    2015-03-01

    Real radar echo signals of a pedestrian, a vehicle, and a group of helicopters are analyzed in order to maximize signal energy around the central Doppler frequency in the time-frequency plane. An optimization preserving this concentration is suggested, based on three well-known concentration measures, with various window functions and time-frequency distributions as optimization inputs. Experiments conducted on one analytic and three real signals show that energy concentration depends significantly on the time-frequency distribution and window function used, for all three criteria.

  20. Graded-index fibers, Wigner-distribution functions, and the fractional Fourier transform.

    PubMed

    Mendlovic, D; Ozaktas, H M; Lohmann, A W

    1994-09-10

    Two definitions of a fractional Fourier transform have been proposed previously. One is based on the propagation of a wave field through a graded-index medium, and the other is based on rotating a function's Wigner distribution. It is shown that both definitions are equivalent. An important result of this equivalency is that the Wigner distribution of a wave field rotates as the wave field propagates through a quadratic graded-index medium. The relation with ray-optics phase space is discussed.
