Sample records for probability mass functions

  1. Probability mass first flush evaluation for combined sewer discharges.

    PubMed

    Park, Inhyeok; Kim, Hongmyeong; Chae, Soo-Kwon; Ha, Sungryong

    2010-01-01

    The Korean government has put considerable effort into constructing sanitation facilities for controlling non-point source pollution. The first flush phenomenon is a prime example of such pollution. However, to date, several serious problems have arisen in the operation and treatment effectiveness of these facilities due to unsuitable design flow volumes and pollution loads. It is difficult to assess the optimal flow volume and pollution mass when considering both monetary and temporal limitations. The objective of this article was to characterize the discharge of storm runoff pollution from urban catchments in Korea and to estimate the probability of mass first flush (MFFn) using the storm water management model and probability density functions. A review of the gauged storms over the last two years, using probability density functions of rainfall volumes to assess representativeness, found all gauged storms to be valid representative precipitation events. Both the observed MFFn and probability MFFn in BE-1 indicated similarly large magnitudes of first flush, with roughly 40% of the total pollution mass contained in the first 20% of the runoff. In the case of BE-2, however, there was a significant difference between the observed MFFn and the probability MFFn.

  2. Probabilities for gravitational lensing by point masses in a locally inhomogeneous universe

    NASA Technical Reports Server (NTRS)

    Isaacson, Jeffrey A.; Canizares, Claude R.

    1989-01-01

    Probability functions for gravitational lensing by point masses that incorporate Poisson statistics and flux conservation are formulated in the Dyer-Roeder construction. Optical depths to lensing for distant sources are calculated using both the method of Press and Gunn (1973) which counts lenses in an otherwise empty cone, and the method of Ehlers and Schneider (1986) which projects lensing cross sections onto the source sphere. These are then used as parameters of the probability density for lensing in the case of a critical (q0 = 1/2) Friedmann universe. A comparison of the probability functions indicates that the effects of angle-averaging can be well approximated by adjusting the average magnification along a random line of sight so as to conserve flux.

  3. Gravitational lensing, time delay, and gamma-ray bursts

    NASA Technical Reports Server (NTRS)

    Mao, Shude

    1992-01-01

    The probability distributions of time delay in gravitational lensing by point masses and isolated galaxies (modeled as singular isothermal spheres) are studied. For point lenses (all with the same mass) the probability distribution is broad, and with a peak at delta(t) of about 50 S; for singular isothermal spheres, the probability distribution is a rapidly decreasing function with increasing time delay, with a median delta(t) equals about 1/h month, and its behavior depends sensitively on the luminosity function of galaxies. The present simplified calculation is particularly relevant to the gamma-ray bursts if they are of cosmological origin. The frequency of 'recurrent' bursts due to gravitational lensing by galaxies is probably between 0.05 and 0.4 percent. Gravitational lensing can be used as a test of the cosmological origin of gamma-ray bursts.

  4. Measurement of the top-quark mass with dilepton events selected using neuroevolution at CDF.

    PubMed

    Aaltonen, T; Adelman, J; Akimoto, T; Albrow, M G; Alvarez González, B; Amerio, S; Amidei, D; Anastassov, A; Annovi, A; Antos, J; Apollinari, G; Apresyan, A; Arisawa, T; Artikov, A; Ashmanskas, W; Attal, A; Aurisano, A; Azfar, F; Azzurri, P; Badgett, W; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Bartsch, V; Bauer, G; Beauchemin, P-H; Bedeschi, F; Bednar, P; Beecher, D; Behari, S; Bellettini, G; Bellinger, J; Benjamin, D; Beretvas, A; Beringer, J; Bhatti, A; Binkley, M; Bisello, D; Bizjak, I; Blair, R E; Blocker, C; Blumenfeld, B; Bocci, A; Bodek, A; Boisvert, V; Bolla, G; Bortoletto, D; Boudreau, J; Boveia, A; Brau, B; Bridgeman, A; Brigliadori, L; Bromberg, C; Brubaker, E; Budagov, J; Budd, H S; Budd, S; Burkett, K; Busetto, G; Bussey, P; Buzatu, A; Byrum, K L; Cabrera, S; Calancha, C; Campanelli, M; Campbell, M; Canelli, F; Canepa, A; Carlsmith, D; Carosi, R; Carrillo, S; Carron, S; Casal, B; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavaliere, V; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chang, S H; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, K; Chokheli, D; Chou, J P; Choudalakis, G; Chuang, S H; Chung, K; Chung, W H; Chung, Y S; Ciobanu, C I; Ciocci, M A; Clark, A; Clark, D; Compostella, G; Convery, M E; Conway, J; Copic, K; Cordelli, M; Cortiana, G; Cox, D J; Crescioli, F; Cuenca Almenar, C; Cuevas, J; Culbertson, R; Cully, J C; Dagenhart, D; Datta, M; Davies, T; de Barbaro, P; De Cecco, S; Deisher, A; De Lorenzo, G; Dell'orso, M; Deluca, C; Demortier, L; Deng, J; Deninno, M; Derwent, P F; di Giovanni, G P; Dionisi, C; Di Ruzza, B; Dittmann, J R; D'Onofrio, M; Donati, S; Dong, P; Donini, J; Dorigo, T; Dube, S; Efron, J; Elagin, A; Erbacher, R; Errede, D; Errede, S; Eusebi, R; Fang, H C; Farrington, S; Fedorko, W T; Feild, R G; Feindt, M; Fernandez, J P; Ferrazza, C; Field, R; Flanagan, G; Forrest, R; Franklin, M; Freeman, J C; Furic, I; Gallinaro, M; Galyardt, J; Garberson, F; Garcia, J E; Garfinkel, A F; Genser, K; Gerberich, H; Gerdes, D; Gessler, A; Giagu, S; Giakoumopoulou, V; Giannetti, P; Gibson, K; Gimmell, J L; Ginsburg, C M; Giokaris, N; Giordani, M; Giromini, P; Giunta, M; Giurgiu, G; Glagolev, V; Glenzinski, D; Gold, M; Goldschmidt, N; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Gresele, A; Grinstein, S; Grosso-Pilcher, C; Grundler, U; Guimaraes da Costa, J; Gunay-Unalan, Z; Haber, C; Hahn, K; Hahn, S R; Halkiadakis, E; Han, B-Y; Han, J Y; Handler, R; Happacher, F; Hara, K; Hare, D; Hare, M; Harper, S; Harr, R F; Harris, R M; Hartz, M; Hatakeyama, K; Hauser, J; Hays, C; Heck, M; Heijboer, A; Heinemann, B; Heinrich, J; Henderson, C; Herndon, M; Heuser, J; Hewamanage, S; Hidas, D; Hill, C S; Hirschbuehl, D; Hocker, A; Hou, S; Houlden, M; Hsu, S-C; Huffman, B T; Hughes, R E; Husemann, U; Huston, J; Incandela, J; Introzzi, G; Iori, M; Ivanov, A; James, E; Jayatilaka, B; Jeon, E J; Jha, M K; Jindariani, S; Johnson, W; Jones, M; Joo, K K; Jun, S Y; Jung, J E; Junk, T R; Kamon, T; Kar, D; Karchin, P E; Kato, Y; Kephart, R; Keung, J; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kimura, N; Kirsch, L; Klimenko, S; Knuteson, B; Ko, B R; Koay, S A; Kondo, K; Kong, D J; Konigsberg, J; Korytov, A; Kotwal, A V; Kreps, M; Kroll, J; Krop, D; Krumnack, N; Kruse, M; Krutelyov, V; Kubo, T; Kuhr, T; Kulkarni, N P; Kurata, M; Kusakabe, Y; Kwang, S; Laasanen, A T; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, 
G; Lazzizzera, I; Lecompte, T; Lee, E; Lee, S W; Leone, S; Lewis, J D; Lin, C S; Linacre, J; Lindgren, M; Lipeles, E; Lister, A; Litvintsev, D O; Liu, C; Liu, T; Lockyer, N S; Loginov, A; Loreti, M; Lovas, L; Lu, R-S; Lucchesi, D; Lueck, J; Luci, C; Lujan, P; Lukens, P; Lungu, G; Lyons, L; Lys, J; Lysak, R; Lytken, E; Mack, P; Macqueen, D; Madrak, R; Maeshima, K; Makhoul, K; Maki, T; Maksimovic, P; Malde, S; Malik, S; Manca, G; Manousakis-Katsikakis, A; Margaroli, F; Marino, C; Marino, C P; Martin, A; Martin, V; Martínez, M; Martínez-Ballarín, R; Maruyama, T; Mastrandrea, P; Masubuchi, T; Mattson, M E; Mazzanti, P; McFarland, K S; McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Menzione, A; Merkel, P; Mesropian, C; Miao, T; Miladinovic, N; Miller, R; Mills, C; Milnik, M; Mitra, A; Mitselmakher, G; Miyake, H; Moggi, N; Moon, C S; Moore, R; Morello, M J; Morlok, J; Movilla Fernandez, P; Mülmenstädt, J; Mukherjee, A; Muller, Th; Mumford, R; Murat, P; Mussini, M; Nachtman, J; Nagai, Y; Nagano, A; Naganoma, J; Nakamura, K; Nakano, I; Napier, A; Necula, V; Neu, C; Neubauer, M S; Nielsen, J; Nodulman, L; Norman, M; Norniella, O; Nurse, E; Oakes, L; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Orava, R; Osterberg, K; Pagan Griso, S; Pagliarone, C; Palencia, E; Papadimitriou, V; Papaikonomou, A; Paramonov, A A; Parks, B; Pashapour, S; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Pianori, E; Pinera, L; Pitts, K; Plager, C; Pondrom, L; Poukhov, O; Pounder, N; Prakoshyn, F; Pronko, A; Proudfoot, J; Ptohos, F; Pueschel, E; Punzi, G; Pursley, J; Rademacker, J; Rahaman, A; Ramakrishnan, V; Ranjan, N; Redondo, I; Reisert, B; Rekovic, V; Renton, P; Rescigno, M; Richter, S; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rodriguez, T; Rogers, E; Rolli, S; Roser, R; Rossi, M; Rossin, R; Roy, P; Ruiz, A; Russ, J; Rusu, V; Saarikko, H; Safonov, A; Sakumoto, W K; Saltó, O; Santi, L; Sarkar, S; Sartori, L; Sato, K; Savoy-Navarro, A; Scheidle, T; Schlabach, P; Schmidt, A; Schmidt, E E; Schmidt, M A; Schmidt, M P; Schmitt, M; Schwarz, T; Scodellaro, L; Scott, A L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semenov, A; Sexton-Kennedy, L; Sfyrla, A; Shalhout, S Z; Shears, T; Shekhar, R; Shepard, P F; Sherman, D; Shimojima, M; Shiraishi, S; Shochet, M; Shon, Y; Shreyber, I; Sidoti, A; Sinervo, P; Sisakyan, A; Slaughter, A J; Slaunwhite, J; Sliwa, K; Smith, J R; Snider, F D; Snihur, R; Soha, A; Somalwar, S; Sorin, V; Spalding, J; Spreitzer, T; Squillacioti, P; Stanitzki, M; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Stuart, D; Suh, J S; Sukhanov, A; Suslov, I; Suzuki, T; Taffard, A; Takashima, R; Takeuchi, Y; Tanaka, R; Tecchio, M; Teng, P K; Terashi, K; Thom, J; Thompson, A S; Thompson, G A; Thomson, E; Tipton, P; Tiwari, V; Tkaczyk, S; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Totaro, P; Tourneur, S; Tu, Y; Turini, N; Ukegawa, F; Vallecorsa, S; van Remortel, N; Varganov, A; Vataga, E; Vázquez, F; Velev, G; Vellidis, C; Veszpremi, V; Vidal, M; Vidal, R; Vila, I; Vilar, R; Vine, T; Vogel, M; Volobouev, I; Volpi, G; Würthwein, F; Wagner, P; Wagner, R G; Wagner, R L; Wagner-Kuhr, J; Wagner, W; Wakisaka, T; Wallny, R; Wang, S M; Warburton, A; Waters, D; Weinberger, M; Wester, W C; Whitehouse, B; Whiteson, D; Whiteson, S; Wicklund, A B; Wicklund, E; Williams, G; Williams, H H; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, C; Wright, T; Wu, X; Wynne, S M; Xie, S; Yagil, A; Yamamoto, K; 
Yamaoka, J; Yang, U K; Yang, Y C; Yao, W M; Yeh, G P; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, I; Yu, S S; Yun, J C; Zanello, L; Zanetti, A; Zaw, I; Zhang, X; Zheng, Y; Zucchelli, S

    2009-04-17

    We report a measurement of the top-quark mass M_t in the dilepton decay channel tt̄ → b ℓ′⁺ ν_ℓ′ b̄ ℓ⁻ ν̄_ℓ. Events are selected with a neural network which has been directly optimized for statistical precision in top-quark mass using neuroevolution, a technique modeled on biological evolution. The top-quark mass is extracted from per-event probability densities that are formed by the convolution of leading order matrix elements and detector resolution functions. The joint probability is the product of the probability densities from 344 candidate events in 2.0 fb⁻¹ of pp̄ collisions collected with the CDF II detector, yielding a measurement of M_t = 171.2 ± 2.7 (stat) ± 2.9 (syst) GeV/c².

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aaltonen, T.; Brucken, E.; Devoto, F.

    We search for resonant production of tt̄ pairs in 4.8 fb⁻¹ integrated luminosity of pp̄ collision data at √s = 1.96 TeV in the lepton+jets decay channel, where one top quark decays leptonically and the other hadronically. A matrix-element reconstruction technique is used; for each event a probability density function of the tt̄ candidate invariant mass is sampled. These probability density functions are used to construct a likelihood function, whereby the cross section for resonant tt̄ production is estimated, given a hypothetical resonance mass and width. The data indicate no evidence of resonant production of tt̄ pairs. A benchmark model of leptophobic Z′ → tt̄ is excluded with m_Z′ < 900 GeV/c² at 95% confidence level.

  6. A Dual Power Law Distribution for the Stellar Initial Mass Function

    NASA Astrophysics Data System (ADS)

    Hoffmann, Karl Heinz; Essex, Christopher; Basu, Shantanu; Prehl, Janett

    2018-05-01

    We introduce a new dual power law (DPL) probability distribution function for the mass distribution of stellar and substellar objects at birth, otherwise known as the initial mass function (IMF). The model contains both deterministic and stochastic elements, and provides a unified framework within which to view the formation of brown dwarfs and stars resulting from an accretion process that starts from extremely low mass seeds. It does not depend upon a top down scenario of collapsing (Jeans) masses or an initial lognormal or otherwise IMF-like distribution of seed masses. Like the modified lognormal power law (MLP) distribution, the DPL distribution has a power law at the high mass end, as a result of exponential growth of mass coupled with equally likely stopping of accretion at any time interval. Unlike the MLP, a power law decay also appears at the low mass end of the IMF. This feature is closely connected to the accretion stopping probability rising from an initially low value up to a high value. This might be associated with physical effects of ejections sometimes (i.e., rarely) stopping accretion at early times followed by outflow driven accretion stopping at later times, with the transition happening at a critical time (therefore mass). Comparing the DPL to empirical data, the critical mass is close to the substellar mass limit, suggesting that the onset of nuclear fusion plays an important role in the subsequent accretion history of a young stellar object.
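
    The power-law tails described above arise from exponential growth halted at a random time. As a rough illustration (not the authors' DPL derivation), the Python sketch below grows seed masses exponentially and stops accretion with a per-step probability that jumps from a low to a high value at an assumed critical time; the seed mass, growth rate, stopping probabilities, and critical time are all illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(0)

      def sample_final_mass(m_seed=1e-3, growth_rate=1.0, dt=0.01,
                            p_stop_early=0.002, p_stop_late=0.05, t_crit=3.0):
          """Grow a seed mass exponentially until accretion stops; the per-step
          stopping probability jumps from a low to a high value at t_crit."""
          m, t = m_seed, 0.0
          while True:
              p_stop = p_stop_early if t < t_crit else p_stop_late
              if rng.random() < p_stop:
                  return m
              m *= np.exp(growth_rate * dt)    # exponential mass growth
              t += dt

      masses = np.array([sample_final_mass() for _ in range(20000)])
      hist, edges = np.histogram(np.log10(masses), bins=60)
      # Approximately straight segments of log(hist) versus log-mass on either
      # side of the critical mass correspond to the two power-law regimes.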

  7. r.randomwalk v1.0, a multi-functional conceptual tool for mass movement routing

    NASA Astrophysics Data System (ADS)

    Mergili, M.; Krenn, J.; Chu, H.-J.

    2015-09-01

    We introduce r.randomwalk, a flexible and multi-functional open source tool for backward- and forward-analyses of mass movement propagation. r.randomwalk builds on GRASS GIS, the R software for statistical computing and the programming languages Python and C. Using constrained random walks, mass points are routed from defined release pixels of one to many mass movements through a digital elevation model until a defined break criterion is reached. Compared to existing tools, the major innovative features of r.randomwalk are: (i) multiple break criteria can be combined to compute an impact indicator score, (ii) the uncertainties of break criteria can be included by performing multiple parallel computations with randomized parameter settings, resulting in an impact indicator index in the range 0-1, (iii) built-in functions for validation and visualization of the results are provided, (iv) observed landslides can be back-analyzed to derive the density distribution of the observed angles of reach. This distribution can be employed to compute impact probabilities for each pixel. Further, impact indicator scores and probabilities can be combined with release indicator scores or probabilities, and with exposure indicator scores. We demonstrate the key functionalities of r.randomwalk (i) for a single event, the Acheron Rock Avalanche in New Zealand, (ii) for landslides in a 61.5 km2 study area in the Kao Ping Watershed, Taiwan; and (iii) for lake outburst floods in a 2106 km2 area in the Gunt Valley, Tajikistan.
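
    The routing idea can be sketched independently of GRASS GIS. The following Python fragment (not r.randomwalk itself) routes a single mass point from a release pixel through a toy digital elevation model by a constrained random walk over downslope neighbours, stopping when the angle of reach drops below a threshold; the DEM, cell size, and threshold angle are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(1)

      def random_walk_routing(dem, start, cellsize=10.0, min_reach_angle_deg=11.0):
          """Route one mass point downslope until the travel angle (total drop
          over horizontal distance) falls below the break criterion."""
          r, c = start
          z0 = dem[r, c]
          path, dist = [(r, c)], 0.0
          while True:
              # candidate moves: the 8 neighbours that are strictly lower
              nbrs = [(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                      if (dr, dc) != (0, 0)
                      and 0 <= r + dr < dem.shape[0] and 0 <= c + dc < dem.shape[1]
                      and dem[r + dr, c + dc] < dem[r, c]]
              if not nbrs:
                  return path                        # local sink: stop
              r, c = nbrs[rng.integers(len(nbrs))]   # constrained random choice
              dist += cellsize * np.hypot(*np.subtract((r, c), path[-1]))
              path.append((r, c))
              reach_angle = np.degrees(np.arctan2(z0 - dem[r, c], dist))
              if reach_angle < min_reach_angle_deg:
                  return path                        # break criterion reached

      dem = np.add.outer(np.linspace(100, 0, 50), np.linspace(50, 0, 50))  # toy slope
      impacted_pixels = random_walk_routing(dem, start=(0, 0))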

  8. r.randomwalk v1, a multi-functional conceptual tool for mass movement routing

    NASA Astrophysics Data System (ADS)

    Mergili, M.; Krenn, J.; Chu, H.-J.

    2015-12-01

    We introduce r.randomwalk, a flexible and multi-functional open-source tool for backward and forward analyses of mass movement propagation. r.randomwalk builds on GRASS GIS (Geographic Resources Analysis Support System - Geographic Information System), the R software for statistical computing and the programming languages Python and C. Using constrained random walks, mass points are routed from defined release pixels of one to many mass movements through a digital elevation model until a defined break criterion is reached. Compared to existing tools, the major innovative features of r.randomwalk are (i) multiple break criteria can be combined to compute an impact indicator score; (ii) the uncertainties of break criteria can be included by performing multiple parallel computations with randomized parameter sets, resulting in an impact indicator index in the range 0-1; (iii) built-in functions for validation and visualization of the results are provided; (iv) observed landslides can be back analysed to derive the density distribution of the observed angles of reach. This distribution can be employed to compute impact probabilities for each pixel. Further, impact indicator scores and probabilities can be combined with release indicator scores or probabilities, and with exposure indicator scores. We demonstrate the key functionalities of r.randomwalk for (i) a single event, the Acheron rock avalanche in New Zealand; (ii) landslides in a 61.5 km2 study area in the Kao Ping Watershed, Taiwan; and (iii) lake outburst floods in a 2106 km2 area in the Gunt Valley, Tajikistan.

  9. fixedTimeEvents: An R package for the distribution of distances between discrete events in fixed time

    NASA Astrophysics Data System (ADS)

    Liland, Kristian Hovde; Snipen, Lars

    When a series of Bernoulli trials occurs within a fixed time frame or limited space, it is often interesting to assess if the successful outcomes have occurred completely at random, or if they tend to group together. One example, in genetics, is detecting grouping of genes within a genome. Approximations of the distribution of successes are possible, but they become inaccurate for small sample sizes. In this article, we describe the exact distribution of time between random, non-overlapping successes in discrete time of fixed length. A complete description of the probability mass function, the cumulative distribution function, mean, variance and recurrence relation is included. We propose an associated test for the over-representation of short distances and illustrate the methodology through relevant examples. The theory is implemented in an R package including probability mass, cumulative distribution, quantile function, random number generator, simulation functions, and functions for testing.
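
    The package itself is written in R; as a language-neutral illustration of the quantity it describes (not the package's API), the Python sketch below builds an empirical null distribution of distances between successes placed completely at random in a fixed number of trials, against which an over-representation of short distances could be judged. The trial count, number of successes, and simulation size are assumptions.

      import numpy as np

      rng = np.random.default_rng(42)

      def distance_pmf(n_trials=200, n_successes=8, n_sim=50000):
          """Empirical pmf of the gaps between consecutive successes when
          n_successes positions are placed uniformly at random among n_trials."""
          counts = {}
          for _ in range(n_sim):
              pos = np.sort(rng.choice(n_trials, size=n_successes, replace=False))
              for d in np.diff(pos):
                  counts[d] = counts.get(d, 0) + 1
          total = sum(counts.values())
          return {d: c / total for d, c in sorted(counts.items())}

      pmf = distance_pmf()
      # Under complete randomness this pmf is the null distribution against
      # which an observed excess of short distances (clustering) can be judged.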

  10. Knock probability estimation through an in-cylinder temperature model with exogenous noise

    NASA Astrophysics Data System (ADS)

    Bares, P.; Selmanaj, D.; Guardiola, C.; Onder, C.

    2018-01-01

    This paper presents a new knock model which combines a deterministic knock model based on the in-cylinder temperature and an exogenous noise disturbing this temperature. The autoignition of the end-gas is modelled by an Arrhenius-like function and the knock probability is estimated by propagating a virtual error probability distribution. Results show that the random nature of knock can be explained by uncertainties in the in-cylinder temperature estimation. The model has only one parameter for calibration and thus can be easily adapted online. In order to reduce the measurement uncertainties associated with the air mass flow sensor, the trapped mass is derived from the in-cylinder pressure resonance, which improves the knock probability estimation and reduces the number of sensors needed for the model. A four-stroke SI engine was used for model validation. By varying the intake temperature, the engine speed, the injected fuel mass, and the spark advance, specific tests were conducted, which furnished data with various knock intensities and probabilities. The new model is able to predict the knock probability within a sufficient range at various operating conditions. The trapped mass obtained by the acoustical model was compared in steady conditions by using a fuel balance and a lambda sensor, and differences below 1% were found.
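
    A rough Monte Carlo sketch of the idea (not the authors' calibrated model): an Arrhenius-like autoignition integral is evaluated on an in-cylinder temperature trace perturbed by exogenous noise, and the knock probability is the fraction of realizations in which the integral reaches unity. All parameter values below are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(7)

      def knock_probability(T_trace, dt=1e-4, A=1.2e5, Ea_over_R=5000.0,
                            sigma_T=15.0, n_sim=2000):
          """Fraction of noisy temperature realizations whose Arrhenius-like
          autoignition integral  int A*exp(-Ea/(R*T)) dt  reaches 1 (knock)."""
          knocks = 0
          for _ in range(n_sim):
              T = T_trace + rng.normal(0.0, sigma_T)      # exogenous noise on T
              integral = np.sum(A * np.exp(-Ea_over_R / T) * dt)
              if integral >= 1.0:
                  knocks += 1
          return knocks / n_sim

      # Illustrative end-gas temperature trace (K) over 5 ms of compression/burn
      t = np.arange(0, 5e-3, 1e-4)
      T_trace = 700.0 + 150.0 * np.sin(np.pi * t / t[-1])
      print(knock_probability(T_trace))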

  11. Deep luminosity function of the globular cluster M13

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drukier, G.A.; Fahlman, G.G.; Richter, H.B.

    The luminosity function in a field of M13 at 14 core radii has been observed to M(V) = +12.0, and new theoretical, low-mass stellar models appropriate to M13 are used to convert the function to a mass function which extends to M = 0.18 solar, within a factor of two of brown dwarf masses at this metal abundance. As the number of stars observed in each magnitude bin is still increasing at the limit of the data, the presence of stars with masses lower than 0.18 solar is probable. This result sets an upper limit of 0.18 solar masses for low-mass cutoffs in dynamical models of M13. No single power-law mass function fits all the observations. The trend of the data supports the idea of a steep increase in the slope of the mass function for M less than 0.4 solar. The results imply that the total mass in low-mass stars in M13, and by implication elsewhere, is higher than was previously thought. 26 references.

  12. Neutrino mass priors for cosmology from random matrices

    NASA Astrophysics Data System (ADS)

    Long, Andrew J.; Raveri, Marco; Hu, Wayne; Dodelson, Scott

    2018-02-01

    Cosmological measurements of structure are placing increasingly strong constraints on the sum of the neutrino masses, Σm_ν, through Bayesian inference. Because these constraints depend on the choice for the prior probability π(Σm_ν), we argue that this prior should be motivated by fundamental physical principles rather than the ad hoc choices that are common in the literature. The first step in this direction is to specify the prior directly at the level of the neutrino mass matrix M_ν, since this is the parameter appearing in the Lagrangian of the particle physics theory. Thus by specifying a probability distribution over M_ν, and by including the known squared mass splittings, we predict a theoretical probability distribution over Σm_ν that we interpret as a Bayesian prior probability π(Σm_ν). Assuming a basis-invariant probability distribution on M_ν, also known as the anarchy hypothesis, we find that π(Σm_ν) peaks close to the smallest Σm_ν allowed by the measured mass splittings, roughly 0.06 eV (0.1 eV) for normal (inverted) ordering, due to the phenomenon of eigenvalue repulsion in random matrices. We consider three models for neutrino mass generation: Dirac, Majorana, and Majorana via the seesaw mechanism; differences in the predicted priors π(Σm_ν) allow for the possibility of having indications about the physical origin of neutrino masses once sufficient experimental sensitivity is achieved. We present fitting functions for π(Σm_ν), which provide a simple means for applying these priors to cosmological constraints on the neutrino masses or marginalizing over their impact on other cosmological parameters.
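
    A toy numerical version of the anarchy-hypothesis construction (a simplification of the paper's procedure, which conditions on both measured splittings) can be sketched as follows: draw basis-invariant complex random mass matrices, take their singular values as candidate neutrino masses, rescale each draw to an assumed atmospheric splitting, and histogram Σm_ν. The ensemble, the rescaling, and the splitting value are assumptions made for illustration only.

      import numpy as np

      rng = np.random.default_rng(3)
      DM2_ATM = 2.5e-3   # assumed atmospheric squared-mass splitting in eV^2

      def sample_sum_mnu(n_draws=20000):
          sums = np.empty(n_draws)
          for i in range(n_draws):
              # basis-invariant (unitarily invariant) complex random matrix
              M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
              m = np.sort(np.linalg.svd(M, compute_uv=False))     # candidate masses
              scale = np.sqrt(DM2_ATM / (m[2] ** 2 - m[0] ** 2))  # fix largest splitting
              sums[i] = scale * m.sum()
          return sums

      sums = sample_sum_mnu()
      # The histogram of `sums` is a toy stand-in for the prior pi(Sum m_nu)
      # induced by a basis-invariant measure on the mass matrix.
      print(np.percentile(sums, [5, 50, 95]))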

  13. Center-of-Mass Tomography and Wigner Function for Multimode Photon States

    NASA Astrophysics Data System (ADS)

    Dudinets, Ivan V.; Man'ko, Vladimir I.

    2018-06-01

    Tomographic probability representation of multimode electromagnetic field states in the scheme of center-of-mass tomography is reviewed. Both connection of the field state Wigner function and observable Weyl symbols with the center-of-mass tomograms as well as connection of the Grönewold kernel with the center-of-mass tomographic kernel determining the noncommutative product of the tomograms are obtained. The dual center-of-mass tomogram of the photon states are constructed and the dual tomographic kernel is obtained. The models of other generalized center-of-mass tomographies are discussed. Example of two-mode even and odd Schrödinger cat states is presented in details.

  14. Design and simulation of stratified probability digital receiver with application to the multipath communication

    NASA Technical Reports Server (NTRS)

    Deal, J. H.

    1975-01-01

    One approach to the problem of simplifying complex nonlinear filtering algorithms is to use stratified probability approximations, in which the continuous probability density functions of certain random variables are represented by discrete mass approximations. This technique is developed in this paper and used to simplify the filtering algorithms for the optimum receiver for signals corrupted by both additive and multiplicative noise.
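
    The core idea, replacing a continuous density by a discrete mass approximation, can be sketched in a few lines of Python; the grid, the example density, and the number of strata below are assumptions, not the paper's receiver design.

      import numpy as np
      from scipy import stats

      def discrete_mass_approximation(pdf, lo, hi, n_points=11):
          """Approximate a continuous density on [lo, hi] by point masses placed
          at the centres of n_points equal-width strata."""
          edges = np.linspace(lo, hi, n_points + 1)
          centres = 0.5 * (edges[:-1] + edges[1:])
          masses = np.array([pdf(c) for c in centres]) * np.diff(edges)
          masses /= masses.sum()                  # renormalize to a valid pmf
          return centres, masses

      # Example: discretize a standard normal density over +-4 sigma
      x, p = discrete_mass_approximation(stats.norm(0, 1).pdf, -4.0, 4.0)
      mean_approx = np.sum(x * p)                 # close to the continuous mean of 0
      var_approx = np.sum((x - mean_approx) ** 2 * p)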

  15. Detecting Anomalies in Process Control Networks

    NASA Astrophysics Data System (ADS)

    Rrushi, Julian; Kang, Kyoung-Don

    This paper presents the estimation-inspection algorithm, a statistical algorithm for anomaly detection in process control networks. The algorithm determines if the payload of a network packet that is about to be processed by a control system is normal or abnormal based on the effect that the packet will have on a variable stored in control system memory. The estimation part of the algorithm uses logistic regression integrated with maximum likelihood estimation in an inductive machine learning process to estimate a series of statistical parameters; these parameters are used in conjunction with logistic regression formulas to form a probability mass function for each variable stored in control system memory. The inspection part of the algorithm uses the probability mass functions to estimate the normalcy probability of a specific value that a network packet writes to a variable. Experimental results demonstrate that the algorithm is very effective at detecting anomalies in process control networks.
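
    A simplified sketch of the two-stage estimation/inspection idea, using scikit-learn's logistic regression in place of the paper's exact maximum-likelihood formulation; the context features, variable values, training data, and threshold are illustrative assumptions.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      # Synthetic stand-in training data: contextual features of past writes and
      # the discrete value each write stored in one control-system variable.
      rng = np.random.default_rng(0)
      X_train = rng.normal(size=(500, 3))                              # assumed context features
      y_train = rng.choice([0, 1, 2], size=500, p=[0.7, 0.25, 0.05])   # normal values

      # Estimation: logistic regression yields, for a given context, a
      # probability mass function over the values the variable can take.
      model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

      def is_anomalous(context, written_value, threshold=0.01):
          """Inspection: flag the packet if the normalcy probability of the value
          it writes falls below the threshold."""
          pmf = model.predict_proba(context.reshape(1, -1))[0]
          value_index = list(model.classes_).index(written_value)
          return pmf[value_index] < threshold

      print(is_anomalous(np.zeros(3), written_value=2))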

  16. Precise measurement of the top-quark mass in the lepton+jets topology at CDF II.

    PubMed

    Aaltonen, T; Abulencia, A; Adelman, J; Affolder, T; Akimoto, T; Albrow, M G; Amerio, S; Amidei, D; Anastassov, A; Anikeev, K; Annovi, A; Antos, J; Aoki, M; Apollinari, G; Arisawa, T; Artikov, A; Ashmanskas, W; Attal, A; Aurisano, A; Azfar, F; Azzi-Bacchetta, P; Azzurri, P; Bacchetta, N; Badgett, W; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Baroiant, S; Bartsch, V; Bauer, G; Beauchemin, P-H; Bedeschi, F; Behari, S; Bellettini, G; Bellinger, J; Belloni, A; Benjamin, D; Beretvas, A; Beringer, J; Berry, T; Bhatti, A; Binkley, M; Bisello, D; Bizjak, I; Blair, R E; Blocker, C; Blumenfeld, B; Bocci, A; Bodek, A; Boisvert, V; Bolla, G; Bolshov, A; Bortoletto, D; Boudreau, J; Boveia, A; Brau, B; Brigliadori, L; Bromberg, C; Brubaker, E; Budagov, J; Budd, H S; Budd, S; Burkett, K; Busetto, G; Bussey, P; Buzatu, A; Byrum, K L; Cabrera, S; Campanelli, M; Campbell, M; Canelli, F; Canepa, A; Carrillo, S; Carlsmith, D; Carosi, R; Carron, S; Casal, B; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chang, S H; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, I; Cho, K; Chokheli, D; Chou, J P; Choudalakis, G; Chuang, S H; Chung, K; Chung, W H; Chung, Y S; Cilijak, M; Ciobanu, C I; Ciocci, M A; Clark, A; Clark, D; Coca, M; Compostella, G; Convery, M E; Conway, J; Cooper, B; Copic, K; Cordelli, M; Cortiana, G; Crescioli, F; Cuenca Almenar, C; Cuevas, J; Culbertson, R; Cully, J C; DaRonco, S; Datta, M; D'Auria, S; Davies, T; Dagenhart, D; de Barbaro, P; De Cecco, S; Deisher, A; De Lentdecker, G; De Lorenzo, G; Dell'Orso, M; Delli Paoli, F; Demortier, L; Deng, J; Deninno, M; De Pedis, D; Derwent, P F; Di Giovanni, G P; Dionisi, C; Di Ruzza, B; Dittmann, J R; D'Onofrio, M; Dörr, C; Donati, S; Dong, P; Donini, J; Dorigo, T; Dube, S; Efron, J; Erbacher, R; Errede, D; Errede, S; Eusebi, R; Fang, H C; Farrington, S; Fedorko, I; Fedorko, W T; Feild, R G; Feindt, M; Fernandez, J P; Field, R; Flanagan, G; Forrest, R; Forrester, S; Franklin, M; Freeman, J C; Furic, I; Gallinaro, M; Galyardt, J; Garcia, J E; Garberson, F; Garfinkel, A F; Gay, C; Gerberich, H; Gerdes, D; Giagu, S; Giannetti, P; Gibson, K; Gimmell, J L; Ginsburg, C; Giokaris, N; Giordani, M; Giromini, P; Giunta, M; Giurgiu, G; Glagolev, V; Glenzinski, D; Gold, M; Goldschmidt, N; Goldstein, J; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Gresele, A; Grinstein, S; Grosso-Pilcher, C; Group, R C; Grundler, U; Guimaraes da Costa, J; Gunay-Unalan, Z; Haber, C; Hahn, K; Hahn, S R; Halkiadakis, E; Hamilton, A; Han, B-Y; Han, J Y; Handler, R; Happacher, F; Hara, K; Hare, D; Hare, M; Harper, S; Harr, R F; Harris, R M; Hartz, M; Hatakeyama, K; Hauser, J; Hays, C; Heck, M; Heijboer, A; Heinemann, B; Heinrich, J; Henderson, C; Herndon, M; Heuser, J; Hidas, D; Hill, C S; Hirschbuehl, D; Hocker, A; Holloway, A; Hou, S; Houlden, M; Hsu, S-C; Huffman, B T; Hughes, R E; Husemann, U; Huston, J; Incandela, J; Introzzi, G; Iori, M; Ivanov, A; Iyutin, B; James, E; Jang, D; Jayatilaka, B; Jeans, D; Jeon, E J; Jindariani, S; Johnson, W; Jones, M; Joo, K K; Jun, S Y; Jung, J E; Junk, T R; Kamon, T; Karchin, P E; Kato, Y; Kemp, Y; Kephart, R; Kerzel, U; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kimura, N; Kirsch, L; Klimenko, S; Klute, M; Knuteson, B; Ko, B R; Kondo, K; Kong, D J; Konigsberg, J; Korytov, A; Kotwal, A V; Kraan, A C; Kraus, J; Kreps, M; Kroll, J; Krumnack, N; Kruse, M; 
Krutelyov, V; Kubo, T; Kuhlmann, S E; Kuhr, T; Kulkarni, N P; Kusakabe, Y; Kwang, S; Laasanen, A T; Lai, S; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, G; Lazzizzera, I; LeCompte, T; Lee, J; Lee, J; Lee, Y J; Lee, S W; Lefèvre, R; Leonardo, N; Leone, S; Levy, S; Lewis, J D; Lin, C; Lin, C S; Lindgren, M; Lipeles, E; Lister, A; Litvintsev, D O; Liu, T; Lockyer, N S; Loginov, A; Loreti, M; Lu, R-S; Lucchesi, D; Lujan, P; Lukens, P; Lungu, G; Lyons, L; Lys, J; Lysak, R; Lytken, E; Mack, P; MacQueen, D; Madrak, R; Maeshima, K; Makhoul, K; Maki, T; Maksimovic, P; Malde, S; Malik, S; Manca, G; Manousakis, A; Margaroli, F; Marginean, R; Marino, C; Marino, C P; Martin, A; Martin, M; Martin, V; Martínez, M; Martínez-Ballarín, R; Maruyama, T; Mastrandrea, P; Masubuchi, T; Matsunaga, H; Mattson, M E; Mazini, R; Mazzanti, P; McFarland, K S; McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Menzemer, S; Menzione, A; Merkel, P; Mesropian, C; Messina, A; Miao, T; Miladinovic, N; Miles, J; Miller, R; Mills, C; Milnik, M; Mitra, A; Mitselmakher, G; Miyamoto, A; Moed, S; Moggi, N; Mohr, B; Moon, C S; Moore, R; Morello, M; Movilla Fernandez, P; Mülmenstädt, J; Mukherjee, A; Muller, Th; Mumford, R; Murat, P; Mussini, M; Nachtman, J; Nagano, A; Naganoma, J; Nakamura, K; Nakano, I; Napier, A; Necula, V; Neu, C; Neubauer, M S; Nielsen, J; Nodulman, L; Norniella, O; Nurse, E; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Oldeman, R; Orava, R; Osterberg, K; Pagliarone, C; Palencia, E; Papadimitriou, V; Papaikonomou, A; Paramonov, A A; Parks, B; Pashapour, S; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Piedra, J; Pinera, L; Pitts, K; Plager, C; Pondrom, L; Portell, X; Poukhov, O; Pounder, N; Prakoshyn, F; Pronko, A; Proudfoot, J; Ptohos, F; Punzi, G; Pursley, J; Rademacker, J; Rahaman, A; Ramakrishnan, V; Ranjan, N; Redondo, I; Reisert, B; Rekovic, V; Renton, P; Rescigno, M; Richter, S; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rogers, E; Rolli, S; Roser, R; Rossi, M; Rossin, R; Roy, P; Ruiz, A; Russ, J; Rusu, V; Saarikko, H; Safonov, A; Sakumoto, W K; Salamanna, G; Saltó, O; Santi, L; Sarkar, S; Sartori, L; Sato, K; Savard, P; Savoy-Navarro, A; Scheidle, T; Schlabach, P; Schmidt, E E; Schmidt, M P; Schmitt, M; Schwarz, T; Scodellaro, L; Scott, A L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semenov, A; Sexton-Kennedy, L; Sfyrla, A; Shalhout, S Z; Shapiro, M D; Shears, T; Shepard, P F; Sherman, D; Shimojima, M; Shochet, M; Shon, Y; Shreyber, I; Sidoti, A; Sinervo, P; Sisakyan, A; Slaughter, A J; Slaunwhite, J; Sliwa, K; Smith, J R; Snider, F D; Snihur, R; Soderberg, M; Soha, A; Somalwar, S; Sorin, V; Spalding, J; Spinella, F; Spreitzer, T; Squillacioti, P; Stanitzki, M; Staveris-Polykalas, A; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Stuart, D; Suh, J S; Sukhanov, A; Sun, H; Suslov, I; Suzuki, T; Taffard, A; Takashima, R; Takeuchi, Y; Tanaka, R; Tecchio, M; Teng, P K; Terashi, K; Thom, J; Thompson, A S; Thomson, E; Tipton, P; Tiwari, V; Tkaczyk, S; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Tourneur, S; Trischuk, W; Tsuno, S; Tu, Y; Turini, N; Ukegawa, F; Uozumi, S; Vallecorsa, S; van Remortel, N; Varganov, A; Vataga, E; Vazquez, F; Velev, G; Vellidis, C; Veramendi, G; Veszpremi, V; Vidal, M; Vidal, R; Vila, I; Vilar, R; Vine, T; Vogel, M; Vollrath, I; Volobouev, I; Volpi, G; Würthwein, F; Wagner, P; Wagner, R G; Wagner, R L; Wagner, J; Wagner, W; Wallny, 
R; Wang, S M; Warburton, A; Waters, D; Weinberger, M; Wester, W C; Whitehouse, B; Whiteson, D; Wicklund, A B; Wicklund, E; Williams, G; Williams, H H; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, C; Wright, T; Wu, X; Wynne, S M; Yagil, A; Yamamoto, K; Yamaoka, J; Yamashita, T; Yang, C; Yang, U K; Yang, Y C; Yao, W M; Yeh, G P; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, I; Yu, S S; Yun, J C; Zanello, L; Zanetti, A; Zaw, I; Zhang, X; Zhou, J; Zucchelli, S

    2007-11-02

    We present a measurement of the mass of the top quark from proton-antiproton collisions recorded at the CDF experiment in Run II of the Fermilab Tevatron. We analyze events from the single lepton plus jets final state (tt̄ → W⁺b W⁻b̄ → ℓνb qq̄′b̄). The top-quark mass is extracted using a direct calculation of the probability density that each event corresponds to the tt̄ final state. The probability is a function of both the mass of the top quark and the energy scale of the calorimeter jets, which is constrained in situ by the hadronic W boson mass. Using 167 events observed in 955 pb⁻¹ of integrated luminosity, we achieve the single most precise measurement of the top-quark mass, 170.8 ± 2.2 (stat.) ± 1.4 (syst.) GeV/c².

  17. Environmental dependence of the galaxy stellar mass function in the Dark Energy Survey Science Verification Data

    DOE PAGES

    Etherington, J.; Thomas, D.; Maraston, C.; ...

    2016-01-04

    Measurements of the galaxy stellar mass function are crucial to understand the formation of galaxies in the Universe. In a hierarchical clustering paradigm it is plausible that there is a connection between the properties of galaxies and their environments. Evidence for environmental trends has been established in the local Universe. The Dark Energy Survey (DES) provides large photometric datasets that enable further investigation of the assembly of mass. In this study we use ~3.2 million galaxies from the (South Pole Telescope) SPT-East field in the DES science verification (SV) dataset. From grizY photometry we derive galaxy stellar masses and absolute magnitudes, and determine the errors on these properties using Monte-Carlo simulations using the full photometric redshift probability distributions. We compute galaxy environments using a fixed conical aperture for a range of scales. We construct galaxy environment probability distribution functions and investigate the dependence of the environment errors on the aperture parameters. We compute the environment components of the galaxy stellar mass function for the redshift range 0.15 < z < 1.05. For z < 0.75 we find that the fraction of massive galaxies is larger in high-density environments than in low-density environments. We show that the low-density and high-density components converge with increasing redshift up to z ~ 1.0, where the shapes of the mass function components are indistinguishable. As a result, our study shows how high-density structures build up around massive galaxies through cosmic time.

  18. Neutrino mass priors for cosmology from random matrices

    DOE PAGES

    Long, Andrew J.; Raveri, Marco; Hu, Wayne; ...

    2018-02-13

    Cosmological measurements of structure are placing increasingly strong constraints on the sum of the neutrino masses, Σm_ν, through Bayesian inference. Because these constraints depend on the choice for the prior probability π(Σm_ν), we argue that this prior should be motivated by fundamental physical principles rather than the ad hoc choices that are common in the literature. The first step in this direction is to specify the prior directly at the level of the neutrino mass matrix M_ν, since this is the parameter appearing in the Lagrangian of the particle physics theory. Thus by specifying a probability distribution over M_ν, and by including the known squared mass splittings, we predict a theoretical probability distribution over Σm_ν that we interpret as a Bayesian prior probability π(Σm_ν). Assuming a basis-invariant probability distribution on M_ν, also known as the anarchy hypothesis, we find that π(Σm_ν) peaks close to the smallest Σm_ν allowed by the measured mass splittings, roughly 0.06 eV (0.1 eV) for normal (inverted) ordering, due to the phenomenon of eigenvalue repulsion in random matrices. We consider three models for neutrino mass generation: Dirac, Majorana, and Majorana via the seesaw mechanism; differences in the predicted priors π(Σm_ν) allow for the possibility of having indications about the physical origin of neutrino masses once sufficient experimental sensitivity is achieved. In conclusion, we present fitting functions for π(Σm_ν), which provide a simple means for applying these priors to cosmological constraints on the neutrino masses or marginalizing over their impact on other cosmological parameters.

  19. Neutrino mass priors for cosmology from random matrices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, Andrew J.; Raveri, Marco; Hu, Wayne

    Cosmological measurements of structure are placing increasingly strong constraints on the sum of the neutrino masses, Σm_ν, through Bayesian inference. Because these constraints depend on the choice for the prior probability π(Σm_ν), we argue that this prior should be motivated by fundamental physical principles rather than the ad hoc choices that are common in the literature. The first step in this direction is to specify the prior directly at the level of the neutrino mass matrix M_ν, since this is the parameter appearing in the Lagrangian of the particle physics theory. Thus by specifying a probability distribution over M_ν, and by including the known squared mass splittings, we predict a theoretical probability distribution over Σm_ν that we interpret as a Bayesian prior probability π(Σm_ν). Assuming a basis-invariant probability distribution on M_ν, also known as the anarchy hypothesis, we find that π(Σm_ν) peaks close to the smallest Σm_ν allowed by the measured mass splittings, roughly 0.06 eV (0.1 eV) for normal (inverted) ordering, due to the phenomenon of eigenvalue repulsion in random matrices. We consider three models for neutrino mass generation: Dirac, Majorana, and Majorana via the seesaw mechanism; differences in the predicted priors π(Σm_ν) allow for the possibility of having indications about the physical origin of neutrino masses once sufficient experimental sensitivity is achieved. In conclusion, we present fitting functions for π(Σm_ν), which provide a simple means for applying these priors to cosmological constraints on the neutrino masses or marginalizing over their impact on other cosmological parameters.

  20. A least squares approach to estimating the probability distribution of unobserved data in multiphoton microscopy

    NASA Astrophysics Data System (ADS)

    Salama, Paul

    2008-02-01

    Multi-photon microscopy has provided biologists with unprecedented opportunities for high resolution imaging deep into tissues. Unfortunately deep tissue multi-photon microscopy images are in general noisy since they are acquired at low photon counts. To aid in the analysis and segmentation of such images it is sometimes necessary to initially enhance the acquired images. One way to enhance an image is to find the maximum a posteriori (MAP) estimate of each pixel comprising an image, which is achieved by finding a constrained least squares estimate of the unknown distribution. In arriving at the distribution it is assumed that the noise is Poisson distributed, the true but unknown pixel values assume a probability mass function over a finite set of non-negative values, and since the observed data also assumes finite values because of low photon counts, the sum of the probabilities of the observed pixel values (obtained from the histogram of the acquired pixel values) is less than one. Experimental results demonstrate that it is possible to closely estimate the unknown probability mass function with these assumptions.
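
    A minimal sketch of the stated approach under its own assumptions (Poisson noise, a pmf over a finite set of non-negative true values, probabilities summing to at most one); the support, the simulated data, and the SLSQP optimizer are assumptions, not the paper's exact MAP formulation.

      import numpy as np
      from scipy import optimize, stats

      rng = np.random.default_rng(0)

      # Simulated low-count pixels: true values on a small support, Poisson noise
      support = np.array([1, 3, 6, 10])                  # assumed true pixel values
      true_p = np.array([0.5, 0.3, 0.15, 0.05])
      true_vals = rng.choice(support, size=2000, p=true_p)
      observed = rng.poisson(true_vals)

      # Empirical histogram of the observed counts (finite because counts are low)
      k_max = observed.max()
      hist = np.bincount(observed, minlength=k_max + 1) / observed.size

      # Forward model: expected observed histogram is a Poisson mixture over the support
      A = np.array([[stats.poisson.pmf(k, v) for v in support]
                    for k in range(k_max + 1)])

      def objective(p):
          return np.sum((A @ p - hist) ** 2)             # least-squares misfit

      res = optimize.minimize(
          objective, x0=np.full(len(support), 0.25),
          bounds=[(0, 1)] * len(support),
          constraints=[{"type": "ineq", "fun": lambda p: 1.0 - p.sum()}],
          method="SLSQP")
      print(res.x)     # estimated pmf over the assumed support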

  1. Community stability and selective extinction during the Permian-Triassic mass extinction

    NASA Astrophysics Data System (ADS)

    Roopnarine, Peter D.; Angielczyk, Kenneth D.

    2015-10-01

    The fossil record contains exemplars of extreme biodiversity crises. Here, we examined the stability of terrestrial paleocommunities from South Africa during Earth's most severe mass extinction, the Permian-Triassic. We show that stability depended critically on functional diversity and patterns of guild interaction, regardless of species richness. Paleocommunities exhibited less transient instability—relative to model communities with alternative community organization—and significantly greater probabilities of being locally stable during the mass extinction. Functional patterns that have evolved during an ecosystem's history support significantly more stable communities than hypothetical alternatives.

  2. Scaling the Poisson Distribution

    ERIC Educational Resources Information Center

    Farnsworth, David L.

    2014-01-01

    We derive the additive property of Poisson random variables directly from the probability mass function. An important application of the additive property to quality testing of computer chips is presented.
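
    The argument is a short convolution of the two probability mass functions followed by the binomial theorem; for independent X ~ Poisson(λ) and Y ~ Poisson(μ):

      \begin{aligned}
      P(X+Y=k) &= \sum_{j=0}^{k} P(X=j)\,P(Y=k-j)
                = \sum_{j=0}^{k} \frac{e^{-\lambda}\lambda^{j}}{j!}\,
                  \frac{e^{-\mu}\mu^{k-j}}{(k-j)!} \\
               &= \frac{e^{-(\lambda+\mu)}}{k!}\sum_{j=0}^{k}\binom{k}{j}
                  \lambda^{j}\mu^{k-j}
                = \frac{e^{-(\lambda+\mu)}(\lambda+\mu)^{k}}{k!},
      \end{aligned}

    so X + Y ~ Poisson(λ + μ).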

  3. A stochastic model for the probability of malaria extinction by mass drug administration.

    PubMed

    Pemberton-Ross, Peter; Chitnis, Nakul; Pothin, Emilie; Smith, Thomas A

    2017-09-18

    Mass drug administration (MDA) has been proposed as an intervention to achieve local extinction of malaria. Although its effect on the reproduction number is short lived, extinction may subsequently occur in a small population due to stochastic fluctuations. This paper examines how the probability of stochastic extinction depends on population size, MDA coverage and the reproduction number under control, R_c. A simple compartmental model is developed which is used to compute the probability of extinction using probability generating functions. The expected time to extinction in small populations after MDA for various scenarios in this model is calculated analytically. The results indicate two requirements for MDA to have a non-negligible probability of achieving local elimination. Firstly, R_c must be sustained at R_c < 1.2 to avoid the rapid re-establishment of infections in the population. Secondly, the MDA must produce effective cure rates of >95%. Stochastic fluctuations only significantly affect the probability of extinction in populations of about 1000 individuals or less. The expected time to extinction via stochastic fluctuation is less than 10 years only in populations of fewer than about 150 individuals. Clustering of secondary infections and of MDA distribution both contribute positively to the potential probability of success, indicating that MDA would most effectively be administered at the household level. There are very limited circumstances in which MDA will lead to local malaria elimination with a substantial probability.
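
    The role of the probability generating function can be illustrated with a toy branching-process sketch (not the paper's compartmental model): with an assumed Poisson offspring distribution of mean R_c, the extinction probability per residual infection lineage is the smallest fixed point of the generating function, and extinction of n residual infections has probability q^n. The parameter values are illustrative.

      import numpy as np

      def extinction_probability(Rc, tol=1e-12):
          """Smallest fixed point q = g(q) of the Poisson(Rc) offspring pgf
          g(s) = exp(Rc*(s-1)); q = 1 when Rc <= 1."""
          q = 0.0
          for _ in range(10000):
              q_new = np.exp(Rc * (q - 1.0))
              if abs(q_new - q) < tol:
                  break
              q = q_new
          return q_new

      def post_mda_extinction(Rc, n_residual_infections):
          """Probability that all residual infection lineages die out."""
          return extinction_probability(Rc) ** n_residual_infections

      print(post_mda_extinction(Rc=1.1, n_residual_infections=5))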

  4. The Most Massive Galaxies and Black Holes Allowed by ΛCDM

    NASA Astrophysics Data System (ADS)

    Behroozi, Peter; Silk, Joseph

    2018-04-01

    Given a galaxy's stellar mass, its host halo mass has a lower limit from the cosmic baryon fraction and known baryonic physics. At z > 4, galaxy stellar mass functions place lower limits on halo number densities that approach expected ΛCDM halo mass functions. High-redshift galaxy stellar mass functions can thus place interesting limits on number densities of massive haloes, which are otherwise very difficult to measure. Although halo mass functions at z < 8 are consistent with observed galaxy stellar masses if galaxy baryonic conversion efficiencies increase with redshift, JWST and WFIRST will more than double the redshift range over which useful constraints are available. We calculate maximum galaxy stellar masses as a function of redshift given expected halo number densities from ΛCDM. We apply similar arguments to black holes. If their virial mass estimates are accurate, number density constraints alone suggest that the quasars SDSS J1044-0125 and SDSS J010013.02+280225.8 likely have black hole mass — stellar mass ratios higher than the median z = 0 relation, confirming the expectation from Lauer bias. Finally, we present a public code to evaluate the probability of an apparently ΛCDM-inconsistent high-mass halo being detected given the combined effects of multiple surveys and observational errors.

  5. Ensemble Kalman filtering in presence of inequality constraints

    NASA Astrophysics Data System (ADS)

    van Leeuwen, P. J.

    2009-04-01

    Kalman filtering in the presence of constraints is an active area of research. Based on the Gaussian assumption for the probability-density functions, it looks hard to bring extra constraints into the formalism. On the other hand, in geophysical systems we often encounter constraints related to e.g. the underlying physics or chemistry, which are violated by the Gaussian assumption. For instance, concentrations are always non-negative, model layers have non-negative thickness, and sea-ice concentration is between 0 and 1. Several methods to bring inequality constraints into the Kalman-filter formalism have been proposed. One of them is probability density function (pdf) truncation, in which the Gaussian mass from the non-allowed part of the variables is simply distributed equally over the part of the pdf where the variables are allowed, as proposed by Shimada et al. 1998. However, a problem with this method is that the probability that e.g. the sea-ice concentration is exactly zero is itself zero. The new method proposed here does not have this drawback. It assumes that the probability-density function is a truncated Gaussian, but the truncated mass is not distributed equally over all allowed values of the variables; instead it is put into a delta distribution at the truncation point. This delta distribution can easily be handled in Bayes' theorem, leading to posterior probability density functions that are also truncated Gaussians with delta distributions at the truncation location. In this way a much better representation of the system is obtained, while still keeping most of the benefits of the Kalman-filter formalism. In the full Kalman filter the formalism is prohibitively expensive in large-scale systems, but efficient implementation is possible in ensemble variants of the Kalman filter. Applications to low-dimensional systems and large-scale systems will be discussed.
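
    A one-dimensional numerical sketch of the proposed representation (assumed numbers, not the full ensemble filter): a Gaussian estimate of a non-negative quantity is replaced by a truncated Gaussian plus a delta mass at the bound, so the probability of being exactly at the bound (e.g. zero sea-ice concentration) is finite.

      import numpy as np
      from scipy import stats

      def truncate_with_delta(mu, sigma, bound=0.0):
          """Represent N(mu, sigma^2) restricted to x >= bound as a truncated
          Gaussian plus a point mass at the bound carrying the removed probability."""
          p_delta = stats.norm.cdf(bound, loc=mu, scale=sigma)   # mass moved to bound
          # mean of the Gaussian restricted to x >= bound
          alpha = (bound - mu) / sigma
          mean_trunc = mu + sigma * stats.norm.pdf(alpha) / (1.0 - stats.norm.cdf(alpha))
          mean_total = p_delta * bound + (1.0 - p_delta) * mean_trunc
          return p_delta, mean_total

      # Example: prior estimate of sea-ice concentration with mean 0.05, sd 0.1
      p_zero, mean = truncate_with_delta(mu=0.05, sigma=0.1)
      print(p_zero, mean)   # finite probability of exactly zero concentration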

  6. Halo correlations in nonlinear cosmic density fields

    NASA Astrophysics Data System (ADS)

    Bernardeau, F.; Schaeffer, R.

    1999-09-01

    The question we address in this paper is the determination of the correlation properties of the dark matter halos appearing in cosmic density fields once they have undergone a strongly nonlinear evolution induced by gravitational dynamics. A series of previous works has given indications of the kind of non-Gaussian features induced by nonlinear evolution in terms of the high-order correlation functions. Assuming such patterns for the matter field, i.e. that the high-order correlation functions behave as products of two-body correlation functions, we derive the correlation properties of the halos, which are assumed to represent the correlation properties of galaxies or clusters. The hierarchical pattern originally induced by gravity is shown to be conserved for the halos. The strength of their correlations at any order varies, however, but is found to depend only on their internal properties, namely on the parameter x ~ m/r^(3-γ), where m is the mass of the halo, r its size and γ the power-law index of the two-body correlation function. This internal parameter is seen to be close to the depth of the internal potential well of virialized objects. We were able to derive the explicit form of the generating function of the moments of the halo counts probability distribution function. In particular we show explicitly that, generically, S_P(x) → P^(P-2) in the rare halo limit. Various illustrations of our general results are presented. As a function of the properties of the underlying matter field, we construct the count probabilities for halos and in particular discuss the halo void probability. We evaluate the dependence of the halo mass function on the environment: within clusters, hierarchical clustering implies that higher masses are favored. These properties solely arise from what is a natural bias (i.e., one naturally induced by gravity) between the observed objects and the unseen matter field, and from how it manifests itself depending on which selection effects are imposed.

  7. On the probability distribution function of the mass surface density of molecular clouds. I

    NASA Astrophysics Data System (ADS)

    Fischera, Jörg

    2014-05-01

    The probability distribution function (PDF) of the mass surface density is an essential characteristic of the structure of molecular clouds or the interstellar medium in general. Observations of the PDF of molecular clouds indicate a composition of a broad distribution around the maximum and a decreasing tail at high mass surface densities. The first component is attributed to the random distribution of gas which is modeled using a log-normal function while the second component is attributed to condensed structures modeled using a simple power law. The aim of this paper is to provide an analytical model of the PDF of condensed structures which can be used by observers to extract information about the condensations. The condensed structures are considered to be either spheres or cylinders with a truncated radial density profile at cloud radius r_cl. The assumed profile is of the form ρ(r) = ρ_c / (1 + (r/r_0)²)^(n/2) for arbitrary power n, where ρ_c and r_0 are the central density and the inner radius, respectively. An implicit function is obtained which either truncates (sphere) or has a pole (cylinder) at maximal mass surface density. The PDF of spherical condensations and the asymptotic PDF of cylinders in the limit of infinite overdensity ρ_c/ρ(r_cl) flattens for steeper density profiles and has a power-law asymptote at low and high mass surface densities and a well-defined maximum. The power index γ of the asymptote Σ^(-γ) of the logarithmic PDF (Σ P(Σ)) in the limit of high mass surface densities is given by γ = (n + 1)/(n - 1) - 1 (spheres) or by γ = n/(n - 1) - 1 (cylinders in the limit of infinite overdensity). Appendices are available in electronic form at http://www.aanda.org

  8. Beta-decay rate and beta-delayed neutron emission probability of improved gross theory

    NASA Astrophysics Data System (ADS)

    Koura, Hiroyuki

    2014-09-01

    A theoretical study has been carried out on the beta-decay rate and the beta-delayed neutron emission probability. The gross theory of beta decay is based on the idea of a sum rule for the beta-decay strength function, and has succeeded in describing beta-decay half-lives of nuclei over the whole nuclear mass region. The gross theory includes not only the allowed transitions, Fermi and Gamow-Teller, but also the first-forbidden transitions. In this work, some improvements are introduced, namely a nuclear shell correction on the nuclear level densities and nuclear deformation in the nuclear strength functions; these effects were not included in the original gross theory. The shell energy and the nuclear deformation for unmeasured nuclei are adopted from the KTUY nuclear mass formula, which is based on the spherical-basis method. Considering the properties of the integrated Fermi function, we can roughly categorize the excited-state energy region of a daughter nucleus into three regions: a highly excited energy region, which fully affects the delayed neutron probability; a middle energy region, which is estimated to contribute to the decay heat; and a region neighboring the ground state, which determines the beta-decay rate. Some results will be given in the presentation. This work is a result of the "Comprehensive study of delayed-neutron yields for accurate evaluation of kinetics of high-burn up reactors" project entrusted to Tokyo Institute of Technology by the Ministry of Education, Culture, Sports, Science and Technology of Japan.

  9. The most massive galaxies and black holes allowed by ΛCDM

    NASA Astrophysics Data System (ADS)

    Behroozi, Peter; Silk, Joseph

    2018-07-01

    Given a galaxy's stellar mass, its host halo mass has a lower limit from the cosmic baryon fraction and known baryonic physics. At z > 4, galaxy stellar mass functions place lower limits on halo number densities that approach expected Lambda Cold Dark Matter halo mass functions. High-redshift galaxy stellar mass functions can thus place interesting limits on number densities of massive haloes, which are otherwise very difficult to measure. Although halo mass functions at z < 8 are consistent with observed galaxy stellar masses if galaxy baryonic conversion efficiencies increase with redshift, JWST (James Webb Space Telescope) and WFIRST (Wide-Field InfraRed Survey Telescope) will more than double the redshift range over which useful constraints are available. We calculate maximum galaxy stellar masses as a function of redshift given expected halo number densities from ΛCDM. We apply similar arguments to black holes. If their virial mass estimates are accurate, number density constraints alone suggest that the quasars SDSS J1044-0125 and SDSS J010013.02+280225.8 likely have black hole mass to stellar mass ratios higher than the median z = 0 relation, confirming the expectation from Lauer bias. Finally, we present a public code to evaluate the probability of an apparently ΛCDM-inconsistent high-mass halo being detected given the combined effects of multiple surveys and observational errors.

  10. Ozone-surface interactions: Investigations of mechanisms, kinetics, mass transport, and implications for indoor air quality

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morrison, Glenn Charles

    1999-12-01

    In this dissertation, results are presented of laboratory investigations and mathematical modeling efforts designed to better understand the interactions of ozone with surfaces. In the laboratory, carpet and duct materials were exposed to ozone and measured ozone uptake kinetics and the ozone induced emissions of volatile organic compounds. To understand the results of the experiments, mathematical methods were developed to describe dynamic indoor aldehyde concentrations, mass transport of reactive species to smooth surfaces, the equivalent reaction probability of whole carpet due to the surface reactivity of fibers and carpet backing, and ozone aging of surfaces. Carpets, separated carpet fibers, andmore » separated carpet backing all tended to release aldehydes when exposed to ozone. Secondary emissions were mostly n-nonanal and several other smaller aldehydes. The pattern of emissions suggested that vegetable oils may be precursors for these oxidized emissions. Several possible precursors and experiments in which linseed and tung oils were tested for their secondary emission potential were discussed. Dynamic emission rates of 2-nonenal from a residential carpet may indicate that intermediate species in the oxidation of conjugated olefins can significantly delay aldehyde emissions and act as reservoir for these compounds. The ozone induced emission rate of 2-nonenal, a very odorous compound, can result in odorous indoor concentrations for several years. Surface ozone reactivity is a key parameter in determining the flux of ozone to a surface, is parameterized by the reaction probability, which is simply the probability that an ozone molecule will be irreversibly consumed when it strikes a surface. In laboratory studies of two residential and two commercial carpets, the ozone reaction probability for carpet fibers, carpet backing and the equivalent reaction probability for whole carpet were determined. Typically reaction probability values for these materials were 10 -7, 10 -5, and 10 -5 respectively. To understand how internal surface area influences the equivalent reaction probability of whole carpet, a model of ozone diffusion into and reaction with internal carpet components was developed. This was then used to predict apparent reaction probabilities for carpet. He combines this with a modified model of turbulent mass transfer developed by Liu, et al. to predict deposition rates and indoor ozone concentrations. The model predicts that carpet should have an equivalent reaction probability of about 10 -5, matching laboratory measurements of the reaction probability. For both carpet and duct materials, surfaces become progressively quenched (aging), losing the ability to react or otherwise take up ozone. He evaluated the functional form of aging and find that the reaction probability follows a power function with respect to the cumulative uptake of ozone. To understand ozone aging of surfaces, he developed several mathematical descriptions of aging based on two different mechanisms. The observed functional form of aging is mimicked by a model which describes ozone diffusion with internal reaction in a solid. He shows that the fleecy nature of carpet materials in combination with the model of ozone diffusion below a fiber surface and internal reaction may explain the functional form and the magnitude of power function parameters observed due to ozone interactions with carpet. 
The ozone-induced aldehyde emissions, measured from duct materials, were combined with an indoor air quality model to show that concentrations of aldehydes indoors may approach odorous levels. He shows that ducts are unlikely to be a significant sink for ozone due to the low reaction probability in combination with the short residence time of air in ducts.
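
    To make the link between reaction probability and indoor concentrations concrete, the sketch below uses the standard series-resistance relation between deposition velocity, transport-limited deposition, and reaction probability, together with a well-mixed box model. This is the commonly used formulation from the indoor-air literature rather than the dissertation's own transport model, and all numerical inputs are illustrative assumptions.

    ```python
    # Minimal well-mixed box model for indoor ozone with surface uptake.
    # All parameter values are illustrative assumptions, not results from the dissertation.
    import numpy as np

    def deposition_velocity(gamma, v_transport=0.36, mean_speed=36000.0):
        """Combine the surface reaction probability with gas-phase transport:
        1/v_d = 1/v_t + 4/(gamma * <v>), all velocities in cm/s;
        <v> is the mean molecular speed of ozone (~3.6e4 cm/s at room temperature)."""
        return 1.0 / (1.0 / v_transport + 4.0 / (gamma * mean_speed))

    def indoor_steady_state(c_out, ach, vd_cm_s, area_m2, volume_m3):
        """Steady-state indoor concentration for a well-mixed room:
        C_in = ACH * C_out / (ACH + v_d * A / V), with ACH in 1/h."""
        vd_m_h = vd_cm_s * 0.01 * 3600.0          # cm/s -> m/h
        return ach * c_out / (ach + vd_m_h * area_m2 / volume_m3)

    gamma_carpet = 1e-5        # equivalent reaction probability (order of magnitude quoted in the text)
    vd = deposition_velocity(gamma_carpet)
    print(f"deposition velocity ~ {vd:.3f} cm/s")
    print(f"indoor O3 ~ {indoor_steady_state(50.0, 0.5, vd, 40.0, 30.0):.1f} ppb for 50 ppb outdoors")
    ```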

  11. Stochastic modeling of soil salinity

    NASA Astrophysics Data System (ADS)

    Suweis, S.; Porporato, A. M.; Daly, E.; van der Zee, S.; Maritan, A.; Rinaldo, A.

    2010-12-01

    A minimalist stochastic model of primary soil salinity is proposed, in which the rate of soil salinization is determined by the balance between dry and wet salt deposition and the intermittent leaching caused by rainfall events. The equations for the probability density functions of salt mass and concentration are found by reducing the coupled soil moisture and salt mass balance equations to a single stochastic differential equation (generalized Langevin equation) driven by multiplicative Poisson noise. Generalized Langevin equations with multiplicative white Poisson noise pose the usual Ito (I) or Stratonovich (S) prescription dilemma. Different interpretations lead to different results, and choosing between the I and S prescriptions is therefore crucial to correctly describe the dynamics of the model systems. We show how this choice can be determined by physical information about the timescales involved in the process. We also show that when the multiplicative noise is at most linear in the random variable, one prescription can be made equivalent to the other by a suitable transformation in the jump probability distribution. We then apply these results to the generalized Langevin equation that drives the salt mass dynamics. The stationary analytical solutions for the probability density functions of salt mass and concentration provide insight into the interplay of the main soil, plant and climate parameters responsible for long-term soil salinization. In particular, they show the existence of two distinct regimes, one where the mean salt mass remains nearly constant (or decreases) with increasing rainfall frequency, and another where mean salt content increases markedly with increasing rainfall frequency. As a result, relatively small reductions of rainfall in drier climates may entail dramatic shifts in long-term soil salinization trends, with significant consequences, e.g. for climate change impacts on rain-fed agriculture.
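
    A minimal forward simulation of the kind of jump process described here (constant salt input from dry and wet deposition, leaching at Poisson-distributed rainfall events that remove a random fraction of the stored salt) can help build intuition. The parameter values and the leaching rule below are illustrative assumptions, not the paper's calibrated model or its analytical solution.

    ```python
    # Toy simulation of salt mass driven by deposition and Poisson leaching events.
    # All parameters are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    years = 200.0
    dt = 1.0 / 365.0          # daily step, in years
    deposition = 200.0        # salt input rate [kg/ha/yr] (dry + wet deposition)
    rain_freq = 20.0          # mean number of leaching rainfall events per year
    leach_eff = 0.3           # mean fraction of stored salt removed per event

    t = np.arange(0.0, years, dt)
    m = np.zeros_like(t)      # salt mass [kg/ha]
    for i in range(1, t.size):
        m[i] = m[i - 1] + deposition * dt
        if rng.random() < rain_freq * dt:              # Poisson arrivals, thinned per step
            m[i] *= 1.0 - rng.exponential(leach_eff)   # random leached fraction
            m[i] = max(m[i], 0.0)                      # guard against draws above 1

    print(f"long-term mean salt mass ~ {m[t > years / 2].mean():.0f} kg/ha")
    ```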

  12. The rates and time-delay distribution of multiply imaged supernovae behind lensing clusters

    NASA Astrophysics Data System (ADS)

    Li, Xue; Hjorth, Jens; Richard, Johan

    2012-11-01

    Time delays of gravitationally lensed sources can be used to constrain the mass model of a deflector and determine cosmological parameters. We here present an analysis of the time-delay distribution of multiply imaged sources behind 17 strong lensing galaxy clusters with well-calibrated mass models. We find that for time delays less than 1000 days, at z = 3.0, their logarithmic probability distribution functions are well represented by P(log Δt) = 5.3 × 10^-4 Δt^β̃ / M_250^(2β̃), with β̃ = 0.77, where M_250 is the projected cluster mass inside 250 kpc (in 10^14 M⊙), and β̃ is the power-law slope of the distribution. The resultant probability distribution function enables us to estimate the time-delay distribution in a lensing cluster of known mass. For a cluster with M_250 = 2 × 10^14 M⊙, the fraction of time delays less than 1000 days is approximately 3%. Taking Abell 1689 as an example, its dark halo and brightest galaxies, with central velocity dispersions σ ≥ 500 km s^-1, mainly produce large time delays, while galaxy-scale mass clumps are responsible for generating smaller time delays. We estimate the probability of observing multiple images of a supernova in the known images of Abell 1689. A two-component model of estimating the supernova rate is applied in this work. For a magnitude threshold of mAB = 26.5, the yearly rate of Type Ia (core-collapse) supernovae with time delays less than 1000 days is 0.004 ± 0.002 (0.029 ± 0.001). If the magnitude threshold is lowered to mAB ~ 27.0, the rate of core-collapse supernovae suitable for time-delay observation is 0.044 ± 0.015 per year.
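
    A quick numerical check of the quoted scaling is easy to set up. Note that the lower integration limit (1 day) and the base-10 reading of log Δt are assumptions made here, so the result should only be expected to land in the few-per-cent range quoted for a 2 × 10^14 M⊙ cluster, not to reproduce the 3% figure exactly.

    ```python
    # Integrate the quoted time-delay distribution up to 1000 days.
    import numpy as np
    from scipy.integrate import quad

    beta = 0.77

    def pdf_logdt(log_dt, m250):
        """P(log dt) = 5.3e-4 * dt**beta / m250**(2*beta); dt in days,
        m250 in units of 1e14 Msun; log is read as base-10 (an assumption)."""
        dt = 10.0 ** log_dt
        return 5.3e-4 * dt ** beta / m250 ** (2.0 * beta)

    m250 = 2.0                                           # the example cluster mass from the abstract
    frac, _ = quad(pdf_logdt, 0.0, 3.0, args=(m250,))    # delta t from 1 to 1000 days
    print(f"fraction of time delays below 1000 days ~ {100 * frac:.1f}%")
    ```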

  13. Efficient Transition Probability Computation for Continuous-Time Branching Processes via Compressed Sensing.

    PubMed

    Xu, Jason; Minin, Vladimir N

    2015-07-01

    Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes.
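
    The generating-function idea can be illustrated for a single-type linear birth-death process, whose probability generating function has a closed form. The sketch below recovers transition probabilities by plain Fourier inversion on the unit circle; it deliberately does not implement the compressed-sensing acceleration that is the paper's contribution, and the rates are illustrative.

    ```python
    # Transition probabilities of a linear birth-death process via pgf inversion (dense FFT).
    import numpy as np

    def bd_pgf(s, t, lam, mu, n0):
        """Probability generating function of a linear birth-death process with
        per-particle birth rate lam and death rate mu, started from n0 particles."""
        e = np.exp((lam - mu) * t)
        alpha = mu * (e - 1.0) / (lam * e - mu)
        beta = lam * (e - 1.0) / (lam * e - mu)
        return ((alpha + (1.0 - alpha - beta) * s) / (1.0 - beta * s)) ** n0

    # P(X_t = k | X_0 = n0) are the power-series coefficients of the pgf; recover them
    # by sampling the pgf on the unit circle and applying a discrete Fourier transform.
    N = 1024                                   # truncation of the state space
    idx = np.arange(N)
    samples = bd_pgf(np.exp(2j * np.pi * idx / N), t=1.0, lam=0.5, mu=0.3, n0=5)
    probs = np.real(np.fft.fft(samples) / N)
    print(probs[:8].round(4), "sum =", round(float(probs.sum()), 6))
    ```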

  14. Efficient Transition Probability Computation for Continuous-Time Branching Processes via Compressed Sensing

    PubMed Central

    Xu, Jason; Minin, Vladimir N.

    2016-01-01

    Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes. PMID:26949377

  15. Membership and Dynamical Parameters of the Open Cluster NGC 1039

    NASA Astrophysics Data System (ADS)

    Wang, Jiaxin; Ma, Jun; Wu, Zhenyu; Zhou, Xu

    2017-11-01

    In this paper, we analyze the open cluster NGC 1039. This young open cluster is observed as a part of the Beijing-Arizona-Taiwan-Connecticut Multicolor Sky Survey. Combining our observations with the Sloan Digital Sky Survey photometric data, we employ the Padova stellar model and the zero-age main-sequence curve to derive a reddening, E(B-V) = 0.10 ± 0.02, and a distance modulus, (m-M)_0 = 8.4 ± 0.2, for NGC 1039. The photometric membership probabilities of stars in the region of NGC 1039 are derived using the spectral energy distribution-fitting method. According to the membership probabilities (P_SED) obtained here, 582 stars are cluster members with P_SED larger than 60%. In addition, we determine the structural parameters of NGC 1039 by fitting its radial density profile with the King model. These parameters are a core radius, R_c = 4.44 ± 1.31 pc; a tidal radius, R_t = 13.57 ± 4.85 pc; and a concentration parameter, C_0 = log(R_t/R_c) = 0.49 ± 0.20. We also fit the observed mass function of NGC 1039, for masses from 0.3 M⊙ to 1.65 M⊙, with a power-law function Φ(m) ∝ m^α to derive the slopes of the mass function in different spatial regions. The results obtained here show that the slope of the mass function of NGC 1039 is flatter in the central regions (α = 0.117), becomes steeper at larger radii (α = -2.878), and breaks at m_break ≈ 0.80 M⊙. In particular, for the first time, our results show that mass segregation appears in NGC 1039.
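
    The structural fit described here (a radial density profile fitted with the empirical King model) can be sketched as follows. The synthetic profile below is generated from assumed parameters chosen near the quoted values, purely to illustrate the fitting step; it is not the cluster data.

    ```python
    # Fit an empirical King (1962) profile to a (synthetic) radial density profile.
    import numpy as np
    from scipy.optimize import curve_fit

    def king_profile(r, k, r_c, r_t, bg):
        """King surface-density profile plus a constant field-star background."""
        term = 1.0 / np.sqrt(1.0 + (r / r_c) ** 2) - 1.0 / np.sqrt(1.0 + (r_t / r_c) ** 2)
        return k * np.clip(term, 0.0, None) ** 2 + bg

    # Synthetic profile built from assumed parameters (R_c = 4.4 pc, R_t = 13.6 pc) plus noise.
    rng = np.random.default_rng(1)
    r = np.linspace(0.3, 16.0, 25)
    obs = king_profile(r, 40.0, 4.4, 13.6, 1.0) + rng.normal(0.0, 0.5, r.size)

    popt, pcov = curve_fit(king_profile, r, obs, p0=[30.0, 3.0, 10.0, 0.5])
    k_fit, rc_fit, rt_fit, bg_fit = popt
    print(f"R_c = {rc_fit:.2f} pc, R_t = {rt_fit:.2f} pc, "
          f"c = log10(R_t/R_c) = {np.log10(rt_fit / rc_fit):.2f}")
    ```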

  16. StimuFrac Compressibility as a Function of CO2 Molar Fraction

    DOE Data Explorer

    Carlos A. Fernandez

    2016-04-29

    Compressibility values were obtained in a range of pressures at 250 °C by employing a fixed-volume view cell completely filled with PAA aqueous solution and injecting CO2 at a constant flow rate (0.3 mL/min). The pressure increase as a function of supercritical CO2 (scCO2) mass fraction in the mixture was monitored. The plot shows the apparent compressibility of StimuFrac as a function of scCO2 mass fraction obtained in a pressure range between 2100 and 7000 psi at 250 °C. At small mass fractions of scCO2 the compressibility increases, probably due to the dissolution/reaction of CO2 in aqueous PAA, and reaches a maximum at mCO2/mH2O = 0.06. Then the compressibility decreases, showing a linear relationship with scCO2 mass fraction due to the continuous increase in density of the binary fluid associated with the pressure increase.

  17. Evaporation of planetary atmospheres due to XUV illumination by quasars

    NASA Astrophysics Data System (ADS)

    Forbes, John C.; Loeb, Abraham

    2018-06-01

    Planetary atmospheres are subject to mass loss through a variety of mechanisms including irradiation by XUV photons from their host star. Here we explore the consequences of XUV irradiation by supermassive black holes as they grow by the accretion of gas in galactic nuclei. Based on the mass distribution of stars in galactic bulges and disks and the luminosity history of individual black holes, we estimate the probability distribution function of XUV fluences as a function of galaxy halo mass, redshift, and stellar component. We find that about 50% of all planets in the universe may lose a mass of hydrogen of ~2.5 × 10^19 g (the total mass of the Martian atmosphere), 10% may lose ~5.1 × 10^21 g (the total mass of Earth's atmosphere), and 0.2% may lose ~1.4 × 10^24 g (the total mass of Earth's oceans). The fractions are appreciably higher in the spheroidal components of galaxies, and depend strongly on galaxy mass, but only weakly on redshift.

  18. Opacity probability distribution functions for electronic systems of CN and C2 molecules including their stellar isotopic forms.

    NASA Technical Reports Server (NTRS)

    Querci, F.; Kunde, V. G.; Querci, M.

    1971-01-01

    The basis and techniques are presented for generating opacity probability distribution functions for the CN molecule (red and violet systems) and the C2 molecule (Swan, Phillips, Ballik-Ramsay systems), two of the more important diatomic molecules in the spectra of carbon stars, with a view to including these distribution functions in equilibrium model atmosphere calculations. Comparisons to the CO molecule are also shown. The computation of the monochromatic absorption coefficient uses the most recent molecular data with revision of the oscillator strengths for some of the band systems. The total molecular stellar mass absorption coefficient is established through fifteen equations of molecular dissociation equilibrium to relate the distribution functions to each other on a per gram of stellar material basis.

  19. Ensemble Averaged Probability Density Function (APDF) for Compressible Turbulent Reacting Flows

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Liu, Nan-Suey

    2012-01-01

    In this paper, we present a concept of the averaged probability density function (APDF) for studying compressible turbulent reacting flows. The APDF is defined as an ensemble average of the fine grained probability density function (FG-PDF) with a mass density weighting. It can be used to exactly deduce the mass density weighted, ensemble averaged turbulent mean variables. The transport equation for APDF can be derived in two ways. One is the traditional way that starts from the transport equation of FG-PDF, in which the compressible Navier-Stokes equations are embedded. The resulting transport equation of APDF is then in a traditional form that contains conditional means of all terms from the right hand side of the Navier-Stokes equations except for the chemical reaction term. These conditional means are new unknown quantities that need to be modeled. Another way of deriving the transport equation of APDF is to start directly from the ensemble averaged Navier-Stokes equations. The resulting transport equation of APDF derived from this approach appears in a closed form without any need for additional modeling. The methodology of ensemble averaging presented in this paper can be extended to other averaging procedures: for example, the Reynolds time averaging for statistically steady flow and the Reynolds spatial averaging for statistically homogeneous flow. It can also be extended to a time or spatial filtering procedure to construct the filtered density function (FDF) for the large eddy simulation (LES) of compressible turbulent reacting flows.

  20. Measurement of the top quark mass using template methods on dilepton events in p anti-p collisions at s**(1/2) = 1.96-TeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abulencia, A.; Acosta, D.; Adelman, Jahred A.

    2006-02-01

    The authors describe a measurement of the top quark mass from events produced in p anti-p collisions at a center-of-mass energy of 1.96 TeV, using the Collider Detector at Fermilab. They identify t anti-t candidates where both W bosons from the top quarks decay into leptons (eν, μν, or τν) from a data sample of 360 pb^-1. The top quark mass is reconstructed in each event separately by three different methods, which draw upon simulated distributions of the neutrino pseudorapidity, t anti-t longitudinal momentum, or neutrino azimuthal angle in order to extract probability distributions for the top quark mass. For each method, representative mass distributions, or templates, are constructed from simulated samples of signal and background events, and parameterized to form continuous probability density functions. A likelihood fit incorporating these parameterized templates is then performed on the data sample masses in order to derive a final top quark mass. Combining the three template methods, taking into account correlations in their statistical and systematic uncertainties, results in a top quark mass measurement of 170.1 ± 6.0 (stat.) ± 4.1 (syst.) GeV/c^2.
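
    The template idea can be illustrated with a deliberately simplified, unbinned likelihood fit. Here the per-event reconstructed-mass template is taken to be a single Gaussian whose mean shifts with the assumed top-quark mass; this is only a stand-in for the parameterized signal-plus-background templates used in the actual measurement, and all numbers are illustrative.

    ```python
    # Toy template likelihood fit for a per-event reconstructed mass distribution.
    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import norm

    def template_pdf(m_reco, m_top, offset=-5.0, width=21.0):
        """Hypothetical signal template: Gaussian in the reconstructed mass whose
        mean tracks the true top mass, with a fixed resolution (GeV/c^2)."""
        return norm.pdf(m_reco, loc=m_top + offset, scale=width)

    rng = np.random.default_rng(2)
    m_true = 170.0
    data = rng.normal(m_true - 5.0, 21.0, size=33)     # pseudo-data event masses

    def nll(m_top):
        return -np.sum(np.log(template_pdf(data, m_top)))

    fit = minimize_scalar(nll, bounds=(150.0, 200.0), method="bounded")
    print(f"fitted top mass = {fit.x:.1f} GeV/c^2")
    ```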

  1. Strong lensing probability in TeVeS (tensor-vector-scalar) theory

    NASA Astrophysics Data System (ADS)

    Chen, Da-Ming

    2008-01-01

    We recalculate the strong lensing probability as a function of the image separation in TeVeS (tensor-vector-scalar) cosmology, which is a relativistic version of MOND (MOdified Newtonian Dynamics). The lens is modeled by the Hernquist profile. We assume an open cosmology with Ωb = 0.04 and ΩΛ = 0.5 and three different kinds of interpolating functions. Two different galaxy stellar mass functions (GSMF) are adopted: PHJ (Panter, Heavens and Jimenez 2004 Mon. Not. R. Astron. Soc. 355 764) determined from SDSS data release 1 and Fontana (Fontana et al 2006 Astron. Astrophys. 459 745) from GOODS-MUSIC catalog. We compare our results with both the predicted probabilities for lenses from singular isothermal sphere galaxy halos in LCDM (Lambda cold dark matter) with a Schechter-fit velocity function, and the observational results for the well defined combined sample of the Cosmic Lens All-Sky Survey (CLASS) and Jodrell Bank/Very Large Array Astrometric Survey (JVAS). It turns out that the interpolating function μ(x) = x/(1+x) combined with Fontana GSMF matches the results from CLASS/JVAS quite well.

  2. Density distribution function of a self-gravitating isothermal compressible turbulent fluid in the context of molecular clouds ensembles

    NASA Astrophysics Data System (ADS)

    Donkov, Sava; Stefanov, Ivan Z.

    2018-03-01

    We have set ourselves the task of obtaining the probability distribution function of the mass density of a self-gravitating isothermal compressible turbulent fluid from its physics. We have done this in the context of a new notion: the molecular clouds ensemble. We have applied a new approach that takes into account the fractal nature of the fluid. Using the medium equations, under the assumption of steady state, we show that the total energy per unit mass is an invariant with respect to the fractal scales. As a next step we obtain a non-linear integral equation for the dimensionless scale Q which is the third root of the integral of the probability distribution function. It is solved approximately up to the leading-order term in the series expansion. We obtain two solutions. They are power-law distributions with different slopes: the first one is -1.5 at low densities, corresponding to an equilibrium between all energies at a given scale, and the second one is -2 at high densities, corresponding to a free fall at small scales.

  3. Fundamental Studies of Molecular Secondary Ion Mass Spectrometry Ionization Probability Measured With Femtosecond, Infrared Laser Post-Ionization

    NASA Astrophysics Data System (ADS)

    Popczun, Nicholas James

    The work presented in this dissertation is focused on increasing the fundamental understanding of molecular secondary ion mass spectrometry (SIMS) ionization probability by measuring neutral molecule behavior with femtosecond, mid-infrared laser post-ionization (LPI). To accomplish this, a model system was designed with a homogeneous organic film comprised of coronene, a polycyclic hydrocarbon which provides substantial LPI signal. Careful consideration was given to signal lost to photofragmentation and undersampling of the sputtered plume that is contained within the extraction volume of the mass spectrometer. This study provided the first ionization probability for an organic compound measured directly from the relative secondary ions and sputtered neutral molecules using a strong-field ionization (SFI) method. The measured value of ~10^-3 is near the upper limit of previous estimations of ionization probability for organic molecules. The measurement method was refined, and then applied to a homogeneous guanine film, which produces protonated secondary ions. This measurement found the probability of protonation to be on the order of 10^-3, although with less uncertainty than that of the coronene. Finally, molecular depth profiles were obtained for SIMS and LPI signals as a function of primary ion fluence to determine the effect of ionization probability on the depth resolution of chemical interfaces. The interfaces chosen were organic/inorganic interfaces to limit chemical mixing. It is shown that approaching the inorganic chemical interface can enhance or suppress the ionization probability for the organic molecule, which can lead to artificially sharpened or broadened depth profiles, respectively. Overall, the research described in this dissertation provides new methods for measuring ionization efficiency in SIMS in both absolute and relative terms, and will both inform innovation in the technique and increase understanding of depth-dependent experiments.

  4. The Lateral Trigger Probability function for the Ultra-High Energy Cosmic Ray showers detected by the Pierre Auger Observatory

    NASA Astrophysics Data System (ADS)

    Pierre Auger Collaboration; Abreu, P.; Aglietta, M.; Ahn, E. J.; Albuquerque, I. F. M.; Allard, D.; Allekotte, I.; Allen, J.; Allison, P.; Alvarez Castillo, J.; Alvarez-Muñiz, J.; Ambrosio, M.; Aminaei, A.; Anchordoqui, L.; Andringa, S.; Antičić, T.; Anzalone, A.; Aramo, C.; Arganda, E.; Arqueros, F.; Asorey, H.; Assis, P.; Aublin, J.; Ave, M.; Avenier, M.; Avila, G.; Bäcker, T.; Balzer, M.; Barber, K. B.; Barbosa, A. F.; Bardenet, R.; Barroso, S. L. C.; Baughman, B.; Bäuml, J.; Beatty, J. J.; Becker, B. R.; Becker, K. H.; Bellétoile, A.; Bellido, J. A.; Benzvi, S.; Berat, C.; Bertou, X.; Biermann, P. L.; Billoir, P.; Blanco, F.; Blanco, M.; Bleve, C.; Blümer, H.; Boháčová, M.; Boncioli, D.; Bonifazi, C.; Bonino, R.; Borodai, N.; Brack, J.; Brogueira, P.; Brown, W. C.; Bruijn, R.; Buchholz, P.; Bueno, A.; Burton, R. E.; Caballero-Mora, K. S.; Caramete, L.; Caruso, R.; Castellina, A.; Catalano, O.; Cataldi, G.; Cazon, L.; Cester, R.; Chauvin, J.; Cheng, S. H.; Chiavassa, A.; Chinellato, J. A.; Chou, A.; Chudoba, J.; Clay, R. W.; Coluccia, M. R.; Conceição, R.; Contreras, F.; Cook, H.; Cooper, M. J.; Coppens, J.; Cordier, A.; Cotti, U.; Coutu, S.; Covault, C. E.; Creusot, A.; Criss, A.; Cronin, J.; Curutiu, A.; Dagoret-Campagne, S.; Dallier, R.; Dasso, S.; Daumiller, K.; Dawson, B. R.; de Almeida, R. M.; de Domenico, M.; de Donato, C.; de Jong, S. J.; de La Vega, G.; de Mello Junior, W. J. M.; de Mello Neto, J. R. T.; de Mitri, I.; de Souza, V.; de Vries, K. D.; Decerprit, G.; Del Peral, L.; Deligny, O.; Dembinski, H.; Dhital, N.; di Giulio, C.; Diaz, J. C.; Díaz Castro, M. L.; Diep, P. N.; Dobrigkeit, C.; Docters, W.; D'Olivo, J. C.; Dong, P. N.; Dorofeev, A.; Dos Anjos, J. C.; Dova, M. T.; D'Urso, D.; Dutan, I.; Ebr, J.; Engel, R.; Erdmann, M.; Escobar, C. O.; Etchegoyen, A.; Facal San Luis, P.; Fajardo Tapia, I.; Falcke, H.; Farrar, G.; Fauth, A. C.; Fazzini, N.; Ferguson, A. P.; Ferrero, A.; Fick, B.; Filevich, A.; Filipčič, A.; Fliescher, S.; Fracchiolla, C. E.; Fraenkel, E. D.; Fröhlich, U.; Fuchs, B.; Gaior, R.; Gamarra, R. F.; Gambetta, S.; García, B.; García Gámez, D.; Garcia-Pinto, D.; Gascon, A.; Gemmeke, H.; Gesterling, K.; Ghia, P. L.; Giaccari, U.; Giller, M.; Glass, H.; Gold, M. S.; Golup, G.; Gomez Albarracin, F.; Gómez Berisso, M.; Gonçalves, P.; Gonzalez, D.; Gonzalez, J. G.; Gookin, B.; Góra, D.; Gorgi, A.; Gouffon, P.; Gozzini, S. R.; Grashorn, E.; Grebe, S.; Griffith, N.; Grigat, M.; Grillo, A. F.; Guardincerri, Y.; Guarino, F.; Guedes, G. P.; Guzman, A.; Hague, J. D.; Hansen, P.; Harari, D.; Harmsma, S.; Harton, J. L.; Haungs, A.; Hebbeker, T.; Heck, D.; Herve, A. E.; Hojvat, C.; Hollon, N.; Holmes, V. C.; Homola, P.; Hörandel, J. R.; Horneffer, A.; Hrabovský, M.; Huege, T.; Insolia, A.; Ionita, F.; Italiano, A.; Jarne, C.; Jiraskova, S.; Kadija, K.; Kampert, K. H.; Karhan, P.; Kasper, P.; Kégl, B.; Keilhauer, B.; Keivani, A.; Kelley, J. L.; Kemp, E.; Kieckhafer, R. M.; Klages, H. O.; Kleifges, M.; Kleinfeller, J.; Knapp, J.; Koang, D.-H.; Kotera, K.; Krohm, N.; Krömer, O.; Kruppke-Hansen, D.; Kuehn, F.; Kuempel, D.; Kulbartz, J. K.; Kunka, N.; La Rosa, G.; Lachaud, C.; Lautridou, P.; Leão, M. S. A. B.; Lebrun, D.; Lebrun, P.; Leigui de Oliveira, M. A.; Lemiere, A.; Letessier-Selvon, A.; Lhenry-Yvon, I.; Link, K.; López, R.; Lopez Agüera, A.; Louedec, K.; Lozano Bahilo, J.; Lucero, A.; Ludwig, M.; Lyberis, H.; Maccarone, M. C.; Macolino, C.; Maldera, S.; Mandat, D.; Mantsch, P.; Mariazzi, A. G.; Marin, J.; Marin, V.; Maris, I. C.; Marquez Falcon, H. 
R.; Marsella, G.; Martello, D.; Martin, L.; Martinez, H.; Martínez Bravo, O.; Mathes, H. J.; Matthews, J.; Matthews, J. A. J.; Matthiae, G.; Maurizio, D.; Mazur, P. O.; Medina-Tanco, G.; Melissas, M.; Melo, D.; Menichetti, E.; Menshikov, A.; Mertsch, P.; Meurer, C.; Mićanović, S.; Micheletti, M. I.; Miller, W.; Miramonti, L.; Mollerach, S.; Monasor, M.; Monnier Ragaigne, D.; Montanet, F.; Morales, B.; Morello, C.; Moreno, E.; Moreno, J. C.; Morris, C.; Mostafá, M.; Moura, C. A.; Mueller, S.; Muller, M. A.; Müller, G.; Münchmeyer, M.; Mussa, R.; Navarra, G.; Navarro, J. L.; Navas, S.; Necesal, P.; Nellen, L.; Nelles, A.; Nhung, P. T.; Niemietz, L.; Nierstenhoefer, N.; Nitz, D.; Nosek, D.; Nožka, L.; Nyklicek, M.; Oehlschläger, J.; Olinto, A.; Oliva, P.; Olmos-Gilbaja, V. M.; Ortiz, M.; Pacheco, N.; Pakk Selmi-Dei, D.; Palatka, M.; Pallotta, J.; Palmieri, N.; Parente, G.; Parizot, E.; Parra, A.; Parsons, R. D.; Pastor, S.; Paul, T.; Pech, M.; Pȩkala, J.; Pelayo, R.; Pepe, I. M.; Perrone, L.; Pesce, R.; Petermann, E.; Petrera, S.; Petrinca, P.; Petrolini, A.; Petrov, Y.; Petrovic, J.; Pfendner, C.; Phan, N.; Piegaia, R.; Pierog, T.; Pieroni, P.; Pimenta, M.; Pirronello, V.; Platino, M.; Ponce, V. H.; Pontz, M.; Privitera, P.; Prouza, M.; Quel, E. J.; Querchfeld, S.; Rautenberg, J.; Ravel, O.; Ravignani, D.; Revenu, B.; Ridky, J.; Riggi, S.; Risse, M.; Ristori, P.; Rivera, H.; Rizi, V.; Roberts, J.; Robledo, C.; Rodrigues de Carvalho, W.; Rodriguez, G.; Rodriguez Martino, J.; Rodriguez Rojo, J.; Rodriguez-Cabo, I.; Rodríguez-Frías, M. D.; Ros, G.; Rosado, J.; Rossler, T.; Roth, M.; Rouillé-D'Orfeuil, B.; Roulet, E.; Rovero, A. C.; Rühle, C.; Salamida, F.; Salazar, H.; Salina, G.; Sánchez, F.; Santander, M.; Santo, C. E.; Santos, E.; Santos, E. M.; Sarazin, F.; Sarkar, B.; Sarkar, S.; Sato, R.; Scharf, N.; Scherini, V.; Schieler, H.; Schiffer, P.; Schmidt, A.; Schmidt, F.; Schmidt, T.; Scholten, O.; Schoorlemmer, H.; Schovancova, J.; Schovánek, P.; Schröder, F.; Schulte, S.; Schuster, D.; Sciutto, S. J.; Scuderi, M.; Segreto, A.; Settimo, M.; Shadkam, A.; Shellard, R. C.; Sidelnik, I.; Sigl, G.; Silva Lopez, H. H.; Śmiałkowski, A.; Šmída, R.; Snow, G. R.; Sommers, P.; Sorokin, J.; Spinka, H.; Squartini, R.; Stapleton, J.; Stasielak, J.; Stephan, M.; Strazzeri, E.; Stutz, A.; Suarez, F.; Suomijärvi, T.; Supanitsky, A. D.; Šuša, T.; Sutherland, M. S.; Swain, J.; Szadkowski, Z.; Szuba, M.; Tamashiro, A.; Tapia, A.; Tartare, M.; Taşcău, O.; Tavera Ruiz, C. G.; Tcaciuc, R.; Tegolo, D.; Thao, N. T.; Thomas, D.; Tiffenberg, J.; Timmermans, C.; Tiwari, D. K.; Tkaczyk, W.; Todero Peixoto, C. J.; Tomé, B.; Tonachini, A.; Travnicek, P.; Tridapalli, D. B.; Tristram, G.; Trovato, E.; Tueros, M.; Ulrich, R.; Unger, M.; Urban, M.; Valdés Galicia, J. F.; Valiño, I.; Valore, L.; van den Berg, A. M.; Varela, E.; Vargas Cárdenas, B.; Vázquez, J. R.; Vázquez, R. A.; Veberič, D.; Verzi, V.; Vicha, J.; Videla, M.; Villaseñor, L.; Wahlberg, H.; Wahrlich, P.; Wainberg, O.; Warner, D.; Watson, A. A.; Weber, M.; Weidenhaupt, K.; Weindl, A.; Westerhoff, S.; Whelan, B. J.; Wieczorek, G.; Wiencke, L.; Wilczyńska, B.; Wilczyński, H.; Will, M.; Williams, C.; Winchen, T.; Winders, L.; Winnick, M. G.; Wommer, M.; Wundheiler, B.; Yamamoto, T.; Yapici, T.; Younk, P.; Yuan, G.; Yushkov, A.; Zamorano, B.; Zas, E.; Zavrtanik, D.; Zavrtanik, M.; Zaw, I.; Zepeda, A.; Ziolkowski, M.

    2011-12-01

    In this paper we introduce the concept of the Lateral Trigger Probability (LTP) function, i.e., the probability for an Extensive Air Shower (EAS) to trigger an individual detector of a ground-based array as a function of distance to the shower axis, taking into account energy, mass and direction of the primary cosmic ray. We apply this concept to the surface array of the Pierre Auger Observatory, consisting of a 1.5 km spaced grid of about 1600 water Cherenkov stations. Using Monte Carlo simulations of ultra-high energy showers, the LTP functions are derived for energies in the range between 10^17 and 10^19 eV and zenith angles up to 65°. A parametrization combining a step function with an exponential is found to reproduce them very well in the considered range of energies and zenith angles. The LTP functions can also be obtained from data, using events simultaneously observed by the fluorescence and the surface detector of the Pierre Auger Observatory (hybrid events). We validate the Monte Carlo results by showing that the LTP functions from data are in good agreement with simulations.
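
    One simple way to realize "a step function combined with an exponential" is sketched below: unit trigger probability out to a break distance, with an exponential fall-off beyond it. The functional form and the parameter values are assumptions for illustration; the Auger parametrization depends on energy, zenith angle and primary mass and is not reproduced here.

    ```python
    # Illustrative step-plus-exponential lateral trigger probability.
    import numpy as np

    def lateral_trigger_probability(r, r0=350.0, lam=180.0):
        """Unit trigger probability out to r0 (m), exponential decay beyond it.
        r0 and lam are illustrative, not fitted Auger values."""
        r = np.asarray(r, dtype=float)
        return np.where(r <= r0, 1.0, np.exp(-(r - r0) / lam))

    print(lateral_trigger_probability([100.0, 400.0, 800.0, 1500.0]).round(3))
    ```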

  5. The insignificant evolution of the richness-mass relation of galaxy clusters

    NASA Astrophysics Data System (ADS)

    Andreon, S.; Congdon, P.

    2014-08-01

    We analysed the richness-mass scaling of 23 very massive clusters at 0.15 < z < 0.55 with homogeneously measured weak-lensing masses and richnesses within a fixed aperture of 0.5 Mpc radius. We found that the richness-mass scaling is very tight (the scatter is <0.09 dex with 90% probability) and independent of cluster evolutionary status and morphology. This implies a close association between infall and evolution of dark matter and galaxies in the central region of clusters. We also found that the evolution of the richness-mass intercept is minor at most, and, given the minor mass evolution across the studied redshift range, the richness evolution of individual massive clusters also turns out to be very small. Finally, it was paramount to account for the cluster mass function and the selection function. Ignoring them would lead to larger biases than the (otherwise quoted) errors. Our study benefits from: a) weak-lensing masses instead of proxy-based masses, thereby removing the ambiguity between a real trend and one induced by an accounted evolution of the used mass proxy; b) the use of projected masses that simplify the statistical analysis, thereby not requiring consideration of the unknown covariance induced by the cluster orientation/triaxiality; c) the use of aperture masses as they are free of the pseudo-evolution of mass definitions anchored to the evolving density of the Universe; d) a proper accounting of the sample selection function and of the Malmquist-like effect induced by the cluster mass function; e) cosmological simulations for the computation of the cluster mass function, its evolution, and the mass growth of each individual cluster.

  6. Landscape of little hierarchy

    NASA Astrophysics Data System (ADS)

    Dutta, Bhaskar; Mimura, Yukihiro

    2007-05-01

    We investigate the little hierarchy between the Z boson mass and the SUSY breaking scale in the context of the landscape of electroweak symmetry breaking vacua. We consider radiative symmetry breaking and find that the scale where the electroweak symmetry breaking conditions are satisfied and the average stop mass scale are preferred to be very close to each other, in spite of the fact that their origins depend on different parameters of the model. If the electroweak symmetry breaking scale is fixed at about 1 TeV by the supersymmetry model parameters, then the little hierarchy seems to be preferred among the electroweak symmetry breaking vacua. We characterize the little hierarchy by a probability function, and the mSUGRA model is used as an example to show the 90% and 95% probability contours in the experimentally allowed region. We also investigate the size of the Higgsino mass μ by considering the distribution of the electroweak symmetry breaking scale.

  7. The He I 2.06 microns/Br-gamma ratio in starburst galaxies - An objective constraint on the upper mass limit to the initial mass function

    NASA Technical Reports Server (NTRS)

    Doyon, Rene; Puxley, P. J.; Joseph, R. D.

    1992-01-01

    The use of the He I 2.06 microns/Br-gamma ratio as a constraint on the massive stellar population in star-forming galaxies is developed. A theoretical relationship between the He I 2.06 microns/Br-gamma ratio and the effective temperature of the exciting star in H II regions is derived. The effects of collisional excitation and dust within the nebula on the ratio are also considered. It is shown that the He I 2.06 microns/Br-gamma ratio is a steep function of the effective temperature, a property which can be used to determine the upper mass limit of the initial mass function (IMF) in galaxies. This technique is reliable for upper mass limits less than about 40 solar masses. New near-infrared spectra of starburst galaxies are presented. The He I 2.06 microns/Br-gamma ratios observed imply a range of upper mass limits from 27 to over 40 solar masses. There is also evidence that the upper mass limit is spatially dependent within a given galaxy. These results suggest that the upper mass limit is not a uniquely defined parameter of the IMF and probably varies with local physical conditions.

  8. Testing anthropic reasoning for the cosmological constant with a realistic galaxy formation model

    NASA Astrophysics Data System (ADS)

    Sudoh, Takahiro; Totani, Tomonori; Makiya, Ryu; Nagashima, Masahiro

    2017-01-01

    The anthropic principle is one of the possible explanations for the cosmological constant (Λ) problem. In previous studies, a dark halo mass threshold comparable to that of our Galaxy had to be assumed in galaxy formation to obtain a reasonably large probability of finding the observed small value, P(<Λobs), even though stars are found in much smaller galaxies as well. Here we examine the anthropic argument by using a semi-analytic model of cosmological galaxy formation, which can reproduce many observations such as galaxy luminosity functions. We calculate the probability distribution of Λ by running the model code for a wide range of Λ, while other cosmological parameters and model parameters for baryonic processes of galaxy formation are kept constant. Assuming that the prior probability distribution is flat per unit Λ, and that the number of observers is proportional to stellar mass, we find P(<Λobs) = 6.7 per cent without introducing any galaxy mass threshold. We also investigate the effect of metallicity; we find P(<Λobs) = 9.0 per cent if observers exist only in galaxies whose metallicity is higher than the solar abundance. If the number of observers is proportional to metallicity, we find P(<Λobs) = 9.7 per cent. Since these probabilities are not extremely small, we conclude that the anthropic argument is a viable explanation, if the value of Λ observed in our Universe is determined by a probability distribution.
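
    The quoted probability is essentially a weighted cumulative fraction. A minimal sketch of the bookkeeping, with a flat prior per unit Λ and an observer weight proportional to stellar mass as in the abstract, is shown below; the stellar-mass curve is a toy assumption standing in for the output of the galaxy-formation model, so the printed number is illustrative only.

    ```python
    # Toy bookkeeping for P(< Lambda_obs) with a flat prior and stellar-mass weighting.
    import numpy as np

    # Hypothetical galaxy-formation output: total stellar mass formed per comoving
    # volume as a function of Lambda (in units of the observed value), arbitrary units.
    lam_grid = np.linspace(0.0, 100.0, 201)
    stellar_mass = 1.0 / (1.0 + (lam_grid / 8.0) ** 3)   # toy suppression at large Lambda

    weights = stellar_mass                      # flat prior per unit Lambda x observer count
    p_below_obs = weights[lam_grid <= 1.0].sum() / weights.sum()
    print(f"P(< Lambda_obs) = {100 * p_below_obs:.1f}%")
    ```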

  9. Random Partition Distribution Indexed by Pairwise Information

    PubMed Central

    Dahl, David B.; Day, Ryan; Tsai, Jerry W.

    2017-01-01

    We propose a random partition distribution indexed by pairwise similarity information such that partitions compatible with the similarities are given more probability. The use of pairwise similarities, in the form of distances, is common in some clustering algorithms (e.g., hierarchical clustering), but we show how to use this type of information to define a prior partition distribution for flexible Bayesian modeling. A defining feature of the distribution is that it allocates probability among partitions within a given number of subsets, but it does not shift probability among sets of partitions with different numbers of subsets. Our distribution places more probability on partitions that group similar items yet keeps the total probability of partitions with a given number of subsets constant. The distribution of the number of subsets (and its moments) is available in closed-form and is not a function of the similarities. Our formulation has an explicit probability mass function (with a tractable normalizing constant) so the full suite of MCMC methods may be used for posterior inference. We compare our distribution with several existing partition distributions, showing that our formulation has attractive properties. We provide three demonstrations to highlight the features and relative performance of our distribution. PMID:29276318

  10. Counts of galaxy clusters as cosmological probes: the impact of baryonic physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balaguera-Antolínez, Andrés; Porciani, Cristiano, E-mail: abalan@astro.uni-bonn.de, E-mail: porciani@astro.uni-bonn.de

    2013-04-01

    The halo mass function from N-body simulations of collisionless matter is generally used to retrieve cosmological parameters from observed counts of galaxy clusters. This neglects the observational fact that the baryonic mass fraction in clusters is a random variable that, on average, increases with the total mass (within an overdensity of 500). Considering a mock catalog that includes tens of thousands of galaxy clusters, as expected from the forthcoming generation of surveys, we show that the effect of a varying baryonic mass fraction will be observable with high statistical significance. The net effect is a change in the overall normalization of the cluster mass function and a milder modification of its shape. Our results indicate the necessity of taking into account baryonic corrections to the mass function if one wants to obtain unbiased estimates of the cosmological parameters from data of this quality. We introduce the formalism necessary to accomplish this goal. Our discussion is based on the conditional probability of finding a given value of the baryonic mass fraction for clusters of fixed total mass. Finally, we show that combining information from the cluster counts with measurements of the baryonic mass fraction in a small subsample of clusters (including only a few tens of objects) will nearly optimally constrain the cosmological parameters.

  11. The Seven Sisters DANCe. I. Empirical isochrones, luminosity, and mass functions of the Pleiades cluster

    NASA Astrophysics Data System (ADS)

    Bouy, H.; Bertin, E.; Sarro, L. M.; Barrado, D.; Moraux, E.; Bouvier, J.; Cuillandre, J.-C.; Berihuete, A.; Olivares, J.; Beletsky, Y.

    2015-05-01

    Context. The DANCe survey provides photometric and astrometric (position and proper motion) measurements for approximately 2 million unique sources in a region encompassing ~80 deg^2 centered on the Pleiades cluster. Aims: We aim at deriving a complete census of the Pleiades and measuring the mass and luminosity functions of the cluster. Methods: Using the probabilistic selection method previously described, we identified high-probability members in the DANCe (i ≥ 14 mag) and Tycho-2 (V ≲ 12 mag) catalogues and studied the properties of the cluster over the corresponding luminosity range. Results: We find a total of 2109 high-probability members, of which 812 are new, making it the most extensive and complete census of the cluster to date. The luminosity and mass functions of the cluster are computed from the most massive members down to ~0.025 M⊙. The size, sensitivity, and quality of the sample result in the most precise luminosity and mass functions observed to date for a cluster. Conclusions: Our census supersedes previous studies of the Pleiades cluster populations, in terms of both sensitivity and accuracy. Based on service observations made with the William Herschel Telescope operated on the island of La Palma by the Isaac Newton Group in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofísica de Canarias. Table 1 and Appendices are available in electronic form at http://www.aanda.org. DANCe catalogs (Tables 6 and 7) and full Tables 2-5 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/577/A148

  12. The orbit of the Cepheid AW Per

    NASA Technical Reports Server (NTRS)

    Evans, Nancy Remage; Welch, Douglas L.

    1988-01-01

    An orbit for the classical Cepheid AW Per was derived. Phase residuals from the light curve are consistent with the light-time effect from the orbit. The companion was studied using IUE spectra. The flux distribution from 1300 to 1700 A is unusual, probably an extreme PbSi star, comparable to a B7V or B8V star. The flux of the composite spectrum from 1200 A through V is well matched by F7Ib and B8V standard stars with ΔM_V = 3.1 mag. The mass function from the orbit indicates that the mass of the Cepheid must be greater than 4.7 solar masses if it is the more massive component. A B7V to B8V companion is compatible with the 1 sigma lower limit (3.5 solar masses) from the mass function. This implies that the Cepheid has the same mass, but the large magnitude difference rules this out. It is likely that the companion is itself a binary.

  13. Probability density function approach for compressible turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Hsu, A. T.; Tsai, Y.-L. P.; Raju, M. S.

    1994-01-01

    The objective of the present work is to extend the probability density function (PDF) turbulence model to compressible reacting flows. The probability density functions of the species mass fractions and enthalpy are obtained by solving a PDF evolution equation using a Monte Carlo scheme. The PDF solution procedure is coupled with a compressible finite-volume flow solver which provides the velocity and pressure fields. A modeled PDF equation for compressible flows, capable of treating flows with shock waves and suitable for the present coupling scheme, is proposed and tested. Convergence of the combined finite-volume Monte Carlo solution procedure is discussed. Two supersonic diffusion flames are studied using the proposed PDF model and the results are compared with experimental data; marked improvements over solutions without PDF are observed.

  14. A Novel Strategy for Numerical Simulation of High-speed Turbulent Reacting Flows

    NASA Technical Reports Server (NTRS)

    Sheikhi, M. R. H.; Drozda, T. G.; Givi, P.

    2003-01-01

    The objective of this research is to improve and implement the filtered mass density function (FDF) methodology for large eddy simulation (LES) of high-speed reacting turbulent flows. We have just completed Year 1 of this research. This is the Final Report on our activities during the period January 1, 2003 to December 31, 2003. In the efforts during the past year, LES of the Sandia Flame D, a turbulent piloted nonpremixed methane jet flame, is conducted. The subgrid scale (SGS) closure is based on the scalar filtered mass density function (SFMDF) methodology. The SFMDF is basically the mass-weighted probability density function (PDF) of the SGS scalar quantities. For this flame (which exhibits little local extinction), a simple flamelet model is used to relate the instantaneous composition to the mixture fraction. The modelled SFMDF transport equation is solved by a hybrid finite-difference/Monte Carlo scheme.

  15. Strong lensing probability in TeVeS (tensor-vector-scalar) theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen Daming, E-mail: cdm@bao.ac.cn

    2008-01-15

    We recalculate the strong lensing probability as a function of the image separation in TeVeS (tensor-vector-scalar) cosmology, which is a relativistic version of MOND (MOdified Newtonian Dynamics). The lens is modeled by the Hernquist profile. We assume an open cosmology with Ωb = 0.04 and ΩΛ = 0.5 and three different kinds of interpolating functions. Two different galaxy stellar mass functions (GSMF) are adopted: PHJ (Panter, Heavens and Jimenez 2004 Mon. Not. R. Astron. Soc. 355 764) determined from SDSS data release 1 and Fontana (Fontana et al 2006 Astron. Astrophys. 459 745) from GOODS-MUSIC catalog. We compare our results with both the predicted probabilities for lenses from singular isothermal sphere galaxy halos in LCDM (Lambda cold dark matter) with a Schechter-fit velocity function, and the observational results for the well defined combined sample of the Cosmic Lens All-Sky Survey (CLASS) and Jodrell Bank/Very Large Array Astrometric Survey (JVAS). It turns out that the interpolating function μ(x) = x/(1+x) combined with Fontana GSMF matches the results from CLASS/JVAS quite well.

  16. Large deviation principle at work: Computation of the statistical properties of the exact one-point aperture mass

    NASA Astrophysics Data System (ADS)

    Reimberg, Paulo; Bernardeau, Francis

    2018-01-01

    We present a formalism based on the large deviation principle (LDP) applied to cosmological density fields, and more specifically to arbitrary functionals of density profiles, and we apply it to the derivation of the cumulant generating function and one-point probability distribution function (PDF) of the aperture mass (M_ap), a common observable for cosmic shear observations. We show that the LDP can indeed be used in practice for a much larger family of observables than previously envisioned, such as those built from continuous and nonlinear functionals of density profiles. Taking advantage of this formalism, we can extend previous results, which were based on crude definitions of the aperture mass, with top-hat windows and the use of the reduced shear approximation (replacing the reduced shear with the shear itself). We were able to quantify precisely how this latter approximation affects the M_ap statistical properties. In particular, we derive the corrective term for the skewness of M_ap and reconstruct its one-point PDF.

  17. Parasitism alters three power laws of scaling in a metazoan community: Taylor’s law, density-mass allometry, and variance-mass allometry

    PubMed Central

    Lagrue, Clément; Poulin, Robert; Cohen, Joel E.

    2015-01-01

    How do the lifestyles (free-living unparasitized, free-living parasitized, and parasitic) of animal species affect major ecological power-law relationships? We investigated this question in metazoan communities in lakes of Otago, New Zealand. In 13,752 samples comprising 1,037,058 organisms, we found that species of different lifestyles differed in taxonomic distribution and body mass and were well described by three power laws: a spatial Taylor’s law (the spatial variance in population density was a power-law function of the spatial mean population density); density-mass allometry (the spatial mean population density was a power-law function of mean body mass); and variance-mass allometry (the spatial variance in population density was a power-law function of mean body mass). To our knowledge, this constitutes the first empirical confirmation of variance-mass allometry for any animal community. We found that the parameter values of all three relationships differed for species with different lifestyles in the same communities. Taylor's law and density-mass allometry accurately predicted the form and parameter values of variance-mass allometry. We conclude that species of different lifestyles in these metazoan communities obeyed the same major ecological power-law relationships but did so with parameters specific to each lifestyle, probably reflecting differences among lifestyles in population dynamics and spatial distribution. PMID:25550506
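
    Each of the three power laws can be estimated as a straight-line fit in log-log space. A minimal sketch for the spatial Taylor's law is shown below, with synthetic per-species means and variances standing in for the lake data; the same regression applies to density-mass and variance-mass allometry with mean body mass as the predictor.

    ```python
    # Fit Taylor's law (variance = a * mean^b) by log-log linear regression.
    import numpy as np

    # Hypothetical per-species spatial means and variances of population density.
    rng = np.random.default_rng(3)
    mean_density = 10 ** rng.uniform(-1.0, 3.0, size=60)
    true_a, true_b = 2.0, 1.8
    variance = true_a * mean_density ** true_b * 10 ** rng.normal(0.0, 0.15, size=60)

    # log10(var) = log10(a) + b * log10(mean)
    b, log_a = np.polyfit(np.log10(mean_density), np.log10(variance), 1)
    print(f"Taylor's law exponent b = {b:.2f}, coefficient a = {10 ** log_a:.2f}")
    ```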

  18. Parasitism alters three power laws of scaling in a metazoan community: Taylor's law, density-mass allometry, and variance-mass allometry.

    PubMed

    Lagrue, Clément; Poulin, Robert; Cohen, Joel E

    2015-02-10

    How do the lifestyles (free-living unparasitized, free-living parasitized, and parasitic) of animal species affect major ecological power-law relationships? We investigated this question in metazoan communities in lakes of Otago, New Zealand. In 13,752 samples comprising 1,037,058 organisms, we found that species of different lifestyles differed in taxonomic distribution and body mass and were well described by three power laws: a spatial Taylor's law (the spatial variance in population density was a power-law function of the spatial mean population density); density-mass allometry (the spatial mean population density was a power-law function of mean body mass); and variance-mass allometry (the spatial variance in population density was a power-law function of mean body mass). To our knowledge, this constitutes the first empirical confirmation of variance-mass allometry for any animal community. We found that the parameter values of all three relationships differed for species with different lifestyles in the same communities. Taylor's law and density-mass allometry accurately predicted the form and parameter values of variance-mass allometry. We conclude that species of different lifestyles in these metazoan communities obeyed the same major ecological power-law relationships but did so with parameters specific to each lifestyle, probably reflecting differences among lifestyles in population dynamics and spatial distribution.

  19. Binomial probability distribution model-based protein identification algorithm for tandem mass spectrometry utilizing peak intensity information.

    PubMed

    Xiao, Chuan-Le; Chen, Xiao-Zhou; Du, Yang-Li; Sun, Xuesong; Zhang, Gong; He, Qing-Yu

    2013-01-04

    Mass spectrometry has become one of the most important technologies in proteomic analysis. Tandem mass spectrometry (LC-MS/MS) is a major tool for the analysis of peptide mixtures from protein samples. The key step of MS data processing is the identification of peptides from experimental spectra by searching public sequence databases. Although a number of algorithms to identify peptides from MS/MS data have been already proposed, e.g. Sequest, OMSSA, X!Tandem, Mascot, etc., they are mainly based on statistical models considering only peak-matches between experimental and theoretical spectra, but not peak intensity information. Moreover, different algorithms gave different results from the same MS data, implying their probable incompleteness and questionable reproducibility. We developed a novel peptide identification algorithm, ProVerB, based on a binomial probability distribution model of protein tandem mass spectrometry combined with a new scoring function, making full use of peak intensity information and, thus, enhancing the ability of identification. Compared with Mascot, Sequest, and SQID, ProVerB identified significantly more peptides from LC-MS/MS data sets than the current algorithms at 1% False Discovery Rate (FDR) and provided more confident peptide identifications. ProVerB is also compatible with various platforms and experimental data sets, showing its robustness and versatility. The open-source program ProVerB is available at http://bioinformatics.jnu.edu.cn/software/proverb/ .
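
    The core of a binomial scoring model can be sketched in a few lines: the chance probability of matching at least k of n theoretical fragment peaks, given a per-peak random-match probability, is a binomial tail, and its negative logarithm serves as a score. This omits the peak-intensity weighting that distinguishes ProVerB, and the numbers are illustrative.

    ```python
    # Binomial-tail scoring of a peptide-spectrum match.
    import math
    from scipy.stats import binom

    def binomial_match_score(n_theoretical, n_matched, p_random=0.04):
        """-log10 of the chance probability of matching at least n_matched of the
        n_theoretical fragment peaks; p_random would be set from the fragment mass
        tolerance and the spectrum's peak density (illustrative value here)."""
        p_value = binom.sf(n_matched - 1, n_theoretical, p_random)
        return -math.log10(p_value)

    print(f"score for 9/20 matched peaks: {binomial_match_score(20, 9):.1f}")
    ```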

  20. Confidence level estimation in multi-target classification problems

    NASA Astrophysics Data System (ADS)

    Chang, Shi; Isaacs, Jason; Fu, Bo; Shin, Jaejeong; Zhu, Pingping; Ferrari, Silvia

    2018-04-01

    This paper presents an approach for estimating the confidence level in automatic multi-target classification performed by an imaging sensor on an unmanned vehicle. An automatic target recognition algorithm comprised of a deep convolutional neural network in series with a support vector machine classifier detects and classifies targets based on the image matrix. The joint posterior probability mass function of target class, features, and classification estimates is learned from labeled data, and recursively updated as additional images become available. Based on the learned joint probability mass function, the approach presented in this paper predicts the expected confidence level of future target classifications, prior to obtaining new images. The proposed approach is tested with a set of simulated sonar image data. The numerical results show that the estimated confidence level provides a close approximation to the actual confidence level value determined a posteriori, i.e. after the new image is obtained by the on-board sensor. Therefore, the expected confidence level function presented in this paper can be used to adaptively plan the path of the unmanned vehicle so as to optimize the expected confidence levels and ensure that all targets are classified with satisfactory confidence after the path is executed.
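
    A minimal sketch of the confidence bookkeeping described here is given below, assuming the learned joint pmf has been discretized over true class, a scalar feature bin, and predicted class, and that the anticipated feature distribution for the next measurement is supplied by the path planner. Both arrays are hypothetical and the discretization is an assumption made for illustration.

    ```python
    # Expected confidence of the next classification from a discretized joint pmf.
    import numpy as np

    rng = np.random.default_rng(4)

    # Hypothetical joint pmf P(true class, feature bin, predicted class) learned from labels.
    n_classes, n_bins = 3, 5
    joint = rng.random((n_classes, n_bins, n_classes))
    joint /= joint.sum()

    def expected_confidence(joint_pmf, feature_dist):
        """Expected confidence of the next classification, prior to imaging,
        for an anticipated feature-bin distribution feature_dist (length n_bins)."""
        total = 0.0
        for f, q_f in enumerate(feature_dist):
            p_yf = joint_pmf[:, f, :]          # joint over (true, predicted) at feature bin f
            p_f = p_yf.sum()
            if p_f == 0.0:
                continue
            p_pred = p_yf.sum(axis=0)          # proportional to P(predicted | f)
            # confidence of predicting class c at feature bin f: P(true = c | pred = c, f)
            conf = np.divide(np.diag(p_yf), p_pred, out=np.zeros_like(p_pred), where=p_pred > 0)
            total += q_f * np.sum((p_pred / p_f) * conf)
        return total

    anticipated = np.array([0.1, 0.2, 0.4, 0.2, 0.1])   # assumed feature distribution along the planned path
    print(f"expected confidence = {expected_confidence(joint, anticipated):.3f}")
    ```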

  1. Massive, wide binaries as tracers of massive star formation

    NASA Astrophysics Data System (ADS)

    Griffiths, Daniel W.; Goodwin, Simon P.; Caballero-Nieves, Saida M.

    2018-05-01

    Massive stars can be found in wide (hundreds to thousands au) binaries with other massive stars. We use N-body simulations to show that any bound cluster should always have approximately one massive wide binary: one will probably form if none are present initially, and probably only one will survive if more than one is present initially. Therefore, any region that contains many massive wide binaries must have been composed of many individual subregions. Observations of Cyg OB2 show that the massive wide binary fraction is at least a half (38/74), which suggests that Cyg OB2 had at least 30 distinct massive star formation sites. This is further evidence that Cyg OB2 has always been a large, low-density association. That Cyg OB2 has a normal high-mass initial mass function (IMF) for its total mass suggests that however massive stars form, they `randomly sample' the IMF (as the massive stars did not `know' about each other).

  2. Photofission of 197Au and 209Bi at intermediate energies

    NASA Astrophysics Data System (ADS)

    Haba, H.; Sakamoto, K.; Igarashi, M.; Kasaoka, M.; Washiyama, K.; Matsumura, H.; Oura, Y.; Shibata, S.; Furukawa, M.; Fujiwara, I.

    2003-01-01

    Recoil properties and yields of radionuclides formed in the photofission of 197Au and 209Bi by bremsstrahlung of end-point energies (E_0) from 300 to 1100 MeV have been investigated using the thick-target thick-catcher method. The kinetic energies T of the residual nuclei were deduced based on the two-step vector model and discussed by comparing with the reported results on proton-induced reactions as well as those on photospallation. The charge distribution was reproduced by a Gaussian function with the most probable charge Z_p expressed by a linear function of the product mass number A and with an A-independent width FWHM_CD. Based on the charge distribution parameters, the symmetric mass yield distribution with the most probable mass A_p of 92 m.u. and the width FWHM_MD of 39 m.u. was obtained for 197Au at E_0 ≥ 600 MeV. The A_p value for 209Bi was larger by 4 m.u. than that for 197Au and the FWHM_MD was smaller by 6 m.u. A comparison with the calculations using the Photon-induced Intranuclear Cascade Analysis 3 code combined with the Generalized Evaporation Model code (PICA3/GEM) was also performed.
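
    Given the quoted parameters, the symmetric mass-yield curve for 197Au at E_0 ≥ 600 MeV can be written down directly; the sketch below simply evaluates a Gaussian with the most probable mass and FWHM from the abstract. The absolute normalization is omitted, so the values are relative yields only.

    ```python
    # Relative symmetric mass-yield curve for 197Au photofission (A_p = 92, FWHM = 39 m.u.).
    import numpy as np

    def gaussian_yield(a, a_p=92.0, fwhm=39.0):
        """Gaussian in product mass number with the quoted most probable mass and FWHM."""
        sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
        return np.exp(-0.5 * ((a - a_p) / sigma) ** 2)

    a = np.arange(60, 125, 5)
    print(np.round(gaussian_yield(a), 3))
    ```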

  3. Giant Planet Occurrence Rate as a Function of Stellar Mass

    NASA Astrophysics Data System (ADS)

    Reffert, Sabine; Bergmann, Christoph; Quirrenbach, Andreas; Trifonov, Trifon; Künstler, Andreas

    2013-07-01

    For over 12 years we have carried out a Doppler survey at Lick Observatory, identifying 15 planets and 20 candidate planets in a sample of 373 G and K giant stars. We investigate the giant planet occurrence rate as a function of stellar mass and metallicity in this sample, which covers the mass range from about 1 to 3.5-5.0 solar masses. We confirm the presence of a strong planet-metallicity correlation in our giant star sample, which is fully consistent with the well-known planet-metallicity correlation for main-sequence stars. Furthermore, we find a very strong dependence of the giant planet occurrence rate on stellar mass, which we fit with a Gaussian distribution. Stars with masses of about 1.9 solar masses have the highest probability of hosting a giant planet, whereas the planet occurrence rate drops rapidly for masses larger than 2.5 to 3.0 solar masses. We do not find any planets around stars more massive than 2.7 solar masses, although we have 113 stars with masses between 2.7 and 5.0 solar masses in our sample (planet occurrence rate in that mass range: 0%, +1.6% at 68.3% confidence). This result is not due to a bias related to planet detectability as a function of stellar mass. We conclude that larger mass stars do not form giant planets which are observable at orbital distances of a few AU today. Possible reasons include slower growth rate due to the snow-line being located further out, longer migration timescale and faster disk depletion.

  4. VizieR Online Data Catalog: The Seven Sisters DANCe. I. Pleiades (Bouy+, 2015)

    NASA Astrophysics Data System (ADS)

    Bouy, H.; Bertin, E.; Sarro, L. M.; Barrado, D.; Moraux, E.; Bouvier, J.; Cuillandre, J.-C.; Berihuete, A.; Olivares, J.; Beletsky, Y.

    2015-02-01

    Position, proper motion, multi-wavelength ugrizYJHK photometry and membership probability to the Pleiades cluster for 1972245 sources. Present-day system bolometric luminosity and mass-functions of the Pleiades cluster. Empirical sequence of the Pleiades cluster in ugrizYJHK and BT,VT,JHK photometric systems. (7 data files).

  5. DETAIL VIEW, WEST WALL OF THE WESTERN STOREROOM. THE MASONRY ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    DETAIL VIEW, WEST WALL OF THE WESTERN STOREROOM. THE MASONRY HEARTH SUPPORT AND RELIEVING ARCH FOR A CHIMNEY MASS PROBABLY NEVER FUNCTIONED AS ENVISIONED, RATHER THEY ARE LIKELY A REMNANT OF A BUILDING SCHEME ABANDONED DURING THE HOUSE’S INITIAL CONSTRUCTION - The Woodlands, 4000 Woodlands Avenue, Philadelphia, Philadelphia County, PA

  6. Simulations of Spray Reacting Flows in a Single Element LDI Injector With and Without Invoking an Eulerian Scalar PDF Method

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Liu, Nan-Suey

    2012-01-01

    This paper presents the numerical simulations of the Jet-A spray reacting flow in a single element lean direct injection (LDI) injector by using the National Combustion Code (NCC) with and without invoking the Eulerian scalar probability density function (PDF) method. The flow field is calculated by using the Reynolds averaged Navier-Stokes equations (RANS and URANS) with nonlinear turbulence models, and when the scalar PDF method is invoked, the energy and compositions or species mass fractions are calculated by solving the equation of an ensemble averaged density-weighted fine-grained probability density function that is referred to here as the averaged probability density function (APDF). A nonlinear model for closing the convection term of the scalar APDF equation is used in the presented simulations and will be briefly described. Detailed comparisons between the results and available experimental data are carried out. Some positive findings of invoking the Eulerian scalar PDF method in both improving the simulation quality and reducing the computing cost are observed.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aaltonen, T.

    We search for resonant production of tt̄ pairs in 4.8 fb^-1 integrated luminosity of pp̄ collision data at √s = 1.96 TeV in the lepton+jets decay channel, where one top quark decays leptonically and the other hadronically. A matrix element reconstruction technique is used; for each event a probability density function (pdf) of the tt̄ candidate invariant mass is sampled. These pdfs are used to construct a likelihood function, whereby the cross section for resonant tt̄ production is estimated, given a hypothetical resonance mass and width. The data indicate no evidence of resonant production of tt̄ pairs. A benchmark model of leptophobic Z' → tt̄ is excluded with m_Z' < 900 GeV at 95% confidence level.

  8. Generalization of multifractal theory within quantum calculus

    NASA Astrophysics Data System (ADS)

    Olemskoi, A.; Shuda, I.; Borisyuk, V.

    2010-03-01

    On the basis of the deformed series in quantum calculus, we generalize the partition function and the mass exponent of a multifractal, as well as the average of a random variable distributed over a self-similar set. For the partition function, such expansion is shown to be determined by binomial-type combinations of the Tsallis entropies related to manifold deformations, while the mass exponent expansion generalizes the known relation τ_q = D_q(q - 1). We find the equation for the set of averages related to ordinary, escort, and generalized probabilities in terms of the deformed expansion as well. Multifractals related to the Cantor binomial set, exchange currency series, and porous-surface condensates are considered as examples.
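
    A small sketch of the undeformed relation τ_q = D_q(q - 1) that the deformed expansion generalizes, evaluated for the binomial (Cantor-type) multiplicative measure mentioned above. The weights and refinement depth are illustrative; the explicit partition sum is compared against the closed form -log2(p^q + (1-p)^q).

        # Sketch: ordinary mass exponent tau_q for a binomial multiplicative measure,
        # computed from the partition sum over boxes of size 2**-n and compared
        # with the closed-form expression. All parameter values are illustrative.
        import numpy as np

        def binomial_measure(p, n):
            """Box masses of the binomial measure after n refinement steps."""
            masses = np.array([1.0])
            for _ in range(n):
                masses = np.concatenate([p * masses, (1.0 - p) * masses])
            return masses

        def tau_numeric(q, p, n=12):
            masses = binomial_measure(p, n)
            z_q = np.sum(masses ** q)          # partition function Z(q, eps)
            eps = 2.0 ** (-n)                  # box size
            return np.log(z_q) / np.log(eps)   # tau_q from Z(q, eps) ~ eps**tau_q

        p = 0.3
        for q in [-2.0, 0.0, 2.0, 4.0]:
            tau_exact = -np.log2(p ** q + (1 - p) ** q)
            print(q, tau_numeric(q, p), tau_exact)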

  9. Dung beetles as drivers of ecosystem multifunctionality: Are response and effect traits interwoven?

    PubMed

    Piccini, Irene; Nervo, Beatrice; Forshage, Mattias; Celi, Luisella; Palestrini, Claudia; Rolando, Antonio; Roslin, Tomas

    2018-03-01

    Rapid biodiversity loss has emphasized the need to understand how biodiversity affects the provisioning of ecological functions. Of particular interest are species and communities with versatile impacts on multiple parts of the environment, linking processes in the biosphere, lithosphere, and atmosphere to human interests in the anthroposphere (in this case, cattle farming). In this study, we examine the role of a specific group of insects - beetles feeding on cattle dung - on multiple ecological functions spanning these spheres (dung removal, soil nutrient content and greenhouse gas emissions). We ask whether the same traits which make species prone to extinction (i.e. response traits) may also affect their functional efficiency (as effect traits). To establish the link between response and effect traits, we first evaluated whether two traits (body mass and nesting strategy, the latter categorized as tunnelers or dwellers) affected the probability of a species being threatened. We then tested for a relationship between these traits and ecosystem functioning. Across Scandinavian dung beetle species, 75% of tunnelers and 30% of dwellers are classified as threatened. Hence, nesting strategy significantly affects the probability of a species being threatened, and constitutes a response trait. Effect traits varied with the ecological function investigated: density-specific dung removal was influenced by both nesting strategy and body mass, whereas methane emissions varied with body mass and nutrient recycling with nesting strategy. Our findings suggest that among Scandinavian dung beetles, nesting strategy is both a response and an effect trait, with tunnelers being more efficient in providing several ecological functions and also being more sensitive to extinction. Consequently, functionally important tunneler species have suffered disproportionate declines, and species not threatened today may be at risk of becoming so in the near future. This linkage between effect and response traits aggravates the consequences of ongoing biodiversity loss. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. The nuclear size and mass effects on muonic hydrogen-like atoms embedded in Debye plasma

    NASA Astrophysics Data System (ADS)

    Poszwa, A.; Bahar, M. K.; Soylu, A.

    2016-10-01

    Effects of finite nuclear size and finite nuclear mass are investigated for muonic atoms and muonic ions embedded in the Debye plasma. Both nuclear charge radii and nuclear masses are taken into account with experimentally determined values. In particular, isotope shifts of bound state energies, radial probability densities, transition energies, and binding energies for several atoms are studied as functions of Debye length. The theoretical model based on semianalytical calculations, the Sturmian expansion method, and the perturbative approach has been constructed, in the nonrelativistic frame. For some limiting cases, the comparison with previous most accurate literature results has been made.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adelman, Jahred A.

    A measurement of the top quark mass in pp̄ collisions at √s = 1.96 TeV is presented. The analysis uses a template method, in which the overconstrained kinematics of the Lepton+Jets channel of the tt̄ system are used to measure a single quantity, the reconstructed top quark mass, that is strongly correlated with the true top quark mass. In addition, the dijet mass of the hadronically decaying W boson is used to constrain in situ the uncertain jet energy scale in the CDF detector. Two-dimensional probability density functions are derived using a kernel density estimate-based machinery. Using 1.9 fb^-1 of data, the top quark mass is measured to be 171.8 +1.9/-1.9 (stat.) ± 1.0 (syst.) GeV/c^2.

  12. Adaptive Optics Near-Infrared Imaging of R136 in 30 Doradus: The Stellar Population of a Nearby Starburst

    NASA Astrophysics Data System (ADS)

    Brandl, B.; Sams, B. J.; Bertoldi, F.; Eckart, A.; Genzel, R.; Drapatz, S.; Hofmann, R.; Loewe, M.; Quirrenbach, A.

    1996-07-01

    We report 0".15 resolution near-infrared (NIR) imaging of R136, the central region of 30 Doradus in the Large Magellanic Cloud. Our 12".8 x 12".8 images were recorded with the MPE camera SHARP II at the 3.6 m ESO telescope, using the adaptive optics system COME ON+. The high spatial resolution and sensitivity (20th magnitude in K) of our observations allow our H- and K-band images to be compared and combined with recent Hubble Space Telescope (HST) WFPC2 data of R136. We fit theoretical models with variable foreground extinction to the observed magnitudes of ˜1000 stars (roughly half of which were detected in HST and NIR bands) and derive the stellar population in this starburst region. We find no red giants or supergiants; however, we detect ˜110 extremely red sources which are probably young, pre-main-sequence low- or intermediate-mass stars. We obtained narrow-band images to identify known and new Wolf-Rayet stars by their He II (2.189 μm) and Brγ (2.166 μm) emission lines. The presence of W-R stars and absence of red supergiants narrow the cluster age to ˜3-5 Myr, while the derived ratio of W-R to O stars of 0.05 in the central region favors an age of 3.5 Myr, with a relatively short starburst duration. For the O stars, the core radius is found to be 0.1 pc and appears to decrease with increasing stellar mass. The slope of the mass function is Γ = -1.6 on average, but it steepens with increasing distance from the cluster center from Γ = -1.3 in the inner 0.4 pc to Γ = -2.2 outside 0.8 pc for stars more massive than 12 Msun. The radial variation of the mass function reveals strong mass segregation that is probably due to the cluster's dynamical evolution.

  13. On the probability distribution function of the mass surface density of molecular clouds. II.

    NASA Astrophysics Data System (ADS)

    Fischera, Jörg

    2014-11-01

    The probability distribution function (PDF) of the mass surface density of molecular clouds provides essential information about the structure of molecular cloud gas and condensed structures out of which stars may form. In general, the PDF shows two basic components: a broad distribution around the maximum with resemblance to a log-normal function, and a tail at high mass surface densities attributed to turbulence and self-gravity. In a previous paper, the PDF of condensed structures has been analyzed and an analytical formula presented based on a truncated radial density profile, ρ(r) = ρ_c/(1 + (r/r_0)^2)^(n/2) with central density ρ_c and inner radius r_0, widely used in astrophysics as a generalization of physical density profiles. In this paper, the results are applied to analyze the PDF of self-gravitating, isothermal, pressurized, spherical (Bonnor-Ebert spheres) and cylindrical condensed structures with emphasis on the dependence of the PDF on the external pressure p_ext and on the overpressure q^-1 = p_c/p_ext, where p_c is the central pressure. Apart from individual clouds, we also consider ensembles of spheres or cylinders, where effects caused by a variation of pressure ratio, a distribution of condensed cores within a turbulent gas, and (in case of cylinders) a distribution of inclination angles on the mean PDF are analyzed. The probability distribution of pressure ratios q^-1 is assumed to be given by P(q^-1) ∝ q^(-k_1)/(1 + (q_0/q)^γ)^((k_1 + k_2)/γ), where k_1, γ, k_2, and q_0 are fixed parameters. The PDF of individual spheres with overpressures below ~100 is well represented by the PDF of a sphere with an analytical density profile with n = 3. At higher pressure ratios, the PDF at mass surface densities Σ ≪ Σ(0), where Σ(0) is the central mass surface density, asymptotically approaches the PDF of a sphere with n = 2. Consequently, the power-law asymptote at mass surface densities above the peak steepens from P_sph(Σ) ∝ Σ^-2 to P_sph(Σ) ∝ Σ^-3. The corresponding asymptote of the PDF of cylinders at large q^-1 is approximately given by P_cyl(Σ) ∝ Σ^(-4/3) (1 - (Σ/Σ(0))^(2/3))^(-1/2). The distribution of overpressures q^-1 produces a power-law asymptote at high mass surface densities given by ∝ Σ^(-2k_2 - 1) (spheres) or ∝ Σ^(-2k_2) (cylinders). Appendices are available in electronic form at http://www.aanda.org
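
    A rough numerical illustration of how the surface-density PDF of a single sphere with the analytic profile ρ(r) = ρ_c/(1 + (r/r_0)^2)^(n/2) can be tabulated: integrate the column density along lines of sight and histogram the result with annular area weights. The parameter values below are arbitrary, and the script is a sketch of the general construction, not the paper's analytical PDF.

        # Sketch: numerical PDF of the mass surface density Sigma for a truncated
        # sphere with rho(r) = rho_c / (1 + (r/r0)^2)^(n/2). Parameters are illustrative.
        import numpy as np

        rho_c, r0, n_exp, r_out = 1.0, 1.0, 3.0, 20.0   # central density, inner radius, exponent, truncation radius

        def column_density(b, num=2000):
            """Sigma(b): trapezoidal integral of rho along the line of sight at impact parameter b."""
            z_max = np.sqrt(max(r_out**2 - b**2, 0.0))
            z = np.linspace(-z_max, z_max, num)
            r = np.sqrt(b**2 + z**2)
            rho = rho_c / (1.0 + (r / r0) ** 2) ** (n_exp / 2.0)
            return np.sum(0.5 * (rho[1:] + rho[:-1]) * np.diff(z))

        b_grid = np.linspace(0.0, r_out, 4000)
        sigma = np.array([column_density(b) for b in b_grid])

        # PDF of Sigma over the projected area: weight each impact parameter by its annulus 2*pi*b*db
        mask = sigma > 0
        weights = 2.0 * np.pi * b_grid
        hist, edges = np.histogram(np.log10(sigma[mask]), bins=60,
                                   weights=weights[mask], density=True)
        peak = 0.5 * (edges[:-1] + edges[1:])[np.argmax(hist)]
        print("log10(Sigma) near the PDF peak:", peak)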

  14. Uncertainties in Galactic Chemical Evolution Models

    DOE PAGES

    Cote, Benoit; Ritter, Christian; Oshea, Brian W.; ...

    2016-06-15

    Here we use a simple one-zone galactic chemical evolution model to quantify the uncertainties generated by the input parameters in numerical predictions for a galaxy with properties similar to those of the Milky Way. We compiled several studies from the literature to gather the current constraints for our simulations regarding the typical value and uncertainty of the following seven basic parameters: the lower and upper mass limits of the stellar initial mass function (IMF), the slope of the high-mass end of the stellar IMF, the slope of the delay-time distribution function of Type Ia supernovae (SNe Ia), the number of SNe Ia per M⊙ formed, the total stellar mass formed, and the final mass of gas. We derived a probability distribution function to express the range of likely values for every parameter, which were then included in a Monte Carlo code to run several hundred simulations with randomly selected input parameters. This approach enables us to analyze the predicted chemical evolution of 16 elements in a statistical manner by identifying the most probable solutions along with their 68% and 95% confidence levels. Our results show that the overall uncertainties are shaped by several input parameters that individually contribute at different metallicities, and thus at different galactic ages. The level of uncertainty then depends on the metallicity and is different from one element to another. Among the seven input parameters considered in this work, the slope of the IMF and the number of SNe Ia are currently the two main sources of uncertainty. The thicknesses of the uncertainty bands bounded by the 68% and 95% confidence levels are generally within 0.3 and 0.6 dex, respectively. When looking at the evolution of individual elements as a function of galactic age instead of metallicity, those same thicknesses range from 0.1 to 0.6 dex for the 68% confidence levels and from 0.3 to 1.0 dex for the 95% confidence levels. The uncertainty in our chemical evolution model does not include uncertainties relating to stellar yields, star formation and merger histories, and modeling assumptions.
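
    The Monte Carlo strategy described above can be sketched generically: draw each input parameter from an assumed probability distribution, evaluate the model for every draw, and summarize the spread with 68% and 95% intervals. The distributions and the toy_abundance stand-in below are hypothetical and are not the one-zone model or parameter values of the paper.

        # Sketch: Monte Carlo propagation of input-parameter uncertainties.
        # The "model" is a trivial stand-in, not a chemical evolution code.
        import numpy as np

        rng = np.random.default_rng(0)
        n_draws = 1000

        # Hypothetical parameter distributions (illustrative values only)
        imf_slope = rng.normal(-2.35, 0.2, n_draws)                 # high-mass IMF slope
        n_snia_per_msun = rng.lognormal(np.log(1e-3), 0.3, n_draws) # SNe Ia per Msun formed
        total_mstar = rng.normal(5e10, 1e10, n_draws)               # total stellar mass formed [Msun]

        def toy_abundance(slope, n_ia, mstar):
            """Placeholder for a one-zone model output, e.g. a final abundance ratio."""
            return (0.1 * (slope + 2.35) + 0.3 * np.log10(n_ia / 1e-3)
                    + 0.2 * np.log10(mstar / 5e10))

        samples = toy_abundance(imf_slope, n_snia_per_msun, total_mstar)
        lo68, hi68 = np.percentile(samples, [15.87, 84.13])
        lo95, hi95 = np.percentile(samples, [2.275, 97.725])
        print(f"68% interval: [{lo68:.3f}, {hi68:.3f}]  95% interval: [{lo95:.3f}, {hi95:.3f}]")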

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cote, Benoit; Ritter, Christian; Oshea, Brian W.

    Here we use a simple one-zone galactic chemical evolution model to quantify the uncertainties generated by the input parameters in numerical predictions for a galaxy with properties similar to those of the Milky Way. We compiled several studies from the literature to gather the current constraints for our simulations regarding the typical value and uncertainty of the following seven basic parameters: the lower and upper mass limits of the stellar initial mass function (IMF), the slope of the high-mass end of the stellar IMF, the slope of the delay-time distribution function of Type Ia supernovae (SNe Ia), the number of SNe Ia per M⊙ formed, the total stellar mass formed, and the final mass of gas. We derived a probability distribution function to express the range of likely values for every parameter, which were then included in a Monte Carlo code to run several hundred simulations with randomly selected input parameters. This approach enables us to analyze the predicted chemical evolution of 16 elements in a statistical manner by identifying the most probable solutions along with their 68% and 95% confidence levels. Our results show that the overall uncertainties are shaped by several input parameters that individually contribute at different metallicities, and thus at different galactic ages. The level of uncertainty then depends on the metallicity and is different from one element to another. Among the seven input parameters considered in this work, the slope of the IMF and the number of SNe Ia are currently the two main sources of uncertainty. The thicknesses of the uncertainty bands bounded by the 68% and 95% confidence levels are generally within 0.3 and 0.6 dex, respectively. When looking at the evolution of individual elements as a function of galactic age instead of metallicity, those same thicknesses range from 0.1 to 0.6 dex for the 68% confidence levels and from 0.3 to 1.0 dex for the 95% confidence levels. The uncertainty in our chemical evolution model does not include uncertainties relating to stellar yields, star formation and merger histories, and modeling assumptions.

  16. A hydrodynamic treatment of the tilted cold dark matter cosmological scenario

    NASA Technical Reports Server (NTRS)

    Cen, Renyue; Ostriker, Jeremiah P.

    1993-01-01

    A standard hydrodynamic code coupled with a particle-mesh code is used to compute the evolution of a tilted cold dark matter (TCDM) model containing both baryonic matter and dark matter. Six baryonic species are followed, with allowance for both collisional and radiative ionization in every cell. The mean final Sunyaev-Zel'dovich y parameter is estimated to be (5.4 +/- 2.7) x 10^-7, below currently attainable observations, with an rms fluctuation of about (6.0 +/- 3.0) x 10^-7 on arcmin scales. The rate of galaxy formation peaks at a relatively late epoch (z is about 0.5). In the case of the mass function, the smallest objects are stabilized against collapse by thermal energy: the mass-weighted mass spectrum peaks in the vicinity of 10^9.1 solar masses, with a reasonable fit to the Schechter luminosity function if the baryon mass to blue light ratio is about 4. It is shown that the bias factor of 2 required for the model to be consistent with COBE DMR signals is probably a natural outcome in the present multiple component simulations.

  17. DECHADE: DEtecting slight Changes with HArd DEcisions in Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Ciuonzo, D.; Salvo Rossi, P.

    2018-07-01

    This paper focuses on the problem of change detection through a Wireless Sensor Network (WSN) whose nodes report only binary decisions (on the presence/absence of a certain event to be monitored), due to bandwidth/energy constraints. The resulting problem can be modelled as testing the equality of samples drawn from independent Bernoulli probability mass functions, when the bit probabilities under both hypotheses are not known. Both One-Sided (OS) and Two-Sided (TS) tests are considered, with reference to: (i) identical bit probability (a homogeneous scenario), (ii) different per-sensor bit probabilities (a non-homogeneous scenario) and (iii) regions with identical bit probability (a block-homogeneous scenario) for the observed samples. The goal is to provide a systematic framework collecting a plethora of viable detectors (designed via theoretically founded criteria) which can be used for each instance of the problem. Finally, verification of the derived detectors in two relevant WSN-related problems is provided to show the appeal of the proposed framework.
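
    One concrete instance of the testing problem described above, under stated assumptions: a two-sided generalized likelihood ratio test for the homogeneous scenario, asking whether the common bit probability of the sensors changed between a reference window and a test window of binary reports. This is a generic textbook GLRT with an asymptotic chi-square threshold, not necessarily one of the detectors derived in the paper; the counts and false-alarm level are illustrative.

        # Sketch: two-sided GLRT for equality of two Bernoulli pmfs with unknown
        # bit probabilities (homogeneous scenario). Counts are illustrative.
        import numpy as np
        from scipy.stats import chi2

        def bernoulli_loglik(k, n, p):
            """Log-likelihood of k ones in n Bernoulli trials, with probabilities clipped away from 0 and 1."""
            eps = 1e-12
            p = min(max(p, eps), 1.0 - eps)
            return k * np.log(p) + (n - k) * np.log(1.0 - p)

        def glrt_two_sided(k_ref, n_ref, k_test, n_test, alpha=0.01):
            p_ref, p_test = k_ref / n_ref, k_test / n_test
            p_pooled = (k_ref + k_test) / (n_ref + n_test)
            llr = (bernoulli_loglik(k_ref, n_ref, p_ref)
                   + bernoulli_loglik(k_test, n_test, p_test)
                   - bernoulli_loglik(k_ref, n_ref, p_pooled)
                   - bernoulli_loglik(k_test, n_test, p_pooled))
            statistic = 2.0 * llr
            threshold = chi2.ppf(1.0 - alpha, df=1)   # asymptotic null distribution
            return statistic, statistic > threshold

        stat, change_detected = glrt_two_sided(k_ref=12, n_ref=200, k_test=31, n_test=200)
        print(stat, change_detected)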

  18. Six-dimensional quantum dynamics study for the dissociative adsorption of DCl on Au(111) surface

    NASA Astrophysics Data System (ADS)

    Liu, Tianhui; Fu, Bina; Zhang, Dong H.

    2014-04-01

    We carried out six-dimensional quantum dynamics calculations for the dissociative adsorption of deuterium chloride (DCl) on Au(111) surface using the initial state-selected time-dependent wave packet approach. The four-dimensional dissociation probabilities are also obtained with the center of mass of DCl fixed at various sites. These calculations were all performed based on an accurate potential energy surface recently constructed by neural network fitting to density functional theory energy points. The origin of the extremely small dissociation probability for DCl/HCl (v = 0, j = 0) fixed at the top site compared to other fixed sites is elucidated in this study. The influence of vibrational excitation and rotational orientation of DCl on the reactivity was investigated by calculating six-dimensional dissociation probabilities. The vibrational excitation of DCl enhances the reactivity substantially and the helicopter orientation yields higher dissociation probability than the cartwheel orientation. The site-averaged dissociation probability over 25 fixed sites obtained from four-dimensional quantum dynamics calculations can accurately reproduce the six-dimensional dissociation probability.

  19. Six-dimensional quantum dynamics study for the dissociative adsorption of DCl on Au(111) surface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Tianhui; Fu, Bina, E-mail: bina@dicp.ac.cn, E-mail: zhangdh@dicp.ac.cn; Zhang, Dong H., E-mail: bina@dicp.ac.cn, E-mail: zhangdh@dicp.ac.cn

    We carried out six-dimensional quantum dynamics calculations for the dissociative adsorption of deuterium chloride (DCl) on Au(111) surface using the initial state-selected time-dependent wave packet approach. The four-dimensional dissociation probabilities are also obtained with the center of mass of DCl fixed at various sites. These calculations were all performed based on an accurate potential energy surface recently constructed by neural network fitting to density functional theory energy points. The origin of the extremely small dissociation probability for DCl/HCl (v = 0, j = 0) fixed at the top site compared to other fixed sites is elucidated in this study. The influence of vibrational excitation and rotational orientation of DCl on the reactivity was investigated by calculating six-dimensional dissociation probabilities. The vibrational excitation of DCl enhances the reactivity substantially and the helicopter orientation yields higher dissociation probability than the cartwheel orientation. The site-averaged dissociation probability over 25 fixed sites obtained from four-dimensional quantum dynamics calculations can accurately reproduce the six-dimensional dissociation probability.

  20. Multiple model cardinalized probability hypothesis density filter

    NASA Astrophysics Data System (ADS)

    Georgescu, Ramona; Willett, Peter

    2011-09-01

    The Probability Hypothesis Density (PHD) filter propagates the first-moment approximation to the multi-target Bayesian posterior distribution while the Cardinalized PHD (CPHD) filter propagates both the posterior likelihood of (an unlabeled) target state and the posterior probability mass function of the number of targets. Extensions of the PHD filter to the multiple model (MM) framework have been published and were implemented either with a Sequential Monte Carlo or a Gaussian Mixture approach. In this work, we introduce the multiple model version of the more elaborate CPHD filter. We present the derivation of the prediction and update steps of the MMCPHD particularized for the case of two target motion models and proceed to show that in the case of a single model, the new MMCPHD equations reduce to the original CPHD equations.
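
    A minimal sketch of one ingredient mentioned above, the propagation of the probability mass function of the number of targets: assuming independent target survival with probability p_s and an independent birth process with a known cardinality pmf, the predicted cardinality distribution is a binomial thinning of the prior followed by a convolution with the birth pmf. The values are illustrative, and this is not the full MMCPHD recursion of the paper.

        # Sketch: prediction of the target-count pmf via binomial thinning of the
        # prior cardinality and convolution with a birth pmf. Values are illustrative.
        import numpy as np
        from scipy.stats import binom

        def predict_cardinality(prior_pmf, p_survive, birth_pmf):
            n_max = len(prior_pmf) - 1
            survived = np.zeros(n_max + 1)
            for n, prob_n in enumerate(prior_pmf):        # thin each "n targets" hypothesis
                survived[:n + 1] += prob_n * binom.pmf(np.arange(n + 1), n, p_survive)
            predicted = np.convolve(survived, birth_pmf)  # add newborn targets
            return predicted / predicted.sum()

        prior = np.array([0.1, 0.3, 0.4, 0.2])            # P(N = 0..3) before prediction
        birth = np.array([0.9, 0.1])                      # at most one birth per scan
        print(predict_cardinality(prior, p_survive=0.95, birth_pmf=birth))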

  1. Multiple-solution problems in a statistics classroom: an example

    NASA Astrophysics Data System (ADS)

    Chu, Chi Wing; Chan, Kevin L. T.; Chan, Wai-Sum; Kwong, Koon-Shing

    2017-11-01

    The mathematics education literature shows that encouraging students to develop multiple solutions for given problems has a positive effect on students' understanding and creativity. In this paper, we present an example of multiple-solution problems in statistics involving a set of non-traditional dice. In particular, we consider the exact probability mass distribution for the sum of face values. Four different ways of solving the problem are discussed. The solutions span various basic concepts in different mathematical disciplines (sample space in probability theory, the probability generating function in statistics, integer partition in basic combinatorics and the individual risk model in actuarial science) and thus promote upper undergraduate students' awareness of knowledge connections between their courses. All solutions of the example are implemented using the R statistical software package.
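
    The generating-function solution mentioned above amounts to convolving the per-die probability mass functions, since multiplying probability generating functions multiplies polynomials whose coefficients are the pmfs. The paper implements its solutions in R; the sketch below is an analogous Python version using the Sicherman dice as a stand-in pair of non-traditional dice (not necessarily the set used in the paper).

        # Sketch: pmf of the sum of face values via convolution of per-die pmfs,
        # equivalent to multiplying probability generating functions.
        import numpy as np

        def die_pmf(faces, max_face):
            """pmf of a single fair die, indexed by face value 0..max_face."""
            pmf = np.zeros(max_face + 1)
            for face in faces:
                pmf[face] += 1.0 / len(faces)
            return pmf

        die_a = [1, 2, 2, 3, 3, 4]      # Sicherman dice: a standard example of
        die_b = [1, 3, 4, 5, 6, 8]      # non-traditional dice (illustrative choice)
        max_face = max(die_a + die_b)

        # Multiplying generating functions = convolving coefficient (probability) arrays
        sum_pmf = np.convolve(die_pmf(die_a, max_face), die_pmf(die_b, max_face))
        for total, prob in enumerate(sum_pmf):
            if prob > 0:
                print(total, round(prob, 4))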

  2. Plant litter functional diversity effects on litter mass loss depend on the macro-detritivore community.

    PubMed

    Patoine, Guillaume; Thakur, Madhav P; Friese, Julia; Nock, Charles; Hönig, Lydia; Haase, Josephine; Scherer-Lorenzen, Michael; Eisenhauer, Nico

    2017-11-01

    A better understanding of the mechanisms driving litter diversity effects on decomposition is needed to predict how biodiversity losses affect this crucial ecosystem process. In a microcosm study, we investigated the effects of litter functional diversity and two major groups of soil macro-detritivores on the mass loss of tree leaf litter mixtures. Furthermore, we tested the effects of litter trait community means and dissimilarity on litter mass loss for seven traits relevant to decomposition. We expected macro-detritivore effects on litter mass loss to be most pronounced in litter mixtures of high functional diversity. We used 24 leaf mixtures differing in functional diversity, which were composed of litter from four species from a pool of 16 common European tree species. Earthworms, isopods, or a combination of both were added to each litter combination for two months. Litter mass loss was significantly higher in the presence of earthworms than in that of isopods, whereas no synergistic effects of macro-detritivore mixtures were found. The effect of functional diversity of the litter material was highest in the presence of both macro-detritivore groups, supporting the notion that litter diversity effects are most pronounced in the presence of different detritivore species. Species-specific litter mass loss was explained by nutrient content, secondary compound concentration, and structural components. Moreover, dissimilarity in N concentrations increased litter mass loss, probably because detritivores had access to nutritionally diverse food sources. Furthermore, strong competition between the two macro-detritivores for soil surface litter resulted in a decrease of survival of both macro-detritivores. These results show that the effects of litter functional diversity on decomposition are contingent upon the macro-detritivore community and composition. We conclude that the temporal dynamics of litter trait diversity effects and their interaction with detritivore diversity are key to advancing our understanding of litter mass loss in nature. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Plant litter functional diversity effects on litter mass loss depend on the macro-detritivore community

    PubMed Central

    Patoine, Guillaume; Thakur, Madhav P.; Friese, Julia; Nock, Charles; Hönig, Lydia; Haase, Josephine; Scherer-Lorenzen, Michael; Eisenhauer, Nico

    2017-01-01

    A better understanding of the mechanisms driving litter diversity effects on decomposition is needed to predict how biodiversity losses affect this crucial ecosystem process. In a microcosm study, we investigated the effects of litter functional diversity and two major groups of soil macro-detritivores on the mass loss of tree leaf litter mixtures. Furthermore, we tested the effects of litter trait community means and dissimilarity on litter mass loss for seven traits relevant to decomposition. We expected macro-detritivore effects on litter mass loss to be most pronounced in litter mixtures of high functional diversity. We used 24 leaf mixtures differing in functional diversity, which were composed of litter from four species from a pool of 16 common European tree species. Earthworms, isopods, or a combination of both were added to each litter combination for two months. Litter mass loss was significantly higher in the presence of earthworms than in that of isopods, whereas no synergistic effects of macro-detritivore mixtures were found. The effect of functional diversity of the litter material was highest in the presence of both macro-detritivore groups, supporting the notion that litter diversity effects are most pronounced in the presence of different detritivore species. Species-specific litter mass loss was explained by nutrient content, secondary compound concentration, and structural components. Moreover, dissimilarity in N concentrations increased litter mass loss, probably because detritivores had access to nutritionally diverse food sources. Furthermore, strong competition between the two macro-detritivores for soil surface litter resulted in a decrease of survival of both macro-detritivores. These results show that the effects of litter functional diversity on decomposition are contingent upon the macro-detritivore community and composition. We conclude that the temporal dynamics of litter trait diversity effects and their interaction with detritivore diversity are key to advancing our understanding of litter mass loss in nature. PMID:29180828

  4. Viterbi Tracking of Randomly Phase-Modulated Data (and Related Topics).

    DTIC Science & Technology

    1982-08-10

    Denote the conditional probability mass function of θk, given Ak, by p(θk/Ak). For the (4, 4) diagram of Fig. 2(d), i, j even... The problem of FM demodulation has a long history...

  5. Exponential fading to white of black holes in quantum gravity

    NASA Astrophysics Data System (ADS)

    Barceló, Carlos; Carballo-Rubio, Raúl; Garay, Luis J.

    2017-05-01

    Quantization of the gravitational field may allow the existence of a decay channel of black holes into white holes with an explicit time-reversal symmetry. The definition of a meaningful decay probability for this channel is studied in spherically symmetric situations. As a first nontrivial calculation, we present the functional integration over a set of geometries using a single-variable function to interpolate between black-hole and white-hole geometries in a bounded region of spacetime. This computation gives a finite result which depends only on the Schwarzschild mass and a parameter measuring the width of the interpolating region. The associated probability distribution displays an exponential decay law in the latter parameter, with a mean lifetime inversely proportional to the Schwarzschild mass. In physical terms this would imply that matter collapsing to a black hole from a finite radius bounces back elastically and instantaneously, with negligible time delay as measured by external observers. These results invite us to reconsider the ultimate nature of astrophysical black holes, providing a possible mechanism for the formation of black stars instead of proper general relativistic black holes. The existence of both this decay channel and black stars can be tested in future observations of gravitational waves.

  6. Probability density function of a puff dispersing from the wall of a turbulent channel

    NASA Astrophysics Data System (ADS)

    Nguyen, Quoc; Papavassiliou, Dimitrios

    2015-11-01

    The study of dispersion of passive contaminants in turbulence has proved helpful in understanding fundamental heat and mass transfer phenomena. Many simulation and experimental works have been carried out to locate and track motions of scalar markers in a flow. One method is to combine Direct Numerical Simulation (DNS) and Lagrangian Scalar Tracking (LST) to record locations of markers. While this has proved to be useful, high computational cost remains a concern. In this study, we develop a model that could reproduce results obtained by DNS and LST for turbulent flow. Puffs of markers with different Schmidt numbers were released into a flow field at a frictional Reynolds number of 150. The point of release was at the channel wall, so that both diffusion and convection contribute to the puff dispersion pattern, defining different stages of dispersion. Based on outputs from DNS and LST, we seek the most suitable and feasible probability density function (PDF) that represents the distribution of markers in the flow field. The PDF would play a significant role in predicting heat and mass transfer in wall turbulence, and would prove to be helpful where DNS and LST are not always available.

  7. Maximum-entropy probability distributions under Lp-norm constraints

    NASA Technical Reports Server (NTRS)

    Dolinar, S.

    1991-01-01

    Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given L_p norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the L_p norm. The most interesting results are obtained and plotted for unconstrained (real valued) continuous random variables and for integer valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight line relationship between the maximum differential entropy and the logarithm of the L_p norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Such understanding is useful in evaluating the performance of data compression schemes.
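
    A small numerical check of the unconstrained continuous case summarized above, assuming the maximizer for a fixed pth absolute moment is the generalized normal density proportional to exp(-(|x|/a)^p): for that family the differential entropy minus the logarithm of the L_p norm should be a constant depending only on p, i.e. the straight-line relationship with unit slope. The script uses scipy's gennorm distribution and illustrative scale values; it is not a reproduction of the report's tabulated results.

        # Sketch: entropy of the generalized normal family vs. the log of its L_p
        # norm; the difference should be independent of the scale parameter.
        import numpy as np
        from scipy.stats import gennorm

        p = 3.0                                           # exponent of the L_p norm
        for scale in [0.5, 1.0, 2.0, 4.0]:
            dist = gennorm(beta=p, scale=scale)
            lp_norm = dist.expect(lambda x: abs(x) ** p) ** (1.0 / p)
            entropy = dist.entropy()
            # entropy - log(lp_norm) should be a constant independent of the scale
            print(f"scale={scale:4.1f}  log Lp-norm={np.log(lp_norm):+.4f}  "
                  f"entropy={entropy:+.4f}  difference={entropy - np.log(lp_norm):+.4f}")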

  8. Diastolic dysfunction is associated with insulin resistance, but not with aldosterone level in normotensive offspring of hypertensive families.

    PubMed

    Zizek, Bogomir; Poredos, Pavel; Trojar, Andrej; Zeljko, Tadej

    2008-01-01

    We investigated left ventricular (LV) morphology and function in association with insulin level/insulin resistance (IR) and aldosterone level in normotensive offspring of subjects with essential hypertension (familial trait, FT). The study encompassed 76 volunteers of whom 44 were normotensive with FT (aged 28-39 years) and 32 age-matched controls without FT. LV mass and function were measured using conventional echocardiography and tissue Doppler imaging. LV diastolic function was reported as peak septal annular velocities (E(m) and E(m)/A(m) ratio) in tissue Doppler imaging. Fasting insulin and aldosterone were determined. In subjects with FT, the LV mass was higher than in controls (92.14 +/- 24.02 vs. 70.08 +/- 20.58 g; p < 0.001). The study group had a worse LV diastolic function than control subjects (lower E(m) and E(m)/A(m) ratio; p < 0.001). In subjects with FT, the E(m)/A(m) ratio was independently associated with IR (partial p = 0.029 in multivariate model, R(2) = 0.51), but not with LV mass. The aldosterone level was comparable in both groups. In normotensive individuals with FT, LV morphological and functional abnormalities were found. LV dysfunction but not an increase in LV mass is associated with IR. The aldosterone level is probably not responsible for the development of early hypertensive heart disease. (c) 2008 S. Karger AG, Basel.

  9. THE DEPENDENCE OF PRESTELLAR CORE MASS DISTRIBUTIONS ON THE STRUCTURE OF THE PARENTAL CLOUD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parravano, Antonio; Sanchez, Nestor; Alfaro, Emilio J.

    2012-08-01

    The mass distribution of prestellar cores is obtained for clouds with arbitrary internal mass distributions using a selection criterion based on the thermal and turbulent Jeans mass and applied hierarchically from small to large scales. We have checked this methodology by comparing our results for a log-normal density probability distribution function with the theoretical core mass function (CMF) derived by Hennebelle and Chabrier, namely a power law at large scales and a log-normal cutoff at low scales, but our method can be applied to any mass distributions representing a star-forming cloud. This methodology enables us to connect the parental cloud structure with the mass distribution of the cores and their spatial distribution, providing an efficient tool for investigating the physical properties of the molecular clouds that give rise to the prestellar core distributions observed. Simulated fractional Brownian motion (fBm) clouds with the Hurst exponent close to the value H = 1/3 give the best agreement with the theoretical CMF derived by Hennebelle and Chabrier and Chabrier's system initial mass function. Likewise, the spatial distribution of the cores derived from our methodology shows a surface density of companions compatible with those observed in the Trapezium and Ophiuchus star-forming regions. This method also allows us to analyze the properties of the mass distribution of cores for different realizations. We found that the variations in the number of cores formed in different realizations of fBm clouds (with the same Hurst exponent) are much larger than the expected root N statistical fluctuations, increasing with H.

  10. The Dependence of Prestellar Core Mass Distributions on the Structure of the Parental Cloud

    NASA Astrophysics Data System (ADS)

    Parravano, Antonio; Sánchez, Néstor; Alfaro, Emilio J.

    2012-08-01

    The mass distribution of prestellar cores is obtained for clouds with arbitrary internal mass distributions using a selection criterion based on the thermal and turbulent Jeans mass and applied hierarchically from small to large scales. We have checked this methodology by comparing our results for a log-normal density probability distribution function with the theoretical core mass function (CMF) derived by Hennebelle & Chabrier, namely a power law at large scales and a log-normal cutoff at low scales, but our method can be applied to any mass distributions representing a star-forming cloud. This methodology enables us to connect the parental cloud structure with the mass distribution of the cores and their spatial distribution, providing an efficient tool for investigating the physical properties of the molecular clouds that give rise to the prestellar core distributions observed. Simulated fractional Brownian motion (fBm) clouds with the Hurst exponent close to the value H = 1/3 give the best agreement with the theoretical CMF derived by Hennebelle & Chabrier and Chabrier's system initial mass function. Likewise, the spatial distribution of the cores derived from our methodology shows a surface density of companions compatible with those observed in the Trapezium and Ophiuchus star-forming regions. This method also allows us to analyze the properties of the mass distribution of cores for different realizations. We found that the variations in the number of cores formed in different realizations of fBm clouds (with the same Hurst exponent) are much larger than the expected root N statistical fluctuations, increasing with H.

  11. Field emission electric propulsion thruster modeling and simulation

    NASA Astrophysics Data System (ADS)

    Vanderwyst, Anton Sivaram

    Electric propulsion allows space rockets a much greater range of capabilities with mass efficiencies that are 1.3 to 30 times greater than chemical propulsion. Field emission electric propulsion (FEEP) thrusters provide a specific design that possesses extremely high efficiency and small impulse bits. Depending on mass flow rate, these thrusters can emit both ions and droplets. To date, fundamental experimental work has been limited in FEEP. In particular, detailed individual droplet mechanics have yet to be understood. In this thesis, theoretical and computational investigations are conducted to examine the physical characteristics associated with droplet dynamics relevant to FEEP applications. Both asymptotic analysis and numerical simulations, based on a new approach combining level set and boundary element methods, were used to simulate 2D-planar and 2D-axisymmetric probability density functions of the droplets produced for a given geometry and electrode potential. The combined algorithm allows the simulation of electrostatically-driven liquids up to and after detachment. Second order accuracy in space is achieved using a volume of fluid correction. The simulations indicate that in general, (i) lowering surface tension, viscosity, and potential, or (ii) enlarging electrode rings, and needle tips reduce operational mass efficiency. Among these factors, surface tension and electrostatic potential have the largest impact. A probability density function for the mass to charge ratio (MTCR) of detached droplets is computed, with a peak around 4,000 atoms per electron. High impedance surfaces, strong electric fields, and large liquid surface tension result in a lower MTCR ratio, which governs FEEP droplet evolution via the charge on detached droplets and their corresponding acceleration. Due to the slow mass flow along a FEEP needle, viscosity is of less importance in altering the droplet velocities. The width of the needle, the composition of the propellant, the current and the mass efficiency are interrelated. The numerical simulations indicate that more electric power per Newton of thrust on a narrow needle with a thin, high surface tension fluid layer gives better performance.

  12. The devil is in the tails: the role of globular cluster mass evolution on stream properties

    NASA Astrophysics Data System (ADS)

    Balbinot, Eduardo; Gieles, Mark

    2018-02-01

    We present a study of the effects of collisional dynamics on the formation and detectability of cold tidal streams. A semi-analytical model for the evolution of the stellar mass function was implemented and coupled to a fast stellar stream simulation code, as well as the synthetic cluster evolution code EMACSS for the mass evolution as a function of a globular cluster orbit. We find that the increase in the average mass of the escaping stars for clusters close to dissolution has a major effect on the observable stream surface density. As an example, we show that Palomar 5 would have undetectable streams (in an SDSS-like survey) if it was currently three times more massive, despite the fact that a more massive cluster loses stars at a higher rate. This bias due to the preferential escape of low-mass stars offers an alternative to a dark matter halo associated with the cluster as an explanation for the absence of tails near massive clusters. We explore the orbits of a large sample of Milky Way globular clusters and derive their initial masses and remaining mass fraction. Using properties of known tidal tails, we explore regions of parameter space that favour the detectability of a stream. A list of high-probability candidates is discussed.

  13. A Stochastic Framework for Modeling the Population Dynamics of Convective Clouds

    DOE PAGES

    Hagos, Samson; Feng, Zhe; Plant, Robert S.; ...

    2018-02-20

    A stochastic prognostic framework for modeling the population dynamics of convective clouds and representing them in climate models is proposed. The framework follows the nonequilibrium statistical mechanical approach to constructing a master equation for representing the evolution of the number of convective cells of a specific size and their associated cloud-base mass flux, given a large-scale forcing. In this framework, referred to as STOchastic framework for Modeling Population dynamics of convective clouds (STOMP), the evolution of convective cell size is predicted from three key characteristics of convective cells: (i) the probability of growth, (ii) the probability of decay, and (iii) the cloud-base mass flux. STOMP models are constructed and evaluated against CPOL radar observations at Darwin and convection permitting model (CPM) simulations. Multiple models are constructed under various assumptions regarding these three key parameters and the realism of these models is evaluated. It is shown that in a model where convective plumes prefer to aggregate spatially and the cloud-base mass flux is a nonlinear function of convective cell area, the mass flux manifests a recharge-discharge behavior under steady forcing. Such a model also produces observed behavior of convective cell populations and CPM simulated cloud-base mass flux variability under diurnally varying forcing. Finally, in addition to its use in developing understanding of convection processes and the controls on convective cell size distributions, this modeling framework is also designed to serve as a nonequilibrium closure formulation for spectral mass flux parameterizations.
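
    A toy illustration of the master-equation idea, not the STOMP formulation itself (which also tracks cell size and cloud-base mass flux): a birth-death master equation for the probability mass function P(n) of the number of convective cells, with a constant birth rate and a per-cell decay rate, stepped forward in time until it approaches its steady state. All rates are made up for the example.

        # Sketch: explicit time-stepping of a birth-death master equation for the
        # pmf of the number of cells. Rates and grid sizes are illustrative.
        import numpy as np

        n_max, birth_rate, decay_rate, dt, n_steps = 60, 5.0, 0.5, 1e-3, 20000
        prob = np.zeros(n_max + 1)
        prob[0] = 1.0                                             # start with no convective cells

        n = np.arange(n_max + 1)
        for _ in range(n_steps):
            gain_from_below = np.zeros_like(prob)
            gain_from_below[1:] = birth_rate * prob[:-1]          # n-1 -> n by a birth
            gain_from_above = np.zeros_like(prob)
            gain_from_above[:-1] = decay_rate * n[1:] * prob[1:]  # n+1 -> n by a decay
            loss = (birth_rate + decay_rate * n) * prob           # leaving state n
            prob = prob + dt * (gain_from_below + gain_from_above - loss)
            prob = np.clip(prob, 0.0, None)
            prob /= prob.sum()

        # Steady-state mean approaches birth_rate / decay_rate for this toy process
        print("mean number of cells:", (n * prob).sum())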

  14. A Stochastic Framework for Modeling the Population Dynamics of Convective Clouds

    NASA Astrophysics Data System (ADS)

    Hagos, Samson; Feng, Zhe; Plant, Robert S.; Houze, Robert A.; Xiao, Heng

    2018-02-01

    A stochastic prognostic framework for modeling the population dynamics of convective clouds and representing them in climate models is proposed. The framework follows the nonequilibrium statistical mechanical approach to constructing a master equation for representing the evolution of the number of convective cells of a specific size and their associated cloud-base mass flux, given a large-scale forcing. In this framework, referred to as STOchastic framework for Modeling Population dynamics of convective clouds (STOMP), the evolution of convective cell size is predicted from three key characteristics of convective cells: (i) the probability of growth, (ii) the probability of decay, and (iii) the cloud-base mass flux. STOMP models are constructed and evaluated against CPOL radar observations at Darwin and convection permitting model (CPM) simulations. Multiple models are constructed under various assumptions regarding these three key parameters and the realism of these models is evaluated. It is shown that in a model where convective plumes prefer to aggregate spatially and the cloud-base mass flux is a nonlinear function of convective cell area, the mass flux manifests a recharge-discharge behavior under steady forcing. Such a model also produces observed behavior of convective cell populations and CPM simulated cloud-base mass flux variability under diurnally varying forcing. In addition to its use in developing understanding of convection processes and the controls on convective cell size distributions, this modeling framework is also designed to serve as a nonequilibrium closure formulation for spectral mass flux parameterizations.

  15. A Stochastic Framework for Modeling the Population Dynamics of Convective Clouds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hagos, Samson; Feng, Zhe; Plant, Robert S.

    A stochastic prognostic framework for modeling the population dynamics of convective clouds and representing them in climate models is proposed. The approach used follows the non-equilibrium statistical mechanical approach through a master equation. The aim is to represent the evolution of the number of convective cells of a specific size and their associated cloud-base mass flux, given a large-scale forcing. In this framework, referred to as STOchastic framework for Modeling Population dynamics of convective clouds (STOMP), the evolution of convective cell size is predicted from three key characteristics: (i) the probability of growth, (ii) the probability of decay, and (iii) the cloud-base mass flux. STOMP models are constructed and evaluated against CPOL radar observations at Darwin and convection permitting model (CPM) simulations. Multiple models are constructed under various assumptions regarding these three key parameters and the realism of these models is evaluated. It is shown that in a model where convective plumes prefer to aggregate spatially and mass flux is a non-linear function of convective cell area, mass flux manifests a recharge-discharge behavior under steady forcing. Such a model also produces observed behavior of convective cell populations and CPM simulated mass flux variability under diurnally varying forcing. Besides its use in developing understanding of convection processes and the controls on convective cell size distributions, this modeling framework is also designed to be capable of providing alternative, non-equilibrium, closure formulations for spectral mass flux parameterizations.

  16. A Stochastic Framework for Modeling the Population Dynamics of Convective Clouds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hagos, Samson; Feng, Zhe; Plant, Robert S.

    A stochastic prognostic framework for modeling the population dynamics of convective clouds and representing them in climate models is proposed. The framework follows the nonequilibrium statistical mechanical approach to constructing a master equation for representing the evolution of the number of convective cells of a specific size and their associated cloud-base mass flux, given a large-scale forcing. In this framework, referred to as STOchastic framework for Modeling Population dynamics of convective clouds (STOMP), the evolution of convective cell size is predicted from three key characteristics of convective cells: (i) the probability of growth, (ii) the probability of decay, and (iii) the cloud-base mass flux. STOMP models are constructed and evaluated against CPOL radar observations at Darwin and convection permitting model (CPM) simulations. Multiple models are constructed under various assumptions regarding these three key parameters and the realism of these models is evaluated. It is shown that in a model where convective plumes prefer to aggregate spatially and the cloud-base mass flux is a nonlinear function of convective cell area, the mass flux manifests a recharge-discharge behavior under steady forcing. Such a model also produces observed behavior of convective cell populations and CPM simulated cloud-base mass flux variability under diurnally varying forcing. Finally, in addition to its use in developing understanding of convection processes and the controls on convective cell size distributions, this modeling framework is also designed to serve as a nonequilibrium closure formulation for spectral mass flux parameterizations.

  17. A wide deep infrared look at the Pleiades with UKIDSS: new constraints on the substellar binary fraction and the low-mass initial mass function

    NASA Astrophysics Data System (ADS)

    Lodieu, N.; Dobbie, P. D.; Deacon, N. R.; Hodgkin, S. T.; Hambly, N. C.; Jameson, R. F.

    2007-09-01

    We present the results of a deep wide-field near-infrared survey of 12 deg2 of the Pleiades conducted as part of the United Kingdom Infrared Telescope (UKIRT) Infrared Deep Sky Survey (UKIDSS) Galactic Cluster Survey (GCS). We have extracted over 340 high-probability proper motion (PM) members down to 0.03 Msolar using a combination of UKIDSS photometry and PM measurements obtained by cross-correlating the GCS with data from the Two Micron All Sky Survey, the Isaac Newton Telescope and the Canada-France-Hawaii Telescope. Additionally, we have unearthed 73 new candidate brown dwarf (BD) members on the basis of five-band UKIDSS photometry alone. We have identified 23 substellar multiple system candidates out of 63 candidate BDs from the (Y - K, Y) and (J - K, J) colour-magnitude diagrams, yielding a binary frequency of 28-44 per cent in the 0.075-0.030 Msolar mass range. Our estimate is three times larger than the binary fractions reported from high-resolution imaging surveys of field ultracool dwarfs and Pleiades BDs. However, it is marginally consistent with our earlier `peculiar' photometric binary fraction of 50 +/- 10 per cent presented by Pinfield et al., in good agreement with the 32-45 per cent binary fraction derived from the recent Monte Carlo simulations of Maxted & Jeffries and compatible with the 26 +/- 10 per cent frequency recently estimated by Basri & Reiners. A tentative estimate of the mass ratios from photometry alone seems to support the hypothesis that binary BDs tend to reside in near equal-mass ratio systems. In addition, the recovery of four Pleiades members targeted by high-resolution imaging surveys for multiplicity studies suggests that half of the binary candidates may have separations below the resolution limit of the Hubble Space Telescope or current adaptive optics facilities at the distance of the Pleiades (a ~7 au). Finally, we have derived luminosity and mass functions from the sample of photometric candidates with membership probabilities. The mass function is well modelled by a lognormal peaking at 0.24 Msolar and is in agreement with previous studies in the Pleiades.

  18. Source apportionment of PM10 and PM2.5 in major urban Greek agglomerations using a hybrid source-receptor modeling process.

    PubMed

    Argyropoulos, G; Samara, C; Diapouli, E; Eleftheriadis, K; Papaoikonomou, K; Kungolos, A

    2017-12-01

    A hybrid source-receptor modeling process was assembled, to apportion and infer source locations of PM 10 and PM 2.5 in three heavily-impacted urban areas of Greece, during the warm period of 2011, and the cold period of 2012. The assembled process involved application of an advanced computational procedure, the so-called Robotic Chemical Mass Balance (RCMB) model. Source locations were inferred using two well-established probability functions: (a) the Conditional Probability Function (CPF), to correlate the output of RCMB with local wind directional data, and (b) the Potential Source Contribution Function (PSCF), to correlate the output of RCMB with 72h air-mass back-trajectories, arriving at the receptor sites, during sampling. Regarding CPF, a higher-level conditional probability function was defined as well, from the common locus of CPF sectors derived for neighboring receptor sites. With respect to PSCF, a non-parametric bootstrapping method was applied to discriminate the statistically significant values. RCMB modeling showed that resuspended dust is actually one of the main barriers for attaining the European Union (EU) limit values in Mediterranean urban agglomerations, where the drier climate favors build-up. The shift in the energy mix of Greece (caused by the economic recession) was also evidenced, since biomass burning was found to contribute more significantly to the sampling sites belonging to the coldest climatic zone, particularly during the cold period. The CPF analysis showed that short-range transport of anthropogenic emissions from urban traffic to urban background sites was very likely to have occurred, within all the examined urban agglomerations. The PSCF analysis confirmed that long-range transport of primary and/or secondary aerosols may indeed be possible, even from distances over 1000km away from study areas. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Estimated Probability of a Cervical Spine Injury During an ISS Mission

    NASA Technical Reports Server (NTRS)

    Brooker, John E.; Weaver, Aaron S.; Myers, Jerry G.

    2013-01-01

    Introduction: The Integrated Medical Model (IMM) utilizes historical data, cohort data, and external simulations as input factors to provide estimates of crew health, resource utilization and mission outcomes. The Cervical Spine Injury Module (CSIM) is an external simulation designed to provide the IMM with parameter estimates for 1) a probability distribution function (PDF) of the incidence rate, 2) the mean incidence rate, and 3) the standard deviation associated with the mean resulting from injury/trauma of the neck. Methods: An injury mechanism based on an idealized low-velocity blunt impact to the superior posterior thorax of an ISS crewmember was used as the simulated mission environment. As a result of this impact, the cervical spine is inertially loaded from the mass of the head producing an extension-flexion motion deforming the soft tissues of the neck. A multibody biomechanical model was developed to estimate the kinematic and dynamic response of the head-neck system from a prescribed acceleration profile. Logistic regression was performed on a dataset containing AIS1 soft tissue neck injuries from rear-end automobile collisions with published Neck Injury Criterion values producing an injury transfer function (ITF). An injury event scenario (IES) was constructed such that crew 1 is moving through a primary or standard translation path transferring large volume equipment impacting stationary crew 2. The incidence rate for this IES was estimated from in-flight data and used to calculate the probability of occurrence. The uncertainties in the model input factors were estimated from representative datasets and expressed in terms of probability distributions. A Monte Carlo Method utilizing simple random sampling was employed to propagate both aleatory and epistemic uncertain factors. Scatterplots and partial correlation coefficients (PCC) were generated to determine input factor sensitivity. CSIM was developed in the SimMechanics/Simulink environment with a Monte Carlo wrapper (MATLAB) used to integrate the components of the module. Results: The probability of generating an AIS1 soft tissue neck injury from the extension/flexion motion induced by a low-velocity blunt impact to the superior posterior thorax was fitted with a lognormal PDF with mean 0.26409, standard deviation 0.11353, standard error of mean 0.00114, and 95% confidence interval [0.26186, 0.26631]. The combination of the probability of an AIS1 injury with the probability of IES occurrence was fitted with a Johnson SI PDF with mean 0.02772, standard deviation 0.02012, standard error of mean 0.00020, and 95% confidence interval [0.02733, 0.02812]. The input factor sensitivity analysis in descending order was IES incidence rate, ITF regression coefficient 1, impactor initial velocity, ITF regression coefficient 2, and all others (equipment mass, crew 1 body mass, crew 2 body mass) insignificant. Verification and Validation (V&V): The IMM V&V, based upon NASA STD 7009, was implemented, which included an assessment of the data sets used to build CSIM. The documentation maintained includes source code comments and a technical report. The software code and documentation are under Subversion configuration management. Kinematic validation was performed by comparing the biomechanical model output to established corridors.
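
    A small sketch of how a fitted lognormal PDF like the one quoted above can be re-parameterized and sampled when propagating it through a larger Monte Carlo model: convert the reported mean and standard deviation into the parameters of the underlying normal distribution and draw samples. The code reuses the quoted mean and standard deviation but is only an illustration; it is not the CSIM implementation.

        # Sketch: re-parameterize a lognormal from its reported mean/std and sample it.
        import numpy as np

        mean, std = 0.26409, 0.11353                      # fitted lognormal mean and std quoted above
        sigma2 = np.log(1.0 + (std / mean) ** 2)          # variance of the underlying normal
        mu = np.log(mean) - 0.5 * sigma2                  # mean of the underlying normal

        rng = np.random.default_rng(1)
        samples = rng.lognormal(mean=mu, sigma=np.sqrt(sigma2), size=100_000)
        print(samples.mean(), samples.std())              # should recover ~0.264 and ~0.114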

  20. A fully traits-based approach to modeling global vegetation distribution.

    PubMed

    van Bodegom, Peter M; Douma, Jacob C; Verheijen, Lieneke M

    2014-09-23

    Dynamic Global Vegetation Models (DGVMs) are indispensable for our understanding of climate change impacts. The application of traits in DGVMs is increasingly refined. However, a comprehensive analysis of the direct impacts of trait variation on global vegetation distribution does not yet exist. Here, we present such analysis as proof of principle. We run regressions of trait observations for leaf mass per area, stem-specific density, and seed mass from a global database against multiple environmental drivers, making use of findings of global trait convergence. This analysis explained up to 52% of the global variation of traits. Global trait maps, generated by coupling the regression equations to gridded soil and climate maps, showed up to orders of magnitude variation in trait values. Subsequently, nine vegetation types were characterized by the trait combinations that they possess using Gaussian mixture density functions. The trait maps were input to these functions to determine global occurrence probabilities for each vegetation type. We prepared vegetation maps, assuming that the most probable (and thus, most suited) vegetation type at each location will be realized. This fully traits-based vegetation map predicted 42% of the observed vegetation distribution correctly. Our results indicate that a major proportion of the predictive ability of DGVMs with respect to vegetation distribution can be attained by three traits alone if traits like stem-specific density and seed mass are included. We envision that our traits-based approach, our observation-driven trait maps, and our vegetation maps may inspire a new generation of powerful traits-based DGVMs.
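    A minimal sketch of the classification step described above: trait vectors are scored against per-vegetation-type density functions and the most probable type is assigned. Single multivariate Gaussians stand in here for the paper's Gaussian mixtures, and all means, covariances, and type names are invented for illustration.

    ```python
    import numpy as np
    from scipy.stats import multivariate_normal

    # hypothetical densities over (log leaf mass per area, log stem-specific density, log seed mass)
    veg_types = {
        "tropical forest": multivariate_normal(mean=[4.3, -0.4, 1.0], cov=np.diag([0.2, 0.05, 0.8])),
        "tundra":          multivariate_normal(mean=[4.8, -0.8, -1.5], cov=np.diag([0.3, 0.05, 0.5])),
    }

    def most_probable_type(trait_vector, priors=None):
        """Return the most probable vegetation type and the occurrence probabilities."""
        names = list(veg_types)
        priors = priors or {k: 1.0 / len(names) for k in names}
        dens = np.array([veg_types[k].pdf(trait_vector) * priors[k] for k in names])
        probs = dens / dens.sum()
        return names[int(np.argmax(probs))], dict(zip(names, probs))

    print(most_probable_type([4.4, -0.5, 0.8]))
    ```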

  1. Probability of lensing magnification by cosmologically distributed galaxies

    NASA Technical Reports Server (NTRS)

    Pei, Yichuan C.

    1993-01-01

    We present the analytical formulae for computing the magnification probability caused by cosmologically distributed galaxies. The galaxies are assumed to be singular, truncated isothermal spheres with neither evolution nor clustering in redshift. We find that, for a fixed total mass, extended galaxies produce a broader shape in the magnification probability distribution and hence are less efficient as gravitational lenses than compact galaxies. The high-magnification tail caused by large galaxies is well approximated by an A^(-3) form, while the tail caused by small galaxies is slightly shallower. The mean magnification as a function of redshift is, however, found to be independent of the size of the lensing galaxies. In terms of flux conservation, our formulae for the isothermal galaxy model predict a mean magnification that agrees to within a few percent with that of the Dyer-Roeder model of a clumpy universe.

  2. Survival and breeding advantages of larger Black Brant (Branta bernicla nigricans) goslings: Within- and among-cohort variation

    USGS Publications Warehouse

    Sedinger, J.S.; Chelgren, N.D.

    2007-01-01

    We examined the relationship between mass late in the first summer and survival and return to the natal breeding colony for 12 cohorts (1986-1997) of female Black Brant (Branta bernicla nigricans). We used Cormack-Jolly-Seber methods and the program MARK to analyze capture-recapture data. Models included two kinds of residuals from regressions of mass on days after peak of hatch when goslings were measured; one based on the entire sample (12 cohorts) and the other based only on individuals in the same cohort. Some models contained date of peak of hatch (a group covariate related to lateness of nesting in that year) and mean cohort residual mass. Finally, models allowed survival to vary among cohorts. The best model of encounter probability included an effect of residual mass on encounter probability and allowed encounter probability to vary among age classes and across years. All competitive models contained an effect of one of the estimates of residual mass; relatively larger goslings survived their first year at higher rates. Goslings in cohorts from later years in the analysis tended to have lower first-year survival, after controlling for residual mass, which reflected the generally smaller mean masses for these cohorts but was potentially also a result of population-density effects additional to those on growth. Variation among cohorts in mean mass accounted for 56% of variation among cohorts in first-year survival. Encounter probabilities, which were correlated with breeding probability, increased with relative mass, which suggests that larger goslings not only survived at higher rates but also bred at higher rates. Although our findings support the well-established linkage between gosling mass and fitness, they suggest that additional environmental factors also influence first-year survival.

  3. Massive Star Cluster Populations in Irregular Galaxies as Probable Younger Counterparts of Old Metal-rich Globular Cluster Populations in Spheroids

    NASA Astrophysics Data System (ADS)

    Kravtsov, V. V.

    2006-09-01

    Peak metallicities of metal-rich populations of globular clusters (MRGCs) belonging to early-type galaxies and spheroidal subsystems of spiral galaxies (spheroids) of different mass fall within the somewhat conservative -0.7<=[Fe/H]<=-0.3 range. Indeed, if possible age effects are taken into account, this metallicity range might become smaller. Irregular galaxies such as the Large Magellanic Cloud (LMC), with longer timescales of formation and lower star formation (SF) efficiency, do not contain old MRGCs with [Fe/H]>-1.0, but they are observed to form populations of young/intermediate-age massive star clusters (MSCs) with masses exceeding 10^4 Msolar. Their formation is widely believed to be an accidental process fully dependent on external factors. From the analysis of available data on the populations and their hosts, including intermediate-age populous star clusters in the LMC, we find that their most probable mean metallicities fall within -0.7<=[Fe/H]<=-0.3, as the peak metallicities of MRGCs do, irrespective of signs of interaction. Moreover, both the disk giant metallicity distribution function (MDF) in the LMC and the MDFs for old giants in the halos of massive spheroids exhibit a significant increase toward [Fe/H]~-0.5. That is in agreement with a correlation found between SF activity in galaxies and their metallicity. The formation of both the old MRGCs in spheroids and MSC populations in irregular galaxies probably occurs at approximately the same stage of the host galaxies' chemical evolution and is related to the essentially increased SF activity in the hosts around the same metallicity that is achieved very early in massive spheroids, later in lower mass spheroids, and much later in irregular galaxies. Changes in the interstellar dust, particularly in elemental abundances in dust grains and in the mass distribution function of the grains, may be among the factors regulating star and MSC formation activity in galaxies. Strong interactions and mergers affecting the MSC formation presumably play an additional role, although they can substantially intensify the internally regulated MSC formation process. Several implications of our suggestions are briefly discussed.

  4. The baryonic mass function of galaxies.

    PubMed

    Read, J I; Trentham, Neil

    2005-12-15

    In the Big Bang about 5% of the mass that was created was in the form of normal baryonic matter (neutrons and protons). Of this about 10% ended up in galaxies in the form of stars or of gas (that can be in molecules, can be atomic, or can be ionized). In this work, we measure the baryonic mass function of galaxies, which describes how the baryonic mass is distributed within galaxies of different types (e.g. spiral or elliptical) and of different sizes. This can provide useful constraints on our current cosmology, convolved with our understanding of how galaxies form. This work relies on various large astronomical surveys, e.g. the optical Sloan Digital Sky Survey (to observe stars) and the HIPASS radio survey (to observe atomic gas). We then perform an integral over our mass function to determine the cosmological density of baryons in galaxies: Omega(b,gal)=0.0035. Most of these baryons are in stars: Omega(*)=0.0028. Only about 20% are in gas. The error on the quantities, as determined from the range obtained between different methods, is ca 10%; systematic errors may be much larger. Most (ca 90%) of the baryons in the Universe are not in galaxies. They probably exist in a warm/hot intergalactic medium. Searching for direct observational evidence and deeper theoretical understanding for this will form one of the major challenges for astronomy in the next decade.
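    A hedged sketch of the final integration step described above: integrating an assumed Schechter-form baryonic mass function to obtain the cosmological baryon density in galaxies, Omega_b,gal = (1/rho_crit) * integral of M * phi(log M) dlog M. The Schechter parameters and rho_crit value below are rough placeholders, not the values derived in the paper.

    ```python
    import numpy as np
    from scipy import integrate

    phi_star = 0.005      # normalization, Mpc^-3 dex^-1 (placeholder)
    m_star = 10**10.7     # characteristic baryonic mass in Msun (placeholder)
    alpha = -1.2          # low-mass slope (placeholder)
    rho_crit = 1.4e11     # critical density, ~Msun Mpc^-3 for h ~ 0.7 (approximate)

    def phi(m):
        """Schechter mass function per dex, evaluated at baryonic mass m (Msun)."""
        x = m / m_star
        return phi_star * np.log(10) * x**(alpha + 1) * np.exp(-x)

    # rho_gal = int M * phi(log M) dlog M, integrated over log10 M from 10^7 to 10^13 Msun
    rho_gal, _ = integrate.quad(lambda logm: phi(10**logm) * 10**logm, 7, 13)
    omega_b_gal = rho_gal / rho_crit
    print(omega_b_gal)
    ```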

  5. Orders of Magnitude Extension of the Effective Dynamic Range of TDC-Based TOFMS Data Through Maximum Likelihood Estimation

    NASA Astrophysics Data System (ADS)

    Ipsen, Andreas; Ebbels, Timothy M. D.

    2014-10-01

    In a recent article, we derived a probability distribution that was shown to closely approximate that of the data produced by liquid chromatography time-of-flight mass spectrometry (LC/TOFMS) instruments employing time-to-digital converters (TDCs) as part of their detection system. The approach of formulating detailed and highly accurate mathematical models of LC/MS data via probability distributions that are parameterized by quantities of analytical interest does not appear to have been fully explored before. However, we believe it could lead to a statistically rigorous framework for addressing many of the data analytical problems that arise in LC/MS studies. In this article, we present new procedures for correcting for TDC saturation using such an approach and demonstrate that there is potential for significant improvements in the effective dynamic range of TDC-based mass spectrometers, which could make them much more competitive with the alternative analog-to-digital converters (ADCs). The degree of improvement depends on our ability to generate mass and chromatographic peaks that conform to known mathematical functions and our ability to accurately describe the state of the detector dead time—tasks that may be best addressed through engineering efforts.
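    An illustrative sketch only, not the authors' maximum-likelihood procedure: the classic Poisson-style correction for a TDC in which each push records at most one hit per flight-time bin. If n of N pushes record an ion in a bin, the maximum-likelihood estimate of the mean ion arrival rate per push is lambda = -ln(1 - n/N), which undoes the saturation of the raw count at large signals.

    ```python
    import numpy as np

    def tdc_corrected_rate(hits, pushes):
        """Estimated mean ions per push for each flight-time bin, from raw hit counts."""
        hits = np.asarray(hits, dtype=float)
        frac = np.clip(hits / pushes, 0.0, 1.0 - 1e-9)   # avoid log(0) at full saturation
        return -np.log1p(-frac)                          # -ln(1 - n/N)

    raw = np.array([10, 500, 990])        # hits in three bins out of 1000 pushes
    print(tdc_corrected_rate(raw, 1000))  # corrected rates exceed raw fractions as saturation grows
    ```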

  6. Exercise, dietary obesity, and growth in the rat

    NASA Technical Reports Server (NTRS)

    Pitts, G. C.; Bull, L. S.

    1977-01-01

    Experiments were conducted on weanling male rats 35 days old and weighing about 100 g to determine how endurance-type exercise and high-fat diet administered during growth influence body mass and composition. The animals were divided into four weight-matched groups of 25 animals each: group I - high-fat diet, exercised; group II - chow, exercised; group III - high-fat diet, sedentary; and group IV - chow, sedentary. During growth, masses of water, muscle and skin increased as functions of body size; bone as a function of age; and heart, liver, gut, testes, and CNS were affected by combinations of size, age, activity, and diet. Major conclusions are that growth in body size is expressed more precisely with fat-free body mass (FFBM), that late rectilinear growth is probably attributable to fat accretion, and that the observed influences on FFBM of exercise and high-fat diet are obtained only if the regimen is started at or before age 5-7 weeks.

  7. Characterization of some bacteriocins produced by lactic acid bacteria isolated from fermented foods.

    PubMed

    Grosu-Tudor, Silvia-Simona; Stancu, Mihaela-Marilena; Pelinescu, Diana; Zamfir, Medana

    2014-09-01

    Lactic acid bacteria (LAB) isolated from different sources (dairy products, fruits, fresh and fermented vegetables, fermented cereals) were screened for antimicrobial activity against other bacteria, including potential pathogens and food spoiling bacteria. Six strains have been shown to produce bacteriocins: Lactococcus lactis 19.3, Lactobacillus plantarum 26.1, Enterococcus durans 41.2, isolated from dairy products and Lactobacillus amylolyticus P40 and P50, and Lactobacillus oris P49, isolated from bors. Among the six bacteriocins, there were both heat stable, low molecular mass polypeptides, with a broad inhibitory spectrum, probably belonging to class II bacteriocins, and heat labile, high molecular mass proteins, with a very narrow inhibitory spectrum, most probably belonging to class III bacteriocins. A synergistic effect of some bacteriocins mixtures was observed. We can conclude that fermented foods are still important sources of new functional LAB. Among the six characterized bacteriocins, there might be some novel compounds with interesting features. Moreover, the bacteriocin-producing strains isolated in our study may find applications as protective cultures.

  8. Incorporating sequence information into the scoring function: a hidden Markov model for improved peptide identification.

    PubMed

    Khatun, Jainab; Hamlett, Eric; Giddings, Morgan C

    2008-03-01

    The identification of peptides by tandem mass spectrometry (MS/MS) is a central method of proteomics research, but due to the complexity of MS/MS data and the large databases searched, the accuracy of peptide identification algorithms remains limited. To improve the accuracy of identification we applied a machine-learning approach using a hidden Markov model (HMM) to capture the complex and often subtle links between a peptide sequence and its MS/MS spectrum. Our model, HMM_Score, represents ion types as HMM states and calculates the maximum joint probability for a peptide/spectrum pair using emission probabilities from three factors: the amino acids adjacent to each fragmentation site, the mass dependence of ion types and the intensity dependence of ion types. The Viterbi algorithm is used to calculate the most probable assignment between ion types in a spectrum and a peptide sequence, then a correction factor is added to account for the propensity of the model to favor longer peptides. An expectation value is calculated based on the model score to assess the significance of each peptide/spectrum match. We trained and tested HMM_Score on three data sets generated by two different mass spectrometer types. For a reference data set recently reported in the literature and validated using seven identification algorithms, HMM_Score produced 43% more positive identification results at a 1% false positive rate than the best of two other commonly used algorithms, Mascot and X!Tandem. HMM_Score is a highly accurate platform for peptide identification that works well for a variety of mass spectrometer and biological sample types. The program is freely available on ProteomeCommons via an OpenSource license. See http://bioinfo.unc.edu/downloads/ for the download link.
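    A generic Viterbi decoder, shown as a sketch rather than HMM_Score itself: it finds the most probable state path for an observation sequence given start, transition, and emission probabilities, analogous to assigning ion types (states) to spectrum peaks (observations). All matrices below are toy values.

    ```python
    import numpy as np

    def viterbi(obs, log_start, log_trans, log_emit):
        """obs: observation indices; probabilities are given in log space. Returns the best state path."""
        n_states = log_start.shape[0]
        T = len(obs)
        dp = np.full((T, n_states), -np.inf)
        ptr = np.zeros((T, n_states), dtype=int)
        dp[0] = log_start + log_emit[:, obs[0]]
        for t in range(1, T):
            scores = dp[t - 1][:, None] + log_trans      # score of (previous state, next state)
            ptr[t] = scores.argmax(axis=0)               # best previous state for each next state
            dp[t] = scores.max(axis=0) + log_emit[:, obs[t]]
        path = [int(dp[-1].argmax())]
        for t in range(T - 1, 0, -1):                    # backtrack through the pointers
            path.append(int(ptr[t, path[-1]]))
        return path[::-1]

    # toy usage: two states, two observation symbols
    logp = np.log
    print(viterbi(obs=[0, 1, 1],
                  log_start=logp(np.array([0.6, 0.4])),
                  log_trans=logp(np.array([[0.7, 0.3], [0.4, 0.6]])),
                  log_emit=logp(np.array([[0.9, 0.1], [0.2, 0.8]]))))
    ```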

  9. PDF approach for compressible turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Hsu, A. T.; Tsai, Y.-L. P.; Raju, M. S.

    1993-01-01

    The objective of the present work is to develop a probability density function (pdf) turbulence model for compressible reacting flows for use with a CFD flow solver. The probability density function of the species mass fraction and enthalpy are obtained by solving a pdf evolution equation using a Monte Carlo scheme. The pdf solution procedure is coupled with a compressible CFD flow solver which provides the velocity and pressure fields. A modeled pdf equation for compressible flows, capable of capturing shock waves and suitable to the present coupling scheme, is proposed and tested. Convergence of the combined finite-volume Monte Carlo solution procedure is discussed, and an averaging procedure is developed to provide smooth Monte-Carlo solutions to ensure convergence. Two supersonic diffusion flames are studied using the proposed pdf model and the results are compared with experimental data; marked improvements over CFD solutions without pdf are observed. Preliminary applications of pdf to 3D flows are also reported.

  10. An improved probabilistic approach for linking progenitor and descendant galaxy populations using comoving number density

    NASA Astrophysics Data System (ADS)

    Wellons, Sarah; Torrey, Paul

    2017-06-01

    Galaxy populations at different cosmic epochs are often linked by cumulative comoving number density in observational studies. Many theoretical works, however, have shown that the cumulative number densities of tracked galaxy populations not only evolve in bulk, but also spread out over time. We present a method for linking progenitor and descendant galaxy populations which takes both of these effects into account. We define probability distribution functions that capture the evolution and dispersion of galaxy populations in number density space, and use these functions to assign galaxies at redshift z_f probabilities of being progenitors/descendants of a galaxy population at another redshift z_0. These probabilities are used as weights for calculating distributions of physical progenitor/descendant properties such as stellar mass, star formation rate or velocity dispersion. We demonstrate that this probabilistic method provides more accurate predictions for the evolution of physical properties than the assumption of either a constant number density or an evolving number density in a bin of fixed width by comparing predictions against galaxy populations directly tracked through a cosmological simulation. We find that the constant number density method performs least well at recovering galaxy properties, the evolving number density method slightly better, and the probabilistic method best of all. The improvement is present for predictions of stellar mass as well as inferred quantities such as star formation rate and velocity dispersion. We demonstrate that this method can also be applied robustly and easily to observational data, and provide a code package for doing so.
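    A sketch of how such probability weights can be used once they are in hand (the data shapes and weights below are invented, and this is not the released code package): property distributions of probable progenitors are simply weighted histograms or weighted percentiles.

    ```python
    import numpy as np

    def weighted_percentiles(values, weights, q=(16, 50, 84)):
        """Percentiles of `values` weighted by progenitor/descendant probabilities."""
        order = np.argsort(values)
        v, w = np.asarray(values, float)[order], np.asarray(weights, float)[order]
        cdf = np.cumsum(w) / w.sum()
        return np.interp(np.asarray(q) / 100.0, cdf, v)

    rng = np.random.default_rng(2)
    log_mstar = rng.normal(10.5, 0.4, 1000)          # candidate progenitor stellar masses (toy values)
    weights = rng.random(1000)                       # progenitor probabilities from the number-density PDFs
    print(weighted_percentiles(log_mstar, weights))  # e.g. 16th/50th/84th percentile predictions
    ```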

  11. The star-forming history of the young cluster NGC 2264

    NASA Technical Reports Server (NTRS)

    Adams, M. T.; Strom, K. M.; Strom, S. E.

    1983-01-01

    UBVRI H-alpha photographic photometry was obtained for a sample of low-mass stars in the young open cluster NGC 2264 in order to investigate the star-forming history of this region. A theoretical H-R diagram was constructed for the sample of probable cluster members. Isochrones and evolutionary tracks were adopted from Cohen and Kuhi (1979). Evidence for a significant age spread in the cluster was found amounting to over ten million yr. In addition, the derived star formation rate as a function of stellar mass suggests that the principal star-forming mass range in NGC 2264 has proceeded sequentially in time from the lowest to the highest masses. The low-mass cluster stars were the first cluster members to form in significant numbers, although their present birth rate is much lower now than it was about ten million yr ago. The star-formation rate has risen to a peak at successively higher masses and then declined.

  12. Stacking dependence of carrier transport properties in multilayered black phosphorous

    NASA Astrophysics Data System (ADS)

    Sengupta, A.; Audiffred, M.; Heine, T.; Niehaus, T. A.

    2016-02-01

    We present the effect of different stacking orders on carrier transport properties of multi-layer black phosphorous. We consider three different stacking orders, AAA, ABA and ACA, with increasing number of layers (from 2 to 6 layers). We employ a hierarchical approach in density functional theory (DFT), with structural simulations performed with the generalized gradient approximation (GGA) and the bandstructure, carrier effective masses and optical properties evaluated with the meta-generalized gradient approximation (MGGA). The carrier transmission calculations for the various black phosphorous sheets were carried out with the non-equilibrium Green's function (NEGF) approach. The results show that ACA stacking has the highest electron and hole transmission probabilities. They also show tunability over a wide range of band gaps, carrier effective masses and transmission, with great promise for lattice engineering (stacking order and layers) in black phosphorous.

  13. Primordial black holes and uncertainties in the choice of the window function

    NASA Astrophysics Data System (ADS)

    Ando, Kenta; Inomata, Keisuke; Kawasaki, Masahiro

    2018-05-01

    Primordial black holes (PBHs) can be produced by the perturbations that exit the horizon during the inflationary phase. While inflation models predict the power spectrum of the perturbations in Fourier space, the PBH abundance depends on the probability distribution function of density perturbations in real space. To estimate the PBH abundance in a given inflation model, we must relate the power spectrum in Fourier space to the probability density function in real space by coarse graining the perturbations with a window function. However, there are uncertainties on what window function should be used, which could change the relation between the PBH abundance and the power spectrum. This is particularly important in considering PBHs with mass 30 M⊙, which account for the LIGO events because the required power spectrum is severely constrained by the observations. In this paper, we investigate how large an influence the uncertainties on the choice of a window function has over the power spectrum required for LIGO PBHs. As a result, it is found that the uncertainties significantly affect the prediction for the stochastic gravitational waves induced by the second-order effect of the perturbations. In particular, the pulsar timing array constraints on the produced gravitational waves could disappear for the real-space top-hat window function.
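    A sketch of the coarse-graining step described above: the real-space variance is sigma^2(R) = integral over dln k of P(k) W(kR)^2, so the choice of window function W directly changes the inferred variance and hence the PBH abundance. The "power spectrum" below is a toy lognormal bump, not a specific inflation model, and the comparison is only between a real-space top-hat and a Gaussian window.

    ```python
    import numpy as np
    from scipy.integrate import trapezoid

    def w_tophat(x):
        """Fourier transform of a real-space top-hat window."""
        return 3.0 * (np.sin(x) - x * np.cos(x)) / x**3

    def w_gauss(x):
        """Gaussian window."""
        return np.exp(-x**2 / 2.0)

    def curly_p(k, k_peak=1.0e6, amp=1.0e-2, width=0.5):
        """Toy dimensionless power spectrum with a lognormal bump (placeholder model)."""
        return amp * np.exp(-0.5 * (np.log(k / k_peak) / width) ** 2)

    def sigma2(R, window):
        lnk = np.linspace(np.log(1e2), np.log(1e10), 4000)
        k = np.exp(lnk)
        return trapezoid(curly_p(k) * window(k * R) ** 2, lnk)

    R = 1.0 / 1.0e6   # smoothing scale ~ 1/k_peak
    print(sigma2(R, w_tophat), sigma2(R, w_gauss))   # the window choice changes sigma^2(R)
    ```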

  14. A general formula for computing maximum proportion correct scores in various psychophysical paradigms with arbitrary probability distributions of stimulus observations.

    PubMed

    Dai, Huanping; Micheyl, Christophe

    2015-05-01

    Proportion correct (Pc) is a fundamental measure of task performance in psychophysics. The maximum Pc score that can be achieved by an optimal (maximum-likelihood) observer in a given task is of both theoretical and practical importance, because it sets an upper limit on human performance. Within the framework of signal detection theory, analytical solutions for computing the maximum Pc score have been established for several common experimental paradigms under the assumption of Gaussian additive internal noise. However, as the scope of applications of psychophysical signal detection theory expands, the need is growing for psychophysicists to compute maximum Pc scores for situations involving non-Gaussian (internal or stimulus-induced) noise. In this article, we provide a general formula for computing the maximum Pc in various psychophysical experimental paradigms for arbitrary probability distributions of sensory activity. Moreover, easy-to-use MATLAB code implementing the formula is provided. Practical applications of the formula are illustrated, and its accuracy is evaluated, for two paradigms and two types of probability distributions (uniform and Gaussian). The results demonstrate that Pc scores computed using the formula remain accurate even for continuous probability distributions, as long as the conversion from continuous probability density functions to discrete probability mass functions is supported by a sufficiently high sampling resolution. We hope that the exposition in this article, and the freely available MATLAB code, facilitates calculations of maximum performance for a wider range of experimental situations, as well as explorations of the impact of different assumptions concerning internal-noise distributions on maximum performance in psychophysical experiments.
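    A Monte Carlo check of the quantity discussed above, not the paper's analytic formula or its MATLAB code: for a two-interval forced-choice task, the maximum proportion correct of the maximum-likelihood observer is the probability that the likelihood ratio in the signal interval exceeds that in the noise interval. The uniform/Gaussian distributions below are arbitrary examples.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    signal = stats.uniform(loc=0.5, scale=1.5)   # example non-Gaussian "signal" distribution
    noise = stats.norm(loc=0.0, scale=1.0)       # example "noise" distribution

    n = 200_000
    xs = signal.rvs(n, random_state=rng)         # observations from the signal interval
    xn = noise.rvs(n, random_state=rng)          # observations from the noise interval
    lr_s = signal.pdf(xs) / noise.pdf(xs)        # likelihood ratios in each interval
    lr_n = signal.pdf(xn) / noise.pdf(xn)

    pc_max = np.mean(lr_s > lr_n) + 0.5 * np.mean(lr_s == lr_n)   # ties broken at random
    print(pc_max)
    ```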

  15. BANYAN. V. A SYSTEMATIC ALL-SKY SURVEY FOR NEW VERY LATE-TYPE LOW-MASS STARS AND BROWN DWARFS IN NEARBY YOUNG MOVING GROUPS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gagné, Jonathan; Lafrenière, David; Doyon, René

    2015-01-10

    We present the BANYAN All-Sky Survey (BASS) catalog, consisting of 228 new late-type (M4-L6) candidate members of nearby young moving groups (YMGs) with an expected false-positive rate of ∼13%. This sample includes 79 new candidate young brown dwarfs and 22 planetary-mass objects. These candidates were identified through the first systematic all-sky survey for late-type low-mass stars and brown dwarfs in YMGs. We cross-matched the Two Micron All Sky Survey and AllWISE catalogs outside of the galactic plane to build a sample of 98,970 potential ≥M5 dwarfs in the solar neighborhood and calculated their proper motions with typical precisions of 5-15 mas yr^-1. We selected highly probable candidate members of several YMGs from this sample using the Bayesian Analysis for Nearby Young AssociatioNs II tool (BANYAN II). We used the most probable statistical distances inferred from BANYAN II to estimate the spectral type and mass of these candidate YMG members. We used this unique sample to show tentative signs of mass segregation in the AB Doradus moving group and the Tucana-Horologium and Columba associations. The BASS sample has already been successful in identifying several new young brown dwarfs in earlier publications, and will be of great interest in studying the initial mass function of YMGs and for the search of exoplanets by direct imaging; the input sample of potential close-by ≥M5 dwarfs will be useful to study the kinematics of low-mass stars and brown dwarfs and search for new proper motion pairs.

  16. X_max^μ vs. N^μ from extensive air showers as an estimator for the mass of primary UHECRs. Application for the Pierre Auger Observatory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arsene, Nicusor; Sima, Octavian

    2015-02-24

    We study the possibility of primary mass estimation for Ultra High Energy Cosmic Rays (UHECRs) using X_max^μ (the height at which the number of muons produced in the core of an Extensive Air Shower (EAS) is maximum) and the number N^μ of muons detected on the ground. We use the 2D distribution of X_max^μ against N^μ in order to find its sensitivity to the mass of the primary particle. For that, we construct a 2D probability function Prob(p, Fe | X_max^μ, N^μ) which estimates the probability that a certain point in the (X_max^μ, N^μ) plane corresponds to a shower induced by a proton or by an iron nucleus, respectively. To test the procedure, we analyze a set of EAS induced by protons and iron nuclei, simulated with CORSIKA at an energy of 10^19 eV and 20° zenith angle. Using the Bayesian approach and taking into account the geometry of the infill detectors of the Pierre Auger Observatory, we observe an improvement in the accuracy of the primary mass reconstruction in comparison with the results obtained using only the X_max^μ distributions.
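    A sketch of the Bayesian step described above, with toy Gaussian stand-ins for the simulated proton and iron shower libraries: 2D histograms of (X_max^μ, N^μ) for each primary are converted into Prob(p | X_max^μ, N^μ) assuming equal priors.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    # toy "shower libraries": protons with deeper X_max^mu and fewer muons, iron the opposite
    xp, np_mu = rng.normal(550, 40, 20000), rng.normal(1.0e7, 1.5e6, 20000)
    xf, nf_mu = rng.normal(500, 30, 20000), rng.normal(1.3e7, 1.5e6, 20000)

    bins = (np.linspace(350, 700, 36), np.linspace(0.5e7, 2.0e7, 31))
    hp, _, _ = np.histogram2d(xp, np_mu, bins=bins, density=True)
    hf, _, _ = np.histogram2d(xf, nf_mu, bins=bins, density=True)

    # Prob(p | Xmax_mu, N_mu) with equal priors; undefined bins are left as NaN
    prob_p = np.divide(hp, hp + hf, out=np.full_like(hp, np.nan), where=(hp + hf) > 0)

    def classify(xmax_mu, n_mu):
        i = np.searchsorted(bins[0], xmax_mu) - 1
        j = np.searchsorted(bins[1], n_mu) - 1
        return prob_p[i, j]        # probability the event is proton-induced

    print(classify(560, 0.95e7))
    ```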

  17. Identification of Thyroid Receptor Ant/Agonists in Water Sources Using Mass Balance Analysis and Monte Carlo Simulation

    PubMed Central

    Shi, Wei; Wei, Si; Hu, Xin-xin; Hu, Guan-jiu; Chen, Cu-lan; Wang, Xin-ru; Giesy, John P.; Yu, Hong-xia

    2013-01-01

    Some synthetic chemicals, which have been shown to disrupt thyroid hormone (TH) function, have been detected in surface waters and people have the potential to be exposed through water-drinking. Here, the presence of thyroid-active chemicals and their toxic potential in drinking water sources in Yangtze River Delta were investigated by use of instrumental analysis combined with cell-based reporter gene assay. A novel approach was developed to use Monte Carlo simulation, for evaluation of the potential risks of measured concentrations of TH agonists and antagonists and to determine the major contributors to observed thyroid receptor (TR) antagonist potency. None of the extracts exhibited TR agonist potency, while 12 of 14 water samples exhibited TR antagonistic potency. The most probable observed antagonist equivalents ranged from 1.4 to 5.6 µg di-n-butyl phthalate (DNBP)/L, which posed potential risk in water sources. Based on Monte Carlo simulation related mass balance analysis, DNBP accounted for 64.4% for the entire observed antagonist toxic unit in water sources, while diisobutyl phthalate (DIBP), di-n-octyl phthalate (DNOP) and di-2-ethylhexyl phthalate (DEHP) also contributed. The most probable observed equivalent and most probable relative potency (REP) derived from Monte Carlo simulation is useful for potency comparison and responsible chemicals screening. PMID:24204563
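    A sketch of a Monte Carlo mass-balance calculation of the kind described: chemical concentrations are sampled from distributions, converted to toxic units via relative potencies, and the fractional contribution of each chemical to the summed antagonist potency is evaluated. All concentration ranges and relative potencies below are placeholders, not the study's measurements.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n = 50_000
    # chemical: (lognormal mu of concentration in ug/L, lognormal sigma, relative potency) -- all hypothetical
    chems = {
        "DNBP": (np.log(2.0), 0.5, 1.0),
        "DIBP": (np.log(0.8), 0.5, 0.4),
        "DEHP": (np.log(1.5), 0.5, 0.2),
    }
    tu = {k: rng.lognormal(mu, s, n) * rp for k, (mu, s, rp) in chems.items()}
    total = sum(tu.values())
    share = {k: float(np.mean(v / total)) for k, v in tu.items()}   # mean fractional contribution
    print(share)   # identifies which chemical dominates the simulated antagonist potency
    ```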

  18. Domestic wells have high probability of pumping septic tank leachate

    NASA Astrophysics Data System (ADS)

    Bremer, J. E.; Harter, T.

    2012-08-01

    Onsite wastewater treatment systems are common in rural and semi-rural areas around the world; in the US, about 25-30% of households are served by a septic (onsite) wastewater treatment system, and many property owners also operate their own domestic well nearby. Site-specific conditions and local groundwater flow are often ignored when installing septic systems and wells. In areas with small lots (thus high spatial septic system densities), shallow domestic wells are prone to contamination by septic system leachate. Mass balance approaches have been used to determine a maximum septic system density that would prevent contamination of groundwater resources. In this study, a source area model based on detailed groundwater flow and transport modeling is applied for a stochastic analysis of domestic well contamination by septic leachate. Specifically, we determine the probability that a source area overlaps with a septic system drainfield as a function of aquifer properties, septic system density and drainfield size. We show that high spatial septic system density poses a high probability of pumping septic system leachate. The hydraulic conductivity of the aquifer has a strong influence on the intersection probability. We find that mass balance calculations applied on a regional scale underestimate the contamination risk of individual drinking water wells by septic systems. This is particularly relevant for contaminants released at high concentrations, for substances that experience limited attenuation, and those that are harmful even at low concentrations (e.g., pathogens).

  19. Star Cluster Properties in Two LEGUS Galaxies Computed with Stochastic Stellar Population Synthesis Models

    NASA Astrophysics Data System (ADS)

    Krumholz, Mark R.; Adamo, Angela; Fumagalli, Michele; Wofford, Aida; Calzetti, Daniela; Lee, Janice C.; Whitmore, Bradley C.; Bright, Stacey N.; Grasha, Kathryn; Gouliermis, Dimitrios A.; Kim, Hwihyun; Nair, Preethi; Ryon, Jenna E.; Smith, Linda J.; Thilker, David; Ubeda, Leonardo; Zackrisson, Erik

    2015-10-01

    We investigate a novel Bayesian analysis method, based on the Stochastically Lighting Up Galaxies (slug) code, to derive the masses, ages, and extinctions of star clusters from integrated light photometry. Unlike many analysis methods, slug correctly accounts for incomplete initial mass function (IMF) sampling, and returns full posterior probability distributions rather than simply probability maxima. We apply our technique to 621 visually confirmed clusters in two nearby galaxies, NGC 628 and NGC 7793, that are part of the Legacy Extragalactic UV Survey (LEGUS). LEGUS provides Hubble Space Telescope photometry in the NUV, U, B, V, and I bands. We analyze the sensitivity of the derived cluster properties to choices of prior probability distribution, evolutionary tracks, IMF, metallicity, treatment of nebular emission, and extinction curve. We find that slug's results for individual clusters are insensitive to most of these choices, but that the posterior probability distributions we derive are often quite broad, and sometimes multi-peaked and quite sensitive to the choice of priors. In contrast, the properties of the cluster population as a whole are relatively robust against all of these choices. We also compare our results from slug to those derived with a conventional non-stochastic fitting code, Yggdrasil. We show that slug's stochastic models are generally a better fit to the observations than the deterministic ones used by Yggdrasil. However, the overall properties of the cluster populations recovered by both codes are qualitatively similar.

  20. Halohydrination of epoxy resins using sodium halides as cationizing agents in MALDI-MS and DIOS-MS.

    PubMed

    Watanabe, Takehiro; Kawasaki, Hideya; Kimoto, Takashi; Arakawa, Ryuichi

    2008-12-01

    Halohydrination of epoxy resins using sodium halides as cationizing agents in matrix-assisted laser desorption/ionization (MALDI) and desorption ionization on porous silicon mass spectrometry (DIOS-MS) were investigated. Different mass spectra were observed when NaClO(4) and NaI were used as the cationizing agents at the highest concentration of 10.0 mM, which is much higher than that normally used in MALDI-MS. MALDI mass spectra of epoxy resins using NaI revealed iodohydrination to occur as epoxy functions of the polymers. The halohydrination also occurred using NaBr, but not NaCl, due to the differences in their nucleophilicities. On the basis of the results of experiments using deuterated CD(3)OD as the solvent, the hydrogen atom source was probably ambient water or residual solvent, rather than being derived from matrices. Halohydrination also occurred with DIOS-MS in which no organic matrix was used; in addition, reduction of epoxy functions was observed with DIOS. NaI is a useful cationizing agent for changing the chemical form of epoxy resins due to iodohydrination and, thus, for identifying the presence of epoxy functions. Copyright (c) 2008 John Wiley & Sons, Ltd.

  1. THE PANCHROMATIC HUBBLE ANDROMEDA TREASURY. IV. A PROBABILISTIC APPROACH TO INFERRING THE HIGH-MASS STELLAR INITIAL MASS FUNCTION AND OTHER POWER-LAW FUNCTIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weisz, Daniel R.; Fouesneau, Morgan; Dalcanton, Julianne J.

    2013-01-10

    We present a probabilistic approach for inferring the parameters of the present-day power-law stellar mass function (MF) of a resolved young star cluster. This technique (1) fully exploits the information content of a given data set; (2) can account for observational uncertainties in a straightforward way; (3) assigns meaningful uncertainties to the inferred parameters; (4) avoids the pitfalls associated with binning data; and (5) can be applied to virtually any resolved young cluster, laying the groundwork for a systematic study of the high-mass stellar MF (M ≳ 1 M_sun). Using simulated clusters and Markov Chain Monte Carlo sampling of the probability distribution functions, we show that estimates of the MF slope, α, are unbiased and that the uncertainty, Δα, depends primarily on the number of observed stars and on the range of stellar masses they span, assuming that the uncertainties on individual masses and the completeness are both well characterized. Using idealized mock data, we compute the theoretical precision, i.e., lower limits, on α, and provide an analytic approximation for Δα as a function of the observed number of stars and mass range. Comparison with literature studies shows that ~3/4 of quoted uncertainties are smaller than the theoretical lower limit. By correcting these uncertainties to the theoretical lower limits, we find that the literature studies yield ⟨α⟩ = 2.46, with a 1σ dispersion of 0.35 dex. We verify that it is impossible for a power-law MF to obtain meaningful constraints on the upper mass limit of the initial mass function, beyond the lower bound of the most massive star actually observed. We show that avoiding substantial biases in the MF slope requires (1) including the MF as a prior when deriving individual stellar mass estimates, (2) modeling the uncertainties in the individual stellar masses, and (3) fully characterizing and then explicitly modeling the completeness for stars of a given mass. The precisions on MF slope recovery quoted in this paper are lower limits, as we do not explicitly consider all possible sources of uncertainty, including dynamical effects (e.g., mass segregation), unresolved binaries, and non-coeval populations. We briefly discuss how each of these effects can be incorporated into extensions of the present framework. Finally, we emphasize that the technique and lessons learned are applicable to more general problems involving power-law fitting.
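    For orientation, a sketch of the standard unbinned maximum-likelihood estimator for a power-law slope, with the rough uncertainty scaling (alpha - 1)/sqrt(N) valid when the upper mass limit is far above m_min. This is the simple textbook estimator, not the paper's full treatment, which also models mass uncertainties and completeness.

    ```python
    import numpy as np

    def powerlaw_mle(masses, m_min):
        """MLE of alpha for dN/dM ~ M^-alpha above m_min, plus a rough uncertainty."""
        m = np.asarray(masses, dtype=float)
        m = m[m >= m_min]
        alpha = 1.0 + m.size / np.sum(np.log(m / m_min))
        err = (alpha - 1.0) / np.sqrt(m.size)
        return alpha, err

    # draw a mock sample from a Salpeter-like slope (alpha = 2.35) by inverse-transform sampling
    rng = np.random.default_rng(6)
    m_min, alpha_true = 1.0, 2.35
    u = rng.random(300)
    sample = m_min * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))
    print(powerlaw_mle(sample, m_min))
    ```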

  2. Brick tunnel randomization and the momentum of the probability mass.

    PubMed

    Kuznetsova, Olga M

    2015-12-30

    The allocation space of an unequal-allocation permuted block randomization can be quite wide. The development of unequal-allocation procedures with a narrower allocation space, however, is complicated by the need to preserve the unconditional allocation ratio at every step (the allocation ratio preserving (ARP) property). When the allocation paths are depicted on the K-dimensional unitary grid, where allocation to the l-th treatment is represented by a step along the l-th axis, l = 1 to K, the ARP property can be expressed in terms of the center of the probability mass after i allocations. Specifically, for an ARP allocation procedure that randomizes subjects to K treatment groups in w1 :⋯:wK ratio, w1 +⋯+wK =1, the coordinates of the center of the mass are (w1 i,…,wK i). In this paper, the momentum with respect to the center of the probability mass (expected imbalance in treatment assignments) is used to compare ARP procedures in how closely they approximate the target allocation ratio. It is shown that the two-arm and three-arm brick tunnel randomizations (BTR) are the ARP allocation procedures with the tightest allocation space among all allocation procedures with the same allocation ratio; the two-arm BTR is the minimum-momentum two-arm ARP allocation procedure. Resident probabilities of two-arm and three-arm BTR are analytically derived from the coordinates of the center of the probability mass; the existence of the respective transition probabilities is proven. Probability of deterministic assignments with BTR is found generally acceptable. Copyright © 2015 John Wiley & Sons, Ltd.
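    A sketch of the imbalance measure described above, computed for a generic unequal-allocation permuted block randomization rather than the brick tunnel procedure itself: allocation paths are simulated and the expected distance of the running treatment counts from the center of the probability mass (w1*i, ..., wK*i) is estimated.

    ```python
    import numpy as np

    def expected_imbalance(weights, block, n_subjects, n_sims=5000, seed=7):
        """Expected Euclidean distance from the probability-mass center after each allocation."""
        rng = np.random.default_rng(seed)
        w = np.asarray(weights, dtype=float)
        per_block = np.rint(w * block).astype(int)           # e.g. 1:2 allocation in blocks of 3
        dist = np.zeros(n_subjects)
        for _ in range(n_sims):
            seq = []
            while len(seq) < n_subjects:
                blk = np.repeat(np.arange(len(w)), per_block)
                rng.shuffle(blk)                             # one permuted block
                seq.extend(blk.tolist())
            seq = np.array(seq[:n_subjects])
            counts = np.cumsum(np.eye(len(w))[seq], axis=0)  # running counts per arm
            center = np.outer(np.arange(1, n_subjects + 1), w)
            dist += np.linalg.norm(counts - center, axis=1)
        return dist / n_sims

    print(expected_imbalance([1/3, 2/3], block=3, n_subjects=12)[-1])
    ```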

  3. The Initial Mass Function of the Arches Cluster

    NASA Astrophysics Data System (ADS)

    Hosek, Matthew; Lu, Jessica; Anderson, Jay; Ghez, Andrea; Morris, Mark; Do, Tuan; Clarkson, William; Albers, Saundra; Weisz, Daniel

    2018-01-01

    The Arches star cluster is only 26 pc (in projection) from Sgr A*, the supermassive black hole at the Galactic Center. This young massive cluster allows us to examine the impact of the extreme Galactic Center environment on the stellar Initial Mass Function (IMF). However, measuring the IMF of the Arches is challenging due to the highly variable extinction along the line of sight, which makes it difficult to separate cluster members from the field stars. We use high-precision proper motion and photometric measurements obtained with the Hubble Space Telescope to calculate cluster membership probabilities for stars down to ~2 M_sun out to the outskirts of the cluster (3 pc). In addition, we measure the effective temperatures of a small sample of cluster members in order to calibrate the mass-luminosity relationship using Keck OSIRIS K-band spectroscopy. We forward model these observations to simultaneously constrain the cluster IMF, age, distance, and extinction. We obtain an IMF that is shallower than what is observed locally, with a higher ratio of high-mass to low-mass stars (i.e., "top-heavy"). We will compare the IMF of the Arches to similar clusters in the Galactic disk and quantify the effect of the GC environment on the star formation process.

  4. Towards an initial mass function for giant planets

    NASA Astrophysics Data System (ADS)

    Carrera, Daniel; Davies, Melvyn B.; Johansen, Anders

    2018-07-01

    The distribution of exoplanet masses is not primordial. After the initial stage of planet formation, gravitational interactions between planets can lead to the physical collision of two planets, or the ejection of one or more planets from the system. When this occurs, the remaining planets are typically left in more eccentric orbits. In this report we demonstrate how the present-day eccentricities of the observed exoplanet population can be used to reconstruct the initial mass function of exoplanets before the onset of dynamical instability. We developed a Bayesian framework that combines data from N-body simulations with present-day observations to compute a probability distribution for the mass of the planets that were ejected or collided in the past. Integrating across the exoplanet population, one can estimate the initial mass function of exoplanets. We find that the ejected planets are primarily sub-Saturn-type planets. While the present-day distribution appears to be bimodal, with peaks around ˜1MJ and ˜20M⊕, this bimodality does not seem to be primordial. Instead, planets around ˜60M⊕ appear to be preferentially removed by dynamical instabilities. Attempts to reproduce exoplanet populations using population synthesis codes should be mindful of the fact that the present population may have been depleted of sub-Saturn-mass planets. Future observations may reveal that young giant planets have a more continuous size distribution with lower eccentricities and more sub-Saturn-type planets. Lastly, there is a need for additional data and for more research on how the system architecture and multiplicity might alter our results.

  5. Toward an initial mass function for giant planets

    NASA Astrophysics Data System (ADS)

    Carrera, Daniel; Davies, Melvyn B.; Johansen, Anders

    2018-05-01

    The distribution of exoplanet masses is not primordial. After the initial stage of planet formation, gravitational interactions between planets can lead to the physical collision of two planets, or the ejection of one or more planets from the system. When this occurs, the remaining planets are typically left in more eccentric orbits. In this report we demonstrate how the present-day eccentricities of the observed exoplanet population can be used to reconstruct the initial mass function of exoplanets before the onset of dynamical instability. We developed a Bayesian framework that combines data from N-body simulations with present-day observations to compute a probability distribution for the mass of the planets that were ejected or collided in the past. Integrating across the exoplanet population, one can estimate the initial mass function of exoplanets. We find that the ejected planets are primarily sub-Saturn type planets. While the present-day distribution appears to be bimodal, with peaks around ˜1MJ and ˜20M⊕, this bimodality does not seem to be primordial. Instead, planets around ˜60M⊕ appear to be preferentially removed by dynamical instabilities. Attempts to reproduce exoplanet populations using population synthesis codes should be mindful of the fact that the present population may have been depleted of sub-Saturn-mass planets. Future observations may reveal that young giant planets have a more continuous size distribution with lower eccentricities and more sub-Saturn type planets. Lastly, there is a need for additional data and for more research on how the system architecture and multiplicity might alter our results.

  6. Theoretical analysis of the influence of aerosol size distribution and physical activity on particle deposition pattern in human lungs.

    PubMed

    Voutilainen, Arto; Kaipio, Jari P; Pekkanen, Juha; Timonen, Kirsi L; Ruuskanen, Juhani

    2004-01-01

    A theoretical comparison of modeled particle depositions in the human respiratory tract was performed by taking into account different particle number and mass size distributions and physical activity in an urban environment. Urban-air data on particulate concentrations in the size range 10 nm-10 microm were used to estimate the hourly average particle number and mass size distribution functions. The functions were then combined with the deposition probability functions obtained from a computerized ICRP 66 deposition model of the International Commission on Radiological Protection to calculate the numbers and masses of particles deposited in five regions of the respiratory tract of a male adult. The man's physical activity and minute ventilation during the day were taken into account in the calculations. Two different mass and number size distributions of aerosol particles with equal (computed) <10 microm particle mass concentrations gave clearly different deposition patterns in the central and peripheral regions of the human respiratory tract. The deposited particle numbers and masses were much higher during the day (0700-1900) than during the night (1900-0700) because an increase in physical activity and ventilation were temporally associated with highly increased traffic-derived particles in urban outdoor air. In future analyses of the short-term associations between particulate air pollution and health, it would not only be important to take into account the outdoor-to-indoor penetration of different particle sizes and human time-activity patterns, but also actual lung deposition patterns and physical activity in significant microenvironments.

  7. Psychophysics of the probability weighting function

    NASA Astrophysics Data System (ADS)

    Takahashi, Taiki

    2011-03-01

    A probability weighting function w(p) for an objective probability p in decision under risk plays a pivotal role in Kahneman-Tversky prospect theory. Although recent studies in econophysics and neuroeconomics have widely utilized probability weighting functions, the psychophysical foundations of the probability weighting functions have been unknown. Notably, the behavioral economist Prelec (1998) [4] axiomatically derived the probability weighting function w(p) = exp(-(-ln p)^α) (0 < α < 1; w(0) = 0, w(1/e) = 1/e, w(1) = 1), which has been studied extensively in behavioral neuroeconomics. The present study utilizes psychophysical theory to derive Prelec's probability weighting function from psychophysical laws of perceived waiting time in probabilistic choices. Also, the relations between the parameters in the probability weighting function and the probability discounting function in behavioral psychology are derived. Future directions in the application of the psychophysical theory of the probability weighting function in econophysics and neuroeconomics are discussed.
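    A sketch of the Prelec one-parameter weighting function referenced above, w(p) = exp(-(-ln p)^alpha), which satisfies w(0) = 0, w(1/e) = 1/e, and w(1) = 1 for 0 < alpha < 1. The value alpha = 0.65 is just an illustrative choice.

    ```python
    import numpy as np

    def prelec_w(p, alpha=0.65):
        """Prelec probability weighting function, defined as 0 at p = 0."""
        p = np.asarray(p, dtype=float)
        out = np.zeros_like(p)
        pos = p > 0
        out[pos] = np.exp(-(-np.log(p[pos])) ** alpha)
        return out

    p = np.array([0.0, 0.01, 1 / np.e, 0.5, 0.9, 1.0])
    print(prelec_w(p))   # note the fixed point at p = 1/e
    ```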

  8. A critical analysis of high-redshift, massive, galaxy clusters. Part I

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoyle, Ben; Jimenez, Raul; Verde, Licia

    2012-02-01

    We critically investigate current statistical tests applied to high redshift clusters of galaxies in order to test the standard cosmological model and describe their range of validity. We carefully compare a sample of high-redshift, massive, galaxy clusters with realistic Poisson sample simulations of the theoretical mass function, which include the effect of Eddington bias. We compare the observations and simulations using the following statistical tests: the distributions of ensemble and individual existence probabilities (in the > M, > z sense), the redshift distributions, and the 2d Kolmogorov-Smirnov test. Using seemingly rare clusters from Hoyle et al. (2011) and Jee et al. (2011), and assuming the same survey geometry as in Jee et al. (2011) (which is less conservative than Hoyle et al. 2011), we find that the (> M, > z) existence probabilities of all clusters are fully consistent with ΛCDM. However, assuming the same survey geometry, we use the 2d K-S test probability to show that the observed clusters are not consistent with being the least probable clusters from simulations at > 95% confidence, and are also not consistent with being a random selection of clusters, which may be caused by the non-trivial selection function and survey geometry. Tension can be removed if we examine only an X-ray selected subsample, with simulations performed assuming a modified survey geometry.

  9. Double-observer approach to estimating egg mass abundance of vernal pool breeding amphibians

    USGS Publications Warehouse

    Grant, E.H.C.; Jung, R.E.; Nichols, J.D.; Hines, J.E.

    2005-01-01

    Interest in seasonally flooded pools, and the status of associated amphibian populations, has initiated programs in the northeastern United States to document and monitor these habitats. Counting egg masses is an effective way to determine the population size of pool-breeding amphibians, such as wood frogs (Rana sylvatica) and spotted salamanders (Ambystoma maculatum). However, bias is associated with counts if egg masses are missed. Counts unadjusted for the proportion missed (i.e., without adjustment for detection probability) could lead to false assessments of population trends. We used a dependent double-observer method in 2002-2003 to estimate numbers of wood frog and spotted salamander egg masses at seasonal forest pools in 13 National Wildlife Refuges, 1 National Park, 1 National Seashore, and 1 State Park in the northeastern United States. We calculated detection probabilities for egg masses and examined whether detection probabilities varied by species, observers, pools, and in relation to pool characteristics (pool area, pool maximum depth, within-pool vegetation). For the 2 years, model selection indicated that no consistent set of variables explained the variation in data sets from individual Refuges and Parks. Because our results indicated that egg mass detection probabilities vary spatially and temporally, we conclude that it is essential to use estimation procedures, such as double-observer methods with egg mass surveys, to determine population sizes and trends of these species.

  10. DETERMINING TYPE Ia SUPERNOVA HOST GALAXY EXTINCTION PROBABILITIES AND A STATISTICAL APPROACH TO ESTIMATING THE ABSORPTION-TO-REDDENING RATIO R_V

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cikota, Aleksandar; Deustua, Susana; Marleau, Francine, E-mail: acikota@eso.org

    We investigate limits on the extinction values of Type Ia supernovae (SNe Ia) to statistically determine the most probable color excess, E(B – V), with galactocentric distance, and use these statistics to determine the absorption-to-reddening ratio, R_V, for dust in the host galaxies. We determined pixel-based dust mass surface density maps for 59 galaxies from the Key Insight on Nearby Galaxies: a Far-infrared Survey with Herschel (KINGFISH). We use SN Ia spectral templates to develop a Monte Carlo simulation of color excess E(B – V) with R_V = 3.1 and investigate the color excess probabilities E(B – V) with projected radial galaxy center distance. Additionally, we tested our model using observed spectra of SN 1989B, SN 2002bo, and SN 2006X, which occurred in three KINGFISH galaxies. Finally, we determined the most probable reddening for Sa–Sap, Sab–Sbp, Sbc–Scp, Scd–Sdm, S0, and irregular galaxy classes as a function of R/R_25. We find that the largest expected reddening probabilities are in Sab–Sb and Sbc–Sc galaxies, while S0 and irregular galaxies are very dust poor. We present a new approach for determining the absorption-to-reddening ratio R_V using color excess probability functions and find values of R_V = 2.71 ± 1.58 for 21 SNe Ia observed in Sab–Sbp galaxies, and R_V = 1.70 ± 0.38 for 34 SNe Ia observed in Sbc–Scp galaxies.

  11. How enhanced molecular ions in Cold EI improve compound identification by the NIST library.

    PubMed

    Alon, Tal; Amirav, Aviv

    2015-12-15

    Library-based compound identification with electron ionization (EI) mass spectrometry (MS) is a well-established identification method which provides the names and structures of sample compounds up to the isomer level. The library (such as NIST) search algorithm compares different EI mass spectra in the library's database with the measured EI mass spectrum, assigning each of them a similarity score called 'Match' and an overall identification probability. Cold EI, electron ionization of vibrationally cold molecules in supersonic molecular beams, provides mass spectra with all the standard EI fragment ions combined with enhanced Molecular Ions and high-mass fragments. As a result, Cold EI mass spectra differ from those provided by standard EI and tend to yield lower matching scores. However, in most cases, library identification actually improves with Cold EI, as library identification probabilities for the correct library mass spectra increase, despite the lower matching factors. This research examined the way that enhanced molecular ion abundances affect library identification probability and the way that Cold EI mass spectra, which include enhanced molecular ions and high-mass fragment ions, typically improve library identification results. It involved several computer simulations, which incrementally modified the relative abundances of the various ions and analyzed the resulting mass spectra. The simulation results support previous measurements, showing that while enhanced molecular ion and high-mass fragment ions lower the matching factor of the correct library compound, the matching factors of the incorrect library candidates are lowered even more, resulting in a rise in the identification probability for the correct compound. This behavior, which was previously observed by analyzing Cold EI mass spectra, can be explained by the fact that high-mass ions, and especially the molecular ion, characterize a compound more than low-mass ions do and therefore carry more weight in library search identification algorithms. These ions are uniquely abundant in Cold EI, which therefore enables enhanced compound characterization along with improved NIST library based identification. Copyright © 2015 John Wiley & Sons, Ltd.

  12. BINARY FORMATION MECHANISMS: CONSTRAINTS FROM THE COMPANION MASS RATIO DISTRIBUTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reggiani, Maddalena M.; Meyer, Michael R., E-mail: reggiani@phys.ethz.ch

    2011-09-01

    We present a statistical comparison of the mass ratio distribution of companions, as observed in different multiplicity surveys, to the most recent estimate of the single-object mass function. The main goal of our analysis is to test whether or not the observed companion mass ratio distribution (CMRD) as a function of primary star mass and star formation environment is consistent with having been drawn from the field star initial mass function (IMF). We consider samples of companions for M dwarfs, solar-type stars, and intermediate-mass stars, both in the field as well as clusters or associations, and compare them with populations of binaries generated by random pairing from the assumed IMF for a fixed primary mass. With regard to the field we can reject the hypothesis that the CMRD was drawn from the IMF for different primary mass ranges: the observed CMRDs show a larger number of equal-mass systems than predicted by the IMF. This is in agreement with fragmentation theories of binary formation. For the open clusters α Persei and the Pleiades we also reject the IMF random-pairing hypothesis. Concerning young star-forming regions, currently we can rule out a connection between the CMRD and the field IMF in Taurus but not in Chamaeleon I. Larger and different samples are needed to better constrain the result as a function of the environment. We also consider other companion mass functions and we compare them with observations. Moreover the CMRD both in the field and clusters or associations appears to be independent of separation in the range covered by the observations. Combining therefore the CMRDs of M (1-2400 AU) and G (28-1590 AU) primaries in the field and intermediate-mass primary binaries in Sco OB2 (29-1612 AU) for mass ratios, q = M_2/M_1, from 0.2 to 1, we find that the best chi-square fit follows a power law dN/dq ∝ q^β, with β = -0.50 ± 0.29, consistent with previous results. Finally, we note that the Kolmogorov-Smirnov test gives a ~1% probability of the observed CMRD in the Pleiades and Taurus being consistent with that observed for solar-type primaries in the field over comparable primary mass range. This highlights the value of using CMRDs to understand which star formation events contribute most to the field.
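    A sketch of the random-pairing null hypothesis used above, with a simplified single-power-law "IMF" assumed purely for illustration: companion masses are drawn from the IMF below a fixed primary mass, the mass ratio q = M2/M1 is formed, and the predicted distribution is compared with an observed q sample via a Kolmogorov-Smirnov test. The "observed" sample here is synthetic.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(8)

    def sample_imf(n, m_lo=0.1, m_hi=1.0, alpha=2.35):
        """Inverse-transform sampling of dN/dM ~ M^-alpha on [m_lo, m_hi] (toy IMF)."""
        u = rng.random(n)
        a = 1.0 - alpha
        return (m_lo**a + u * (m_hi**a - m_lo**a)) ** (1.0 / a)

    m_primary = 1.0
    q_random = sample_imf(10_000, m_hi=m_primary) / m_primary        # random-pairing prediction
    q_observed = np.clip(rng.beta(3, 1.5, 60), 0.2, 1.0)             # stand-in "observed" CMRD

    stat, pval = stats.ks_2samp(q_observed, q_random)
    print(stat, pval)   # a small p-value rejects drawing companions at random from the IMF
    ```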

  13. Using the tabulated diffusion flamelet model ADF-PCM to simulate a lifted methane-air jet flame

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michel, Jean-Baptiste; Colin, Olivier; Angelberger, Christian

    2009-07-15

    Two formulations of a turbulent combustion model based on the approximated diffusion flame presumed conditional moment (ADF-PCM) approach [J.-B. Michel, O. Colin, D. Veynante, Combust. Flame 152 (2008) 80-99] are presented. The aim is to describe autoignition and combustion in nonpremixed and partially premixed turbulent flames, while accounting for complex chemistry effects at a low computational cost. The starting point is the computation of approximate diffusion flames by solving the flamelet equation for the progress variable only, reading all chemical terms such as reaction rates or mass fractions from an FPI-type look-up table built from autoigniting PSR calculations using complex chemistry. These flamelets are then used to generate a turbulent look-up table where mean values are estimated by integration over presumed probability density functions. Two different versions of ADF-PCM are presented, differing by the probability density functions used to describe the evolution of the stoichiometric scalar dissipation rate: a Dirac function centered on the mean value for the basic ADF-PCM formulation, and a lognormal function for the improved formulation referenced ADF-PCMχ. The turbulent look-up table is read in the CFD code in the same manner as for PCM models. The developed models have been implemented into the compressible RANS CFD code IFP-C3D and applied to the simulation of the Cabra et al. experiment of a lifted methane jet flame [R. Cabra, J. Chen, R. Dibble, A. Karpetis, R. Barlow, Combust. Flame 143 (2005) 491-506]. The ADF-PCMχ model accurately reproduces the experimental lift-off height, while it is underpredicted by the basic ADF-PCM model. The ADF-PCMχ model shows a very satisfactory reproduction of the experimental mean and fluctuating values of major species mass fractions and temperature, while ADF-PCM yields noticeable deviations. Finally, a comparison of the experimental conditional probability densities of the progress variable for a given mixture fraction with model predictions is performed, showing that ADF-PCMχ reproduces the experimentally observed bimodal shape and its dependency on the mixture fraction, whereas ADF-PCM cannot retrieve this shape. (author)
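
    The difference between the two formulations comes down to how the scalar dissipation rate PDF is presumed. The minimal sketch below averages a stand-in flamelet source term over a lognormal dissipation-rate PDF (mean preserved, fixed shape parameter) and compares it with evaluating the term at the mean value, i.e. the Dirac assumption. The source-term shape, quenching value, and shape parameter are invented for illustration and are not the ADF-PCM look-up tables.

    ```python
    import numpy as np

    def lognormal_pdf(chi, chi_mean, sigma=1.0):
        """Lognormal PDF in chi with prescribed mean and shape parameter sigma
        (sigma = 1 is a common modelling choice; values here are illustrative)."""
        mu = np.log(chi_mean) - 0.5 * sigma**2   # enforce <chi> = chi_mean
        return np.exp(-(np.log(chi) - mu) ** 2 / (2 * sigma**2)) / (chi * sigma * np.sqrt(2 * np.pi))

    def omega_flamelet(chi_st):
        """Stand-in for a tabulated flamelet source term omega(chi_st): reaction
        rate decreasing toward quenching at high dissipation (hypothetical shape)."""
        chi_q = 50.0                             # illustrative quenching dissipation rate [1/s]
        return np.maximum(1.0 - chi_st / chi_q, 0.0)

    def mean_source(chi_mean, sigma=1.0, n=2000):
        """Presumed-PDF average: integrate omega(chi) over the lognormal PDF."""
        chi = np.geomspace(1e-3 * chi_mean, 1e3 * chi_mean, n)
        integrand = omega_flamelet(chi) * lognormal_pdf(chi, chi_mean, sigma)
        return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(chi)))

    for chi_bar in (1.0, 10.0, 40.0):
        delta = mean_source(chi_bar) - omega_flamelet(chi_bar)  # PDF average vs. Dirac-at-mean
        print(f"chi_mean = {chi_bar:5.1f}: lognormal - Dirac = {delta:+.3f}")
    ```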

  14. MAPPING THE SHORES OF THE BROWN DWARF DESERT. II. MULTIPLE STAR FORMATION IN TAURUS-AURIGA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kraus, Adam L.; Ireland, Michael J.; Martinache, Frantz

    2011-04-10

    We have conducted a high-resolution imaging study of the Taurus-Auriga star-forming region in order to characterize the primordial outcome of multiple star formation and the extent of the brown dwarf desert. Our survey identified 16 new binary companions to primary stars with masses of 0.25-2.5 M_⊙, raising the total number of binary pairs (including components of high-order multiples) with separations of 3-5000 AU to 90. We find that ~2/3-3/4 of all Taurus members are multiple systems of two or more stars, while the other ~1/4-1/3 appear to have formed as single stars; the distribution of high-order multiplicity suggests that fragmentation into a wide binary has no impact on the subsequent probability that either component will fragment again. The separation distribution for solar-type stars (0.7-2.5 M_⊙) is nearly log-flat over separations of 3-5000 AU, but lower-mass stars (0.25-0.7 M_⊙) show a paucity of binary companions with separations of ≳200 AU. Across this full mass range, companion masses are well described with a linear-flat function; all system mass ratios (q = M_B/M_A) are equally probable, apparently including substellar companions. Our results are broadly consistent with the two expected modes of binary formation (free-fall fragmentation on large scales and disk fragmentation on small scales), but the distributions provide some clues as to the epochs at which the companions are likely to form.

  15. Study of a New CPM Pair 2Mass 14515781-1619034

    NASA Astrophysics Data System (ADS)

    Falcon, Israel Tejera

    2013-04-01

    In this paper I present the results of a study of 2Mass 14515781-1619034 as the components of a common proper motion pair. Because the PPMXL catalog's proper motion data do not provide any information about the secondary star, I deduced its proper motion independently, obtaining similar proper motions for both components. Halbwachs' criterion indicates that this is a CPM system. The criterion of Francisco Rica, which is based on the compatibility of the kinematic function of the equatorial coordinates, indicates that this pair has a 99% probability of being a physical one (Rica, 2007). Other important criteria (Dommanget, 1956; Peter Van De Kamp, 1961; Sinachopoulos, 1992; Close, 2003) also indicate a physical system. From the absolute visual magnitudes of both components, I obtained distance moduli of 7.29 and 7.59, which place the components of the system at distances of 287.1 and 329.6 parsecs. Taking into account the errors in determining the magnitudes, the probability that both components are situated at the same distance is 96%. I suggest that this pair be included in the WDS catalog.
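
    A quick check of the quoted distances from the distance moduli (μ = m - M = 5 log10(d/10 pc), so d = 10^(μ/5 + 1) pc):

    ```python
    # Distance from distance modulus: mu = m - M = 5*log10(d / 10 pc)  =>  d = 10**(mu/5 + 1) pc
    for mu in (7.29, 7.59):
        d = 10 ** (mu / 5 + 1)
        print(f"mu = {mu}: d = {d:.1f} pc")
    # -> roughly 287 pc and 330 pc, matching the values quoted in the abstract
    ```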

  16. On recontamination and directional-bias problems in Monte Carlo simulation of PDF turbulence models. [probability density function

    NASA Technical Reports Server (NTRS)

    Hsu, Andrew T.

    1992-01-01

    Turbulent combustion cannot be simulated adequately by conventional moment-closure turbulence models. The probability density function (PDF) method offers an attractive alternative: in a PDF model, the chemical source terms are closed and do not require additional models. Because the number of computational operations grows only linearly in the Monte Carlo scheme, it is chosen over finite differencing schemes. A grid-dependent Monte Carlo scheme following J.Y. Chen and W. Kollmann has been studied in the present work. It was found that in order to conserve the mass fractions absolutely, one needs to add further restrictions to the scheme, namely α_j + γ_j = α_(j-1) + γ_(j+1). A new algorithm was devised that satisfied this restriction in the case of pure diffusion or uniform flow problems. Using examples, it is shown that absolute conservation can be achieved. Although for non-uniform flows absolute conservation seems impossible, the present scheme has reduced the error considerably.

  17. Tailoring the operative approach for appendicitis to the patient: a prediction model from national surgical quality improvement program data.

    PubMed

    Senekjian, Lara; Nirula, Raminder

    2013-01-01

    Laparoscopic appendectomy (LA) is increasingly being performed in the United States, despite controversy about differences in infectious complication rates compared with open appendectomy (OA). Subpopulations exist in which infectious complication rates, both surgical site and organ space, differ with respect to LA compared with OA. All appendectomies in the National Surgical Quality Improvement Program database were analyzed with respect to surgical site infection (SSI) and organ space infection (OSI). Multivariate logistic regression analysis identified independent predictors of SSI or OSI. Probabilities of SSI or OSI were determined for subpopulations to identify when LA was superior to OA. From 2005 to 2009, there were 61,830 appendectomies performed (77.5% LA), of which 9,998 (16.2%) were complicated (58.7% LA). The risk of SSI was considerably lower for LA in both noncomplicated and complicated appendicitis. Across all ages, body mass index, renal function, and WBCs, LA was associated with a lower probability of SSI. The risk of OSI was considerably greater for LA in both noncomplicated and complicated appendicitis. In complicated appendicitis, OA was associated with a lower probability of OSI in patients with WBC >12 × 10^3 cells/μL. In noncomplicated appendicitis, OA was associated with a lower probability of OSI in patients with a body mass index <37.5 when compared with LA. Subpopulations exist in which OA is superior to LA in terms of OSI; however, SSI is consistently lower in LA patients. Copyright © 2013 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
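
    As a hedged sketch of how a fitted multivariate logistic regression turns patient covariates into an infection probability, the snippet below evaluates P = 1/(1 + exp(-(β0 + β·x))) for two illustrative patients. The covariates and coefficient values are hypothetical placeholders, not the NSQIP model's actual estimates.

    ```python
    import numpy as np

    def infection_probability(x, beta0, beta):
        """Logistic model: P(infection) = 1 / (1 + exp(-(beta0 + beta . x)))."""
        return 1.0 / (1.0 + np.exp(-(beta0 + np.dot(beta, x))))

    # Hypothetical coefficients: intercept; [laparoscopic indicator, WBC, BMI, complicated indicator]
    beta0 = -4.0
    beta = np.array([-0.5, 0.05, 0.02, 1.2])

    # Two illustrative patients: [laparoscopic?, WBC (10^3/uL), BMI, complicated?]
    for label, x in [("LA, WBC 10, BMI 25, simple", [1, 10, 25, 0]),
                     ("OA, WBC 15, BMI 38, complicated", [0, 15, 38, 1])]:
        p = infection_probability(np.array(x, dtype=float), beta0, beta)
        print(f"{label}: P = {p:.3f}")
    ```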

  18. Postfledging survival of European starlings

    USGS Publications Warehouse

    Krementz, D.G.; Nichols, J.D.; Hines, J.E.

    1989-01-01

    We tested the hypotheses that mass at fledging and fledge date within the breeding season affect postfledging survival in European Starlings (Sturnus vulgaris). Nestlings were weighed on day 18 after hatch and tagged with individually identifiable patagial tags. Fledge date was recorded. Marked fledglings were resighted during weekly two-day intensive observation periods for 9 weeks postfledging. Post-fledging survival and sighting probabilities were estimated for each of four groups (early or late fledging by heavy or light fledging mass). Body mass was related to post-fledging survival for birds that fledged early. Results were not clear-cut for relative fledge date, although there was weak evidence that this also influenced survival. Highest survival probability estimates occurred in the EARLY-HEAVY group, while the lowest survival estimate occurred in the LATE-LIGHT group. Sighting probabilities differed significantly among groups, emphasizing the need to estimate and compare survival using models which explicitly incorporate sighting probabilities.

  19. Estimation of submarine mass failure probability from a sequence of deposits with age dates

    USGS Publications Warehouse

    Geist, Eric L.; Chaytor, Jason D.; Parsons, Thomas E.; ten Brink, Uri S.

    2013-01-01

    The empirical probability of submarine mass failure is quantified from a sequence of dated mass-transport deposits. Several different techniques are described to estimate the parameters for a suite of candidate probability models. The techniques, previously developed for analyzing paleoseismic data, include maximum likelihood and Type II (Bayesian) maximum likelihood methods derived from renewal process theory and Monte Carlo methods. The estimated mean return time from these methods, unlike estimates from a simple arithmetic mean of the center age dates and standard likelihood methods, includes the effects of age-dating uncertainty and of open time intervals before the first and after the last event. The likelihood techniques are evaluated using Akaike’s Information Criterion (AIC) and Akaike’s Bayesian Information Criterion (ABIC) to select the optimal model. The techniques are applied to mass transport deposits recorded in two Integrated Ocean Drilling Program (IODP) drill sites located in the Ursa Basin, northern Gulf of Mexico. Dates of the deposits were constrained by regional bio- and magnetostratigraphy from a previous study. Results of the analysis indicate that submarine mass failures in this location occur primarily according to a Poisson process in which failures are independent and return times follow an exponential distribution. However, some of the model results suggest that submarine mass failures may occur quasiperiodically at one of the sites (U1324). The suite of techniques described in this study provides quantitative probability estimates of submarine mass failure occurrence, for any number of deposits and age uncertainty distributions.
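
    A minimal sketch of the simplest candidate model mentioned above: a maximum-likelihood fit of an exponential return-time distribution to inter-event times, scored with AIC. Age-dating uncertainty and the open intervals before the first and after the last event, which the study's full method handles, are ignored here, and the deposit ages below are made up.

    ```python
    import numpy as np

    # Hypothetical deposit ages in ka (thousands of years before present)
    ages = np.array([5.0, 12.0, 21.0, 24.0, 38.0, 46.0, 60.0])
    intervals = np.diff(ages)                  # inter-event (return) times

    # Exponential (Poisson-process) model: the MLE of the rate is 1 / mean interval
    rate_hat = 1.0 / intervals.mean()
    log_like = np.sum(np.log(rate_hat) - rate_hat * intervals)
    aic = 2 * 1 - 2 * log_like                 # one free parameter

    print(f"mean return time = {1 / rate_hat:.1f} ka, AIC = {aic:.2f}")
    ```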

  20. A Probabilistic Mass Estimation Algorithm for a Novel 7- Channel Capacitive Sample Verification Sensor

    NASA Technical Reports Server (NTRS)

    Wolf, Michael

    2012-01-01

    A document describes an algorithm created to estimate the mass placed on a sample verification sensor (SVS) designed for lunar or planetary robotic sample return missions. A novel SVS measures the capacitance between a rigid bottom plate and an elastic top membrane in seven locations. As additional sample material (soil and/or small rocks) is placed on the top membrane, the deformation of the membrane increases the capacitance. The mass estimation algorithm addresses both the calibration of each SVS channel, and also addresses how to combine the capacitances read from each of the seven channels into a single mass estimate. The probabilistic approach combines the channels according to the variance observed during the training phase, and provides not only the mass estimate, but also a value for the certainty of the estimate. SVS capacitance data is collected for known masses under a wide variety of possible loading scenarios, though in all cases, the distribution of sample within the canister is expected to be approximately uniform. A capacitance-vs-mass curve is fitted to this data, and is subsequently used to determine the mass estimate for a single channel's capacitance reading during the measurement phase. This results in seven different mass estimates, one for each SVS channel. Moreover, the variance of the calibration data is used to place a Gaussian probability distribution function (pdf) around this mass estimate. To blend these seven estimates, the seven pdfs are combined into a single Gaussian distribution function, providing the final mean and variance of the estimate. This blending technique essentially takes the final estimate as an average of the estimates of the seven channels, weighted by the inverse of each channel's variance.
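
    A minimal sketch of the blending step described above: each channel's Gaussian mass estimate is combined by inverse-variance weighting into a single mean and variance. The per-channel numbers below are illustrative, not SVS calibration data.

    ```python
    import numpy as np

    # Hypothetical per-channel mass estimates (g) and calibration variances (g^2)
    means = np.array([102.0, 98.5, 101.2, 99.8, 103.1, 97.9, 100.4])
    variances = np.array([4.0, 2.5, 3.0, 5.0, 6.0, 2.0, 3.5])

    # Product of Gaussians: weights are inverse variances
    weights = 1.0 / variances
    fused_var = 1.0 / weights.sum()
    fused_mean = fused_var * np.sum(weights * means)

    print(f"fused mass = {fused_mean:.2f} g, 1-sigma = {np.sqrt(fused_var):.2f} g")
    ```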

  1. Effects of high-intensity exercise and protein supplement on muscle mass in ADL dependent older people with and without malnutrition: a randomized controlled trial.

    PubMed

    Carlsson, M; Littbrand, H; Gustafson, Y; Lundin-Olsson, L; Lindelöf, N; Rosendahl, E; Håglin, L

    2011-08-01

    Loss of muscle mass is common among old people living in institutions but trials that evaluate interventions aimed at increasing the muscle mass are lacking. Objective, participants and intervention: This randomized controlled trial was performed to evaluate the effect of a high-intensity functional exercise program and a timed protein-enriched drink on muscle mass in 177 people aged 65 to 99 with severe physical or cognitive impairments, and living in residential care facilities. Three-month high-intensity exercise was compared with a control activity and a protein-enriched drink was compared with a placebo drink. A bioelectrical impedance spectrometer (BIS) was used in the evaluation. The amount of muscle mass and body weight (BW) were followed up at three and six months and analyzed in a 2 x 2 factorial ANCOVA, using the intention-to-treat principle, and controlling for baseline values. At the 3-month follow-up there were no differences in muscle mass and BW between the exercise and the control group or between the protein and the placebo group. No interaction effects were seen between the exercise and nutritional interventions. Long-term negative effects on muscle mass and BW were seen in the exercise group at the 6-month follow-up. A three-month high-intensity functional exercise program did not increase the amount of muscle mass, and an intake of a protein-enriched drink immediately after the exercise did not induce any additional effect on muscle mass. There were negative long-term effects on muscle mass and BW, indicating that it is probably necessary to compensate for an increased energy demand when offering a high-intensity exercise program.

  2. Deep search for companions to probable young brown dwarfs. VLT/NACO adaptive optics imaging using IR wavefront sensing

    NASA Astrophysics Data System (ADS)

    Chauvin, G.; Faherty, J.; Boccaletti, A.; Cruz, K.; Lagrange, A.-M.; Zuckerman, B.; Bessell, M. S.; Beuzit, J.-L.; Bonnefoy, M.; Dumas, C.; Lowrance, P.; Mouillet, D.; Song, I.

    2012-12-01

    Aims: We have obtained high contrast images of four nearby, faint, and very low mass objects 2MASS J04351455-1414468, SDSS J044337.61+000205.1, 2MASS J06085283-2753583 and 2MASS J06524851-5741376 (hereafter 2MASS0435-14, SDSS0443+00, 2MASS0608-27 and 2MASS0652-57), identified in the field as probable isolated young brown dwarfs. Our goal was to search for binary companions down to the planetary mass regime. Methods: We used the NAOS-CONICA adaptive optics instrument (NACO) and its unique capability to sense the wavefront in the near-infrared to acquire sharp images of the four systems in Ks, with a field of view of 28'' × 28''. Additional J and L' imaging and follow-up observations at a second epoch were obtained for 2MASS0652-57. Results: With a typical contrast ΔKs = 4.0-7.0 mag, our observations are sensitive down to the planetary mass regime considering a minimum age of 10 to 120 Myr for these systems. No additional point sources are detected in the environment of 2MASS0435-14, SDSS0443+00 and 2MASS0608-27 between 0.1-12'' (i.e. about 2 to 250 AU at 20 pc). 2MASS0652-57 is resolved as a ~230 mas binary. Follow-up observations reject a background contaminant, resolve the orbital motion of the pair, and confirm with high confidence that the system is physically bound. The J, Ks and L' photometry suggest a q ~ 0.7-0.8 mass ratio binary with a probable semi-major axis of 5-6 AU. Among the four systems, 2MASS0652-57 is probably the least constrained in terms of age determination. Further analysis would be necessary to confirm its youth. It would then be interesting to determine its orbital and physical properties to derive the system's dynamical mass and to test evolutionary model predictions. Based on observations collected at the European Southern Observatory, Chile (ESO programmes 076.C-0554(A), 076.C-0554(B) and 085.C-0257(A)).

  3. The Panchromatic Hubble Andromeda Treasury. IV. A Probabilistic Approach to Inferring the High-mass Stellar Initial Mass Function and Other Power-law Functions

    NASA Astrophysics Data System (ADS)

    Weisz, Daniel R.; Fouesneau, Morgan; Hogg, David W.; Rix, Hans-Walter; Dolphin, Andrew E.; Dalcanton, Julianne J.; Foreman-Mackey, Daniel T.; Lang, Dustin; Johnson, L. Clifton; Beerman, Lori C.; Bell, Eric F.; Gordon, Karl D.; Gouliermis, Dimitrios; Kalirai, Jason S.; Skillman, Evan D.; Williams, Benjamin F.

    2013-01-01

    We present a probabilistic approach for inferring the parameters of the present-day power-law stellar mass function (MF) of a resolved young star cluster. This technique (1) fully exploits the information content of a given data set; (2) can account for observational uncertainties in a straightforward way; (3) assigns meaningful uncertainties to the inferred parameters; (4) avoids the pitfalls associated with binning data; and (5) can be applied to virtually any resolved young cluster, laying the groundwork for a systematic study of the high-mass stellar MF (M ≳ 1 M_⊙). Using simulated clusters and Markov Chain Monte Carlo sampling of the probability distribution functions, we show that estimates of the MF slope, α, are unbiased and that the uncertainty, Δα, depends primarily on the number of observed stars and on the range of stellar masses they span, assuming that the uncertainties on individual masses and the completeness are both well characterized. Using idealized mock data, we compute the theoretical precision, i.e., lower limits, on α, and provide an analytic approximation for Δα as a function of the observed number of stars and mass range. Comparison with literature studies shows that ~3/4 of quoted uncertainties are smaller than the theoretical lower limit. By correcting these uncertainties to the theoretical lower limits, we find that the literature studies yield ⟨α⟩ = 2.46, with a 1σ dispersion of 0.35 dex. We verify that it is impossible for a power-law MF to obtain meaningful constraints on the upper mass limit of the initial mass function, beyond the lower bound of the most massive star actually observed. We show that avoiding substantial biases in the MF slope requires (1) including the MF as a prior when deriving individual stellar mass estimates, (2) modeling the uncertainties in the individual stellar masses, and (3) fully characterizing and then explicitly modeling the completeness for stars of a given mass. The precisions on MF slope recovery in this paper are lower limits, as we do not explicitly consider all possible sources of uncertainty, including dynamical effects (e.g., mass segregation), unresolved binaries, and non-coeval populations. We briefly discuss how each of these effects can be incorporated into extensions of the present framework. Finally, we emphasize that the technique and lessons learned are applicable to more general problems involving power-law fitting. Based on observations made with the NASA/ESA Hubble Space Telescope, obtained from the Data Archive at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.
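
    A minimal sketch of the core inference problem, assuming perfect masses and completeness (which the paper explicitly warns must be modeled in practice): draw a mock cluster from a truncated power-law MF and recover the slope by maximum likelihood. The mass limits, sample size, and true slope are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(0)
    m_lo, m_hi, alpha_true = 1.0, 50.0, 2.35

    # Draw a mock cluster from dN/dm ∝ m^(-alpha_true) by inverse-transform sampling
    g = 1.0 - alpha_true
    u = rng.uniform(size=300)
    masses = (m_lo**g + u * (m_hi**g - m_lo**g)) ** (1.0 / g)

    def neg_log_like(alpha):
        """Negative log-likelihood of a truncated power law dN/dm ∝ m^-alpha on [m_lo, m_hi]."""
        g = 1.0 - alpha
        norm = g / (m_hi**g - m_lo**g)          # normalization of the PDF
        return -(len(masses) * np.log(norm) - alpha * np.log(masses).sum())

    res = minimize_scalar(neg_log_like, bounds=(1.05, 4.0), method="bounded")
    print(f"recovered slope alpha = {res.x:.2f} (input {alpha_true})")
    ```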

  4. The paradox of the modern mass media: probably the major source of social cohesion in liberal democracies, even though its content is often socially divisive.

    PubMed

    Charlton, Bruce G

    2006-01-01

    The modern mass media (MM) is often regarded as a mixture of a trivial waste of time and resources and a dangerously subversive system tending to promote social division and community breakdown. But these negative evaluations are difficult to square with the fact that those countries with the largest mass media include the most modernized and powerful nations. It seems more plausible that the MM is serving some useful - perhaps vital - function. I suggest that modern mass media function as the main source of social cohesion in liberal democracies. The paradox is that this cohesive function is sustained in a context of frequently divisive media content. This media function evolved because modern MM produce an excess of media communications in a context of consumer choice which generates competition for public attention both within- and between-media. Competition has led the media to become increasingly specialized at gaining and retaining public attention. Social cohesion is the consequence of the mass media continually drawing public attention to itself, and to the extremely large, internally complex and interconnected nature of the MM system. The means by which attention is attracted are almost arbitrary, encompassing both novelty and familiarity and evoking a wide range of emotions both positive and negative. Driven to seek competitive advantage, modern mass media produce a wide range of material to cater to a vast range of interests; thereby engaging a great variety of individuals and social groupings. The consequence is that media content is typically self-contradictory and includes content which is offensive and potentially divisive; since what grabs the interest of some may offend or repel others. For instance, young men must be socially engaged, since they are potentially the most violent social group, yet the interests of young men include material that the majority of the population would find excessively aggressive, disrespectful, subversive or sexual. If the mass media is to perform effectively its crucial function of enabling social cohesion among a diverse and differentiated population, then modern liberal democracies need a broad margin of toleration and a widespread psychological capacity to endure dissent and disagreement.

  5. Learning the dynamics of objects by optimal functional interpolation.

    PubMed

    Ahn, Jong-Hoon; Kim, In Young

    2012-09-01

    Many areas of science and engineering rely on functional data and their numerical analysis. The need to analyze time-varying functional data raises the general problem of interpolation, that is, how to learn a smooth time evolution from a finite number of observations. Here, we introduce optimal functional interpolation (OFI), a numerical algorithm that interpolates functional data over time. Unlike the usual interpolation or learning algorithms, the OFI algorithm obeys the continuity equation, which describes the transport of some types of conserved quantities, and its implementation shows smooth, continuous flows of quantities. Without the need to take into account equations of motion such as the Navier-Stokes equation or the diffusion equation, OFI is capable of learning the dynamics of objects such as those represented by mass, image intensity, particle concentration, heat, spectral density, and probability density.

  6. Do key dimensions of seed and seedling functional trait variation capture variation in recruitment probability?

    PubMed

    Larson, Julie E; Sheley, Roger L; Hardegree, Stuart P; Doescher, Paul S; James, Jeremy J

    2016-05-01

    Seedling recruitment is a critical driver of population dynamics and community assembly, yet we know little about functional traits that define different recruitment strategies. For the first time, we examined whether trait relatedness across germination and seedling stages allows the identification of general recruitment strategies which share core functional attributes and also correspond to recruitment outcomes in applied settings. We measured six seed and eight seedling traits (lab- and field-collected, respectively) for 47 varieties of dryland grasses and used principal component analysis (PCA) and cluster analysis to identify major dimensions of trait variation and to isolate trait-based recruitment groups, respectively. PCA highlighted some links between seed and seedling traits, suggesting that relative growth rate and root elongation rate are simultaneously but independently associated with seed mass and initial root mass (first axis), and with leaf dry matter content, specific leaf area, coleoptile tissue density and germination rate (second axis). Third and fourth axes captured separate tradeoffs between hydrothermal time and base water potential for germination, and between specific root length and root mass ratio, respectively. Cluster analysis separated six recruitment types along dimensions of germination and growth rates, but classifications did not correspond to patterns of germination, emergence or recruitment in the field under either of two watering treatments. Thus, while we have begun to identify major threads of functional variation across seed and seedling stages, our understanding of how this variation influences demographic processes-particularly germination and emergence-remains a key gap in functional ecology.
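
    As a hedged sketch of the analysis pipeline the abstract describes, the snippet below standardizes a variety-by-trait matrix, extracts principal axes, and groups varieties in the reduced trait space. The trait matrix is random stand-in data, and KMeans is used only as a generic stand-in for the study's cluster analysis.

    ```python
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)
    X = rng.normal(size=(47, 14))          # 47 varieties x 14 seed/seedling traits (synthetic)

    # Standardize traits, then find the major dimensions of trait variation
    X_std = StandardScaler().fit_transform(X)
    pca = PCA(n_components=4).fit(X_std)
    scores = pca.transform(X_std)
    print("variance explained by first four axes:", np.round(pca.explained_variance_ratio_, 2))

    # Group varieties into trait-based recruitment types (six clusters, as in the study)
    labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(scores)
    print("varieties per cluster:", np.bincount(labels))
    ```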

  7. IN VITRO QUANTIFICATION OF THE SIZE DISTRIBUTION OF INTRASACCULAR VOIDS LEFT AFTER ENDOVASCULAR COILING OF CEREBRAL ANEURYSMS.

    PubMed

    Sadasivan, Chander; Brownstein, Jeremy; Patel, Bhumika; Dholakia, Ronak; Santore, Joseph; Al-Mufti, Fawaz; Puig, Enrique; Rakian, Audrey; Fernandez-Prada, Kenneth D; Elhammady, Mohamed S; Farhat, Hamad; Fiorella, David J; Woo, Henry H; Aziz-Sultan, Mohammad A; Lieber, Baruch B

    2013-03-01

    Endovascular coiling of cerebral aneurysms remains limited by coil compaction and associated recanalization. Recent coil designs which effect higher packing densities may be far from optimal because hemodynamic forces causing compaction are not well understood since detailed data regarding the location and distribution of coil masses are unavailable. We present an in vitro methodology to characterize coil masses deployed within aneurysms by quantifying intra-aneurysmal void spaces. Eight identical aneurysms were packed with coils by both balloon- and stent-assist techniques. The samples were embedded, sequentially sectioned and imaged. Empty spaces between the coils were numerically filled with circles (2D) in the planar images and with spheres (3D) in the three-dimensional composite images. The 2D and 3D void size histograms were analyzed for local variations and by fitting theoretical probability distribution functions. Balloon-assist packing densities (31±2%) were lower (p = 0.04) than the stent-assist group (40±7%). The maximum and average 2D and 3D void sizes were higher (p = 0.03 to 0.05) in the balloon-assist group as compared to the stent-assist group. None of the void size histograms were normally distributed; theoretical probability distribution fits suggest that the histograms are most probably exponentially distributed with decay constants of 6-10 mm. Significant (p ≤ 0.001 to p = 0.03) spatial trends were noted with the void sizes but correlation coefficients were generally low (absolute r ≤ 0.35). The methodology we present can provide valuable input data for numerical calculations of hemodynamic forces impinging on intra-aneurysmal coil masses and be used to compare and optimize coil configurations as well as coiling techniques.

  8. Evidence for a mass-dependent AGN Eddington ratio distribution via the flat relationship between SFR and AGN luminosity

    NASA Astrophysics Data System (ADS)

    Bernhard, E.; Mullaney, J. R.; Aird, J.; Hickox, R. C.; Jones, M. L.; Stanley, F.; Grimmett, L. P.; Daddi, E.

    2018-05-01

    The lack of a strong correlation between AGN X-ray luminosity (LX; a proxy for AGN power) and the star formation rate (SFR) of their host galaxies has recently been attributed to stochastic AGN variability. Studies using population synthesis models have incorporated this by assuming a broad, universal (i.e. does not depend on the host galaxy properties) probability distribution for AGN specific X-ray luminosities (i.e. the ratio of LX to host stellar mass; a common proxy for Eddington ratio). However, recent studies have demonstrated that this universal Eddington ratio distribution fails to reproduce the observed X-ray luminosity functions beyond z ˜ 1.2. Furthermore, empirical studies have recently shown that the Eddington ratio distribution may instead depend upon host galaxy properties, such as SFR and/or stellar mass. To investigate this further, we develop a population synthesis model in which the Eddington ratio distribution is different for star-forming and quiescent host galaxies. We show that, although this model is able to reproduce the observed X-ray luminosity functions out to z ˜ 2, it fails to simultaneously reproduce the observed flat relationship between SFR and X-ray luminosity. We can solve this, however, by incorporating a mass dependency in the AGN Eddington ratio distribution for star-forming host galaxies. Overall, our models indicate that a relative suppression of low Eddington ratios (λEdd ≲ 0.1) in lower mass galaxies (M* ≲ 10^10-11 M_⊙) is required to reproduce both the observed X-ray luminosity functions and the observed flat SFR/X-ray relationship.

  9. TENSOR DECOMPOSITIONS AND SPARSE LOG-LINEAR MODELS

    PubMed Central

    Johndrow, James E.; Bhattacharya, Anirban; Dunson, David B.

    2017-01-01

    Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. We derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions. PMID:29332971

  10. ON THE BIRTH MASSES OF THE ANCIENT GLOBULAR CLUSTERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Conroy, Charlie; Department of Astronomy and Astrophysics, University of California, Santa Cruz, CA

    All globular clusters (GCs) studied to date show evidence for internal (star-to-star) variation in their light-element abundances (including Li, C, N, O, F, Na, Mg, Al, and probably He). These variations have been interpreted as evidence for multiple star formation episodes within GCs, with secondary episodes fueled, at least in part, by the ejecta of asymptotic giant branch (AGB) stars from a first generation of stars. A major puzzle emerging from this otherwise plausible scenario is that the fraction of stars associated with the second episode of star formation is observed to be much larger than expected for a standard initial mass function. The present work investigates this tension by modeling the observed anti-correlation between [Na/Fe] and [O/Fe] for 20 Galactic GCs. If the abundance pattern of the retained AGB ejecta does not depend on GC mass at fixed [Fe/H], then a strong correlation is found between the fraction of current GC stellar mass composed of pure AGB ejecta, f_p, and GC mass. This fraction varies from 0.20 at low masses (10^4.5 M_⊙) to 0.45 at high masses (10^6.5 M_⊙). The fraction of mass associated with pure AGB ejecta is directly related to the total mass of the cluster at birth; the ratio between the initial and present mass in stars can therefore be derived. Assuming a star formation efficiency of 50%, the observed Na-O anti-correlations imply that GCs were at least 10-20 times more massive at birth, a conclusion that is in qualitative agreement with previous work. These factors are lower limits because any mass-loss mechanism that removes first- and second-generation stars equally will leave f_p unchanged. The mass dependence of f_p probably arises because lower mass GCs are unable to retain all of the AGB ejecta from the first stellar generation. Recent observations of elemental abundances in intermediate-age Large Magellanic Cloud clusters are re-interpreted and shown to be consistent with this basic scenario. The small scatter in f_p at fixed GC mass argues strongly that the process responsible for the large mass loss is internal to GCs. A satisfactory explanation of these trends is currently lacking.

  11. Modeling of turbulent chemical reaction

    NASA Technical Reports Server (NTRS)

    Chen, J.-Y.

    1995-01-01

    Viewgraphs are presented on modeling turbulent reacting flows, regimes of turbulent combustion, regimes of premixed and regimes of non-premixed turbulent combustion, chemical closure models, flamelet model, conditional moment closure (CMC), NO(x) emissions from turbulent H2 jet flames, probability density function (PDF), departures from chemical equilibrium, mixing models for PDF methods, comparison of predicted and measured H2O mass fractions in turbulent nonpremixed jet flames, experimental evidence of preferential diffusion in turbulent jet flames, and computation of turbulent reacting flows.

  12. Galaxy Environment in the 3D-HST Fields: Witnessing the Onset of Satellite Quenching at z ˜ 1-2

    NASA Astrophysics Data System (ADS)

    Fossati, M.; Wilman, D. J.; Mendel, J. T.; Saglia, R. P.; Galametz, A.; Beifiori, A.; Bender, R.; Chan, J. C. C.; Fabricius, M.; Bandara, K.; Brammer, G. B.; Davies, R.; Förster Schreiber, N. M.; Genzel, R.; Hartley, W.; Kulkarni, S. K.; Lang, P.; Momcheva, I. G.; Nelson, E. J.; Skelton, R.; Tacconi, L. J.; Tadaki, K.; Übler, H.; van Dokkum, P. G.; Wisnioski, E.; Whitaker, K. E.; Wuyts, E.; Wuyts, S.

    2017-02-01

    We make publicly available a catalog of calibrated environmental measures for galaxies in the five 3D-Hubble Space Telescope (HST)/CANDELS deep fields. Leveraging the spectroscopic and grism redshifts from the 3D-HST survey, multiwavelength photometry from CANDELS, and wider field public data for edge corrections, we derive densities in fixed apertures to characterize the environment of galaxies brighter than JH_140 < 24 mag in the redshift range 0.5 < z < 3.0. By linking observed galaxies to a mock sample, selected to reproduce the 3D-HST sample selection and redshift accuracy, each 3D-HST galaxy is assigned a probability density function of the host halo mass, and a probability that it is a central or a satellite galaxy. The same procedure is applied to a z = 0 sample selected from Sloan Digital Sky Survey. We compute the fraction of passive central and satellite galaxies as a function of stellar and halo mass, and redshift, and then derive the fraction of galaxies that were quenched by environment-specific processes. Using the mock sample, we estimate that the timescale for satellite quenching is t_quench ~ 2-5 Gyr; it is longer at lower stellar mass or lower redshift, but remarkably independent of halo mass. This indicates that, in the range of environments commonly found within the 3D-HST sample (M_h ≲ 10^14 M_⊙), satellites are quenched by exhaustion of their gas reservoir in the absence of cosmological accretion. We find that the quenching times can be separated into a delay phase, during which satellite galaxies behave similarly to centrals at fixed stellar mass, and a phase where the star formation rate drops rapidly (τ_f ~ 0.4-0.6 Gyr), as shown previously at z = 0. We conclude that this scenario requires satellite galaxies to retain a large reservoir of multi-phase gas upon accretion, even at high redshift, and that this gas sustains star formation for the long quenching times observed.

  13. Brief communication: On direct impact probability of landslides on vehicles

    NASA Astrophysics Data System (ADS)

    Nicolet, Pierrick; Jaboyedoff, Michel; Cloutier, Catherine; Crosta, Giovanni B.; Lévy, Sébastien

    2016-04-01

    When calculating the risk of railway or road users being killed by a natural hazard, one has to calculate a temporal-spatial probability, i.e. the probability of a vehicle being in the path of the falling mass when the mass falls, or the expected number of affected vehicles in case of such an event. To calculate this, different methods are used in the literature, and, most of the time, they consider only the dimensions of the falling mass or the dimensions of the vehicles. Some authors do however consider both dimensions at the same time, and the use of their approach is recommended. Finally, a method considering an impact on the front of the vehicle is discussed.

  14. Brief Communication: On direct impact probability of landslides on vehicles

    NASA Astrophysics Data System (ADS)

    Nicolet, P.; Jaboyedoff, M.; Cloutier, C.; Crosta, G. B.; Lévy, S.

    2015-12-01

    When calculating the risk of railway or road users being killed by a natural hazard, one has to calculate a "spatio-temporal probability", i.e. the probability for a vehicle to be in the path of the falling mass when the mass falls, or the expected number of affected vehicles in case of an event. To calculate this, different methods are used in the literature, and, most of the time, they consider only the dimensions of the falling mass or the dimensions of the vehicles. Some authors do however consider both dimensions at the same time, and the use of their approach is recommended. Finally, a method that additionally considers an impact on the front of the vehicle is discussed.
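
    A minimal sketch of the kind of spatio-temporal probability both notes discuss, using the approach that accounts for both the falling-mass width and the vehicle length. The formula and numbers are a simplified illustration, not necessarily the recommended method's exact expression.

    ```python
    def expected_vehicles_hit(traffic_per_day, speed_kmh, vehicle_length_m, mass_width_m):
        """Expected number of vehicles occupying the impacted road stretch at a random
        instant, accounting for both the vehicle length and the falling-mass width."""
        vehicles_per_hour = traffic_per_day / 24.0
        affected_length_m = vehicle_length_m + mass_width_m
        return vehicles_per_hour * affected_length_m / (speed_kmh * 1000.0)

    # Illustrative values: 5000 vehicles/day, 80 km/h, 4.5 m cars, 20 m wide rockslide
    n = expected_vehicles_hit(5000, 80.0, 4.5, 20.0)
    print(f"expected vehicles in the path: {n:.4f}")  # also ~ the probability when << 1
    ```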

  15. Supermassive black holes without super-Eddington accretion

    NASA Astrophysics Data System (ADS)

    Christian, Damian Joseph; Kim, Matt I.; Garofalo, David; D'Avanzo, Jaclyn; Torres, John

    2017-08-01

    We explore the X-ray luminosity function at high redshift for active galactic nuclei using a simplified model for mass build-up that combines mergers and mass accretion in the gap paradigm (Garofalo et al. 2010). Using a retrograde-dominated configuration we find an interesting low-probability channel for the growth of one-billion-solar-mass black holes within hundreds of millions of years of the big bang without appealing to super-Eddington accretion (Kim et al. 2016). This result is made more compelling by the connection between this channel and an end product involving active galaxies with FRI radio morphology but weaker jet powers in mildly sub-Eddington accretion regimes. We will discuss the connection between the unexplained paucity of a given family of AGNs and the rapid growth of supermassive black holes, two heretofore seemingly unrelated aspects of the physics of AGNs whose link will help us further understand their properties and evolution.

  16. Relative likelihood for life as a function of cosmic time

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loeb, Abraham; Batista, Rafael A.; Sloan, David, E-mail: aloeb@cfa.harvard.edu, E-mail: rafael.alvesbatista@physics.ox.ac.uk, E-mail: david.sloan@physics.ox.ac.uk

    2016-08-01

    Is life most likely to emerge at the present cosmic time near a star like the Sun? We address this question by calculating the relative formation probability per unit time of habitable Earth-like planets within a fixed comoving volume of the Universe, dP(t)/dt, starting from the first stars and continuing to the distant cosmic future. We conservatively restrict our attention to the context of "life as we know it" and the standard cosmological model, ΛCDM. We find that unless habitability around low mass stars is suppressed, life is most likely to exist near ~0.1 M_⊙ stars ten trillion years from now. Spectroscopic searches for biosignatures in the atmospheres of transiting Earth-mass planets around low mass stars will determine whether present-day life is indeed premature or typical from a cosmic perspective.

  17. Stratified turbulent Bunsen flames: flame surface analysis and flame surface density modelling

    NASA Astrophysics Data System (ADS)

    Ramaekers, W. J. S.; van Oijen, J. A.; de Goey, L. P. H.

    2012-12-01

    In this paper it is investigated whether the Flame Surface Density (FSD) model, developed for turbulent premixed combustion, is also applicable to stratified flames. Direct Numerical Simulations (DNS) of turbulent stratified Bunsen flames have been carried out, using the Flamelet Generated Manifold (FGM) reduction method for reaction kinetics. Before examining the suitability of the FSD model, flame surfaces are characterized in terms of thickness, curvature and stratification. All flames are in the Thin Reaction Zones regime, and the maximum equivalence ratio range covers 0.1⩽φ⩽1.3. For all flames, local flame thicknesses correspond very well to those observed in stretchless, steady premixed flamelets. Extracted curvature radii and mixing length scales are significantly larger than the flame thickness, implying that the stratified flames all burn in a premixed mode. The remaining challenge is accounting for the large variation in (subfilter) mass burning rate. In this contribution, the FSD model is proven to be applicable for Large Eddy Simulations (LES) of stratified flames for the equivalence ratio range 0.1⩽φ⩽1.3. Subfilter mass burning rate variations are taken into account by a subfilter Probability Density Function (PDF) for the mixture fraction, on which the mass burning rate directly depends. A priori analysis points out that for small stratifications (0.4⩽φ⩽1.0), the replacement of the subfilter PDF (obtained from DNS data) by the corresponding Dirac function is appropriate. Integration of the Dirac function with the mass burning rate m = m(φ) can then adequately model the filtered mass burning rate obtained from filtered DNS data. For a larger stratification (0.1⩽φ⩽1.3), and filter widths up to ten flame thicknesses, a β-function for the subfilter PDF yields substantially better predictions than a Dirac function. Finally, inclusion of a simple algebraic model for the FSD resulted only in small additional deviations from DNS data, thereby rendering this approach promising for application in LES.
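
    A minimal sketch of the subfilter treatment described above: the mass burning rate is averaged over a β-PDF of mixture fraction and compared with simply evaluating it at the mean (the Dirac assumption). The burning-rate relation below is a made-up placeholder peaked near stoichiometry, not a flamelet table, and the mean/variance values are illustrative.

    ```python
    import numpy as np
    from scipy import stats

    def burning_rate(Z):
        """Placeholder mass burning rate vs. mixture fraction, peaked near stoichiometry
        (a stand-in for the flamelet relation m = m(phi))."""
        Z_st = 0.055
        return np.exp(-0.5 * ((Z - Z_st) / 0.02) ** 2)

    def filtered_rate_beta(Z_mean, Z_var, n=2000):
        """Subfilter average of the burning rate over a beta PDF with given mean and variance."""
        shape = Z_mean * (1 - Z_mean) / Z_var - 1.0
        a, b = Z_mean * shape, (1 - Z_mean) * shape
        Z = np.linspace(1e-6, 1 - 1e-6, n)
        integrand = burning_rate(Z) * stats.beta.pdf(Z, a, b)
        return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(Z)))

    Z_mean, Z_var = 0.06, 1e-3
    print(f"beta-PDF average : {filtered_rate_beta(Z_mean, Z_var):.3f}")
    print(f"Dirac (at mean)  : {burning_rate(Z_mean):.3f}")
    ```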

  18. 4U 1608-52: A Cornerstone in Our Understanding of the kHz QPOs

    NASA Astrophysics Data System (ADS)

    Mendez, Mariano

    We propose a series of ASM-triggered TOO observations of the atoll source 4U 1608-52 in outburst for a total of 450 ksec. These triggers are planned to maximize the probability of observing kHz QPOs and type-I X-ray bursts in this source. This is one of only 2 sources where the separation between the 2 simultaneous kHz QPOs varies significantly as a function of mass accretion rate. The detection of near-coherent oscillations during the bursts will provide a strong clue to the nature of the kHz QPOs in LMXBs, as it would make clear at which mass accretion level, if any, the kHz peak separation does approach the inferred spin rate.

  19. Predicting coral bleaching in response to environmental stressors using 8 years of global-scale data.

    PubMed

    Yee, Susan Harrell; Barron, Mace G

    2010-02-01

    Coral reefs have experienced extensive mortality over the past few decades as a result of temperature-induced mass bleaching events. There is an increasing realization that other environmental factors, including water mixing, solar radiation, water depth, and water clarity, interact with temperature to either exacerbate bleaching or protect coral from mass bleaching. The relative contribution of these factors to variability in mass bleaching at a global scale has not been quantified, but can provide insights when making large-scale predictions of mass bleaching events. Using data from 708 bleaching surveys across the globe, a framework was developed to predict the probability of moderate or severe bleaching as a function of key environmental variables derived from global-scale remote-sensing data. The ability of models to explain spatial and temporal variability in mass bleaching events was quantified. Results indicated approximately 20% improved accuracy of predictions of bleaching when solar radiation and water mixing, in addition to elevated temperature, were incorporated into models, but predictive accuracy was variable among regions. Results provide insights into the effects of environmental parameters on bleaching at a global scale.

  20. Ascidian and amphioxus Adh genes correlate functional and molecular features of the ADH family expansion during vertebrate evolution.

    PubMed

    Cañestro, Cristian; Albalat, Ricard; Hjelmqvist, Lars; Godoy, Laura; Jörnvall, Hans; Gonzàlez-Duarte, Roser

    2002-01-01

    The alcohol dehydrogenase (ADH) family has evolved into at least eight ADH classes during vertebrate evolution. We have characterized three prevertebrate forms of the parent enzyme of this family, including one from an urochordate (Ciona intestinalis) and two from cephalochordates (Branchiostoma floridae and Branchiostoma lanceolatum). An evolutionary analysis of the family was performed by gathering data from protein and gene structures, exon-intron distribution, and functional features through chordate lines. Our data strongly support that the ADH family expansion occurred 500 million years ago, after the cephalochordate/vertebrate split, probably in the gnathostome subphylum line of the vertebrates. Evolutionary rates differ between the ancestral form, ADH3 (glutathione-dependent formaldehyde dehydrogenase), and the emerging forms, including the classical alcohol dehydrogenase, ADH1, which has an evolutionary rate 3.6-fold that of the ADH3 form. Phylogenetic analysis and chromosomal mapping of the vertebrate Adh gene cluster suggest that family expansion took place by tandem duplications, probably concurrent with the extensive isoform burst observed before the fish/tetrapod split, rather than through the large-scale genome duplications also postulated in early vertebrate evolution. The absence of multifunctionality in lower chordate ADHs and the structures compared argue in favor of the acquisition of new functions in vertebrate ADH classes. Finally, comparison between B. floridae and B. lanceolatum Adhs provides the first estimate for a cephalochordate speciation, 190 million years ago, probably concomitant with the beginning of the drifting of major land masses from Pangea.

  1. Exploring X-Ray Binary Populations in Compact Group Galaxies With Chandra

    NASA Technical Reports Server (NTRS)

    Tzanavaris, P.; Hornschemeier, A. E.; Gallagher, S. C.; Lenkic, L.; Desjardins, T. D.; Walker, L. M.; Johnson, K. E.; Mulchaey, J. S.

    2016-01-01

    We obtain total galaxy X-ray luminosities, LX, originating from individually detected point sources in a sample of 47 galaxies in 15 compact groups of galaxies (CGs). For the great majority of our galaxies, we find that the detected point sources most likely are local to their associated galaxy, and are thus extragalactic X-ray binaries (XRBs) or nuclear active galactic nuclei (AGNs). For spiral and irregular galaxies, we find that, after accounting for AGNs and nuclear sources, most CG galaxies are either within the ±1σ scatter of the Mineo et al. LX-star formation rate (SFR) correlation or have higher LX than predicted by this correlation for their SFR. We discuss how these "excesses" may be due to low metallicities and high interaction levels. For elliptical and S0 galaxies, after accounting for AGNs and nuclear sources, most CG galaxies are consistent with the Boroson et al. LX-stellar mass correlation for low-mass XRBs, with larger scatter, likely due to residual effects such as AGN activity or hot gas. Assuming non-nuclear sources are low- or high-mass XRBs, we use appropriate XRB luminosity functions to estimate the probability that stochastic effects can lead to such extreme LX values. We find that, although stochastic effects do not in general appear to be important, for some galaxies there is a significant probability that high LX values can be observed due to strong XRB variability.
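
    A hedged sketch of the stochastic-sampling test described above: draw XRB luminosities from a truncated power-law luminosity function whose expected source count scales with SFR, sum them, and estimate the probability of exceeding an observed total LX. The LF slope, luminosity limits, and the sources-per-SFR normalization are illustrative stand-ins, not the Mineo et al. or Boroson et al. calibrations.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def sample_lf(n, slope=1.6, L_lo=1e36, L_hi=1e40):
        """Draw luminosities from a truncated power law dN/dL ∝ L^-slope (illustrative)."""
        g = 1.0 - slope
        u = rng.uniform(size=n)
        return (L_lo**g + u * (L_hi**g - L_lo**g)) ** (1.0 / g)

    def total_lx(sfr, n_per_sfr=10):
        """Total LX from a Poisson-sampled XRB population (n_per_sfr is a made-up scaling)."""
        n_src = rng.poisson(n_per_sfr * sfr)
        return sample_lf(n_src).sum() if n_src > 0 else 0.0

    sfr, lx_observed = 1.0, 3e39          # illustrative galaxy and observed total luminosity
    trials = np.array([total_lx(sfr) for _ in range(20000)])
    p_exceed = np.mean(trials >= lx_observed)
    print(f"P(LX >= {lx_observed:.1e} erg/s | SFR = {sfr}) = {p_exceed:.3f}")
    ```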

  2. A Probabilistic Model for Predicting Attenuation of Viruses During Percolation in Unsaturated Natural Barriers

    NASA Astrophysics Data System (ADS)

    Faulkner, B. R.; Lyon, W. G.

    2001-12-01

    We present a probabilistic model for predicting virus attenuation. The solution employs the assumption of complete mixing. Monte Carlo methods are used to generate ensemble simulations of virus attenuation due to physical, biological, and chemical factors. The model generates a probability of failure to achieve 4-log attenuation. We tabulated data from related studies to develop probability density functions for input parameters, and utilized a database of soil hydraulic parameters based on the 12 USDA soil categories. Regulators can use the model based on limited information such as boring logs, climate data, and soil survey reports for a particular site of interest. Plackett-Burman sensitivity analysis indicated the most important main effects on probability of failure to achieve 4-log attenuation in our model were mean logarithm of saturated hydraulic conductivity (+0.396), mean water content (+0.203), mean solid-water mass transfer coefficient (-0.147), and the mean solid-water equilibrium partitioning coefficient (-0.144). Using the model, we predicted the probability of failure of a one-meter thick proposed hydrogeologic barrier and a water content of 0.3. With the currently available data and the associated uncertainty, we predicted soils classified as sand would fail (p=0.999), silt loams would also fail (p=0.292), but soils classified as clays would provide the required 4-log attenuation (p=0.001). The model is extendible in the sense that probability density functions of parameters can be modified as future studies refine the uncertainty, and the lightweight object-oriented design of the computer model (implemented in Java) will facilitate reuse with modified classes. This is an abstract of a proposed presentation and does not necessarily reflect EPA policy.
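
    A hedged sketch of the Monte Carlo idea in this abstract: sample uncertain soil and transport parameters from assumed distributions, compute the log10 attenuation across a 1 m barrier with a simple first-order removal model, and report the probability of failing to reach 4-log attenuation. The removal model and the parameter distributions below are invented placeholders, not the tabulated PDFs or soil database used in the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n = 100_000
    L = 1.0          # barrier thickness [m]
    theta = 0.3      # water content [-]

    # Invented parameter distributions (stand-ins for the study's tabulated PDFs)
    log10_Ksat = rng.normal(loc=-6.0, scale=0.5, size=n)            # conductivity [log10 m/s]
    removal = rng.lognormal(mean=np.log(0.8), sigma=0.6, size=n)    # inactivation + attachment [1/day]

    velocity = 10.0**log10_Ksat / theta * 86400.0                   # pore-water velocity [m/day]
    travel_time = L / velocity                                      # residence time in barrier [day]
    log_removal = removal * travel_time / np.log(10.0)              # attenuation in log10 units

    p_fail = np.mean(log_removal < 4.0)
    print(f"P(failure to achieve 4-log attenuation) = {p_fail:.3f}")
    ```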

  3. A prototype method for diagnosing high ice water content probability using satellite imager data

    NASA Astrophysics Data System (ADS)

    Yost, Christopher R.; Bedka, Kristopher M.; Minnis, Patrick; Nguyen, Louis; Strapp, J. Walter; Palikonda, Rabindra; Khlopenkov, Konstantin; Spangenberg, Douglas; Smith, William L., Jr.; Protat, Alain; Delanoe, Julien

    2018-03-01

    Recent studies have found that ingestion of high mass concentrations of ice particles in regions of deep convective storms, with radar reflectivity considered safe for aircraft penetration, can adversely impact aircraft engine performance. Previous aviation industry studies have used the term high ice water content (HIWC) to define such conditions. Three airborne field campaigns were conducted in 2014 and 2015 to better understand how HIWC is distributed in deep convection, both as a function of altitude and proximity to convective updraft regions, and to facilitate development of new methods for detecting HIWC conditions, in addition to many other research and regulatory goals. This paper describes a prototype method for detecting HIWC conditions using geostationary (GEO) satellite imager data coupled with in situ total water content (TWC) observations collected during the flight campaigns. Three satellite-derived parameters were determined to be most useful for determining HIWC probability: (1) the horizontal proximity of the aircraft to the nearest overshooting convective updraft or textured anvil cloud, (2) tropopause-relative infrared brightness temperature, and (3) daytime-only cloud optical depth. Statistical fits between collocated TWC and GEO satellite parameters were used to determine the membership functions for the fuzzy logic derivation of HIWC probability. The products were demonstrated using data from several campaign flights and validated using a subset of the satellite-aircraft collocation database. The daytime HIWC probability was found to agree quite well with TWC time trends and identified extreme TWC events with high probability. Discrimination of HIWC was more challenging at night with IR-only information. The products show the greatest capability for discriminating TWC ≥ 0.5 g m^-3. Product validation remains challenging due to vertical TWC uncertainties and the typically coarse spatio-temporal resolution of the GEO data.
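
    A hedged sketch of how fuzzy membership functions for the three satellite-derived parameters could be combined into a single HIWC probability. The ramp-shaped memberships, thresholds, and equal weighting below are invented for illustration; they are not the paper's fitted membership functions.

    ```python
    import numpy as np

    def ramp_down(x, full, zero):
        """Membership 1 for x <= full, falling linearly to 0 at x >= zero."""
        return float(np.clip((zero - x) / (zero - full), 0.0, 1.0))

    def ramp_up(x, zero, full):
        """Membership 0 for x <= zero, rising linearly to 1 at x >= full."""
        return float(np.clip((x - zero) / (full - zero), 0.0, 1.0))

    def hiwc_probability(dist_to_updraft_km, trop_rel_bt_K, optical_depth):
        """Equal-weight average of three memberships (thresholds are invented)."""
        m_dist = ramp_down(dist_to_updraft_km, 10.0, 100.0)  # near an overshooting top -> high
        m_bt = ramp_down(trop_rel_bt_K, -5.0, 15.0)          # colder than the tropopause -> high
        m_cod = ramp_up(optical_depth, 20.0, 100.0)          # optically thick anvil -> high
        return (m_dist + m_bt + m_cod) / 3.0

    print(f"HIWC probability: {hiwc_probability(15.0, -8.0, 80.0):.2f}")
    ```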

  4. Excitation energy dependence of fragment-mass distributions from fission of 180,190Hg formed in fusion reactions of 36Ar + 144,154Sm

    DOE PAGES

    Nishio, K.; Andreyev, A. N.; Chapman, R.; ...

    2015-06-30

    Mass distributions of fission fragments from the compound nuclei 180Hg and 190Hg formed in the fusion reactions 36Ar + 144Sm and 36Ar + 154Sm, respectively, were measured at initial excitation energies of E*(180Hg) = 33-66 MeV and E*(190Hg) = 48-71 MeV. In the fission of 180Hg, the mass spectra were well reproduced by assuming only an asymmetric-mass division, with most probable light and heavy fragment masses Ā_L/Ā_H = 79/101. The mass asymmetry for 180Hg agrees well with that obtained in the low-energy β+/EC-delayed fission of 180Tl, from our earlier ISOLDE (CERN) experiment. Fission of 190Hg is found to proceed in a similar way, delivering a mass asymmetry of Ā_L/Ā_H = 83/107 throughout the measured excitation energy range. The persistence as a function of excitation energy of the mass-asymmetric fission for both proton-rich Hg isotopes gives strong evidence for the survival of microscopic effects up to effective excitation energies of compound nuclei as high as 40 MeV. In conclusion, this behavior is different from fission of actinide nuclei and the heavier mercury isotope 198Hg.

  5. Multidimensional fractional Schrödinger equation

    NASA Astrophysics Data System (ADS)

    Rodrigues, M. M.; Vieira, N.

    2012-11-01

    This work investigates the multi-dimensional space-time fractional Schrödinger equation of the form (^C D_{t_0+}^α u)(t,x) = (iħ/2m)(^C ∇^β u)(t,x), with ħ Planck's constant divided by 2π, m the mass, and u(t,x) the wave function of the particle. Here ^C D_{t_0+}^α and ^C ∇^β are the Caputo fractional derivative operators, where α ∈ ]0,1] and β ∈ ]1,2]. The wave function is obtained using Laplace and Fourier transform methods, and a symbolic operational form of the solutions in terms of the Mittag-Leffler functions is exhibited. An expression is presented for the wave function and for the quantum mechanical probability density. Using the Banach fixed point theorem, the existence and uniqueness of solutions is studied for this kind of fractional differential equation.

  6. DEM L241, A SUPERNOVA REMNANT CONTAINING A HIGH-MASS X-RAY BINARY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seward, F. D.; Charles, P. A.; Foster, D. L.

    2012-11-10

    A Chandra observation of the Large Magellanic Cloud supernova remnant DEM L241 reveals an interior unresolved source which is probably an accretion-powered binary. The optical counterpart is an O5III(f) star, making this a high-mass X-ray binary with an orbital period likely to be of the order of tens of days. Emission from the remnant interior is thermal, and spectral information is used to derive the density and mass of the hot material. Elongation of the remnant is unusual and possible causes of this are discussed. The precursor star probably had a mass >25 M_⊙.

  7. Retired A Stars Revisited: An Updated Giant Planet Occurrence Rate as a Function of Stellar Metallicity and Mass

    NASA Astrophysics Data System (ADS)

    Ghezzi, Luan; Montet, Benjamin T.; Johnson, John Asher

    2018-06-01

    Exoplanet surveys of evolved stars have provided increasing evidence that the formation of giant planets depends not only on stellar metallicity ([Fe/H]) but also on the mass (M_*). However, measuring accurate masses for subgiants and giants is far more challenging than it is for their main-sequence counterparts, which has led to recent concerns regarding the veracity of the correlation between stellar mass and planet occurrence. In order to address these concerns, we use HIRES spectra to perform a spectroscopic analysis on a sample of 245 subgiants and derive new atmospheric and physical parameters. We also calculate the space velocities of this sample in a homogeneous manner for the first time. When reddening corrections are considered in the calculations of stellar masses and a −0.12 M_⊙ offset is applied to the results, the masses of the subgiants are consistent with their space velocity distributions, contrary to claims in the literature. Similarly, our measurements of their rotational velocities provide additional confirmation that the masses of subgiants with M_* ≥ 1.6 M_⊙ (the “retired A stars”) have not been overestimated in previous analyses. Using these new results for our sample of evolved stars, together with an updated sample of FGKM dwarfs, we confirm that giant planet occurrence increases with both stellar mass and metallicity up to 2.0 M_⊙. We show that the probability of formation of a giant planet is approximately a one-to-one function of the total amount of metals in the protoplanetary disk, M_* × 10^[Fe/H]. This correlation provides additional support for the core accretion mechanism of planet formation.

  8. SN 1987A - The evolution from red to blue

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tuchman, Y.; Wheeler, J.C.

    1989-11-01

    Envelope models in thermal and dynamic equilibrium are used to explore the nature of the transition of SK -69 deg 202, the progenitor of SN 1987A, from the Hayashi track to its final blue position in the H-R diagram. Loci of possible thermal equilibrium solutions are presented as a function of Teff and M(C/O), the mass of the carbon/oxygen core interior to the helium burning shell. It is found that uniform helium enrichment of the envelope results in red-blue evolution but that the resulting blue solution is much hotter than SK -69 deg 202. Solutions in which the only change is to redistribute the portion of the envelope enriched in helium during main-sequence convective core contraction into a step function with Y of about 0.5 at a mass cut of about 10 solar masses give a natural transition from red to blue and a final value of Teff in agreement with observations. It is argued that SK -69 deg 202 probably fell on a post-Hayashi track sequence at moderate Teff. The possible connection of this sequence to the step distribution in the H-R diagram of the LMC is discussed. 19 refs.

  9. Non-Maxwellian electron energy probability functions in the plume of a SPT-100 Hall thruster

    NASA Astrophysics Data System (ADS)

    Giono, G.; Gudmundsson, J. T.; Ivchenko, N.; Mazouffre, S.; Dannenmayer, K.; Loubère, D.; Popelier, L.; Merino, M.; Olentšenko, G.

    2018-01-01

    We present measurements of the electron density, the effective electron temperature, the plasma potential, and the electron energy probability function (EEPF) in the plume of a 1.5 kW-class SPT-100 Hall thruster, derived from cylindrical Langmuir probe measurements. The measurements were taken on the plume axis at distances between 550 and 1550 mm from the thruster exit plane, and at different angles from the plume axis at 550 mm for three operating points of the thruster, characterized by different discharge voltages and mass flow rates. The bulk of the electron population can be approximated as a Maxwellian distribution, but the measured distributions were seen to decline faster at higher energy. The measured EEPFs were best modelled with a general EEPF with an exponent α between 1.2 and 1.5, and their axial and angular characteristics were studied for the different operating points of the thruster. As a result, the exponent α from the fitted distribution was seen to be almost constant as a function of the axial distance along the plume, as well as across the angles. However, the exponent α was seen to be affected by the mass flow rate, suggesting a possible relationship with the collision rate, especially close to the thruster exit. The ratio of the specific heats, the γ factor, between the measured plasma parameters was found to be lower than the adiabatic value of 5/3 for each of the thruster settings, indicating the existence of non-trivial kinetic heat fluxes in the near collisionless plume. These results are intended to be used as input and/or testing properties for plume expansion models in further work.
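
    A common parameterization of such a general EEPF is g_p(E) ∝ exp[-(E/E0)^α], with α = 1 recovering a Maxwellian and α = 2 a Druyvesteyn distribution; the sketch below uses this form with illustrative values of E0 and α, not the fitted parameters from the measurements.

```python
import numpy as np

def general_eepf(energy_ev, e0_ev, alpha):
    """Un-normalized electron energy probability function g_p(E) = exp[-(E/E0)^alpha]."""
    return np.exp(-(energy_ev / e0_ev) ** alpha)

energies = np.linspace(0.1, 20.0, 200)                 # eV
maxwellian_like = general_eepf(energies, 2.0, 1.0)      # alpha = 1: Maxwellian
measured_like = general_eepf(energies, 2.0, 1.35)       # alpha in the reported 1.2-1.5 range
# The alpha > 1 curve falls off faster at high energy, consistent with the observation
# that the measured distributions decline faster than a Maxwellian.
```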

  10. Core Emergence in a Massive Infrared Dark Cloud: A Comparison between Mid-IR Extinction and 1.3 mm Emission

    NASA Astrophysics Data System (ADS)

    Kong, Shuo; Tan, Jonathan C.; Arce, Héctor G.; Caselli, Paola; Fontani, Francesco; Butler, Michael J.

    2018-03-01

    Stars are born from dense cores in molecular clouds. Observationally, it is crucial to capture the formation of cores in order to understand the necessary conditions and rate of the star formation process. The Atacama Large Millimeter/submillimeter Array (ALMA) is extremely powerful for identifying dense gas structures, including cores, at millimeter wavelengths via their dust continuum emission. Here, we use ALMA to carry out a survey of dense gas and cores in the central region of the massive (∼10^5 M_⊙) infrared dark cloud (IRDC) G28.37+0.07. The observation consists of a mosaic of 86 pointings of the 12 m array and produces an unprecedented view of the densest structures of this IRDC. In this first Letter about this data set, we focus on a comparison between the 1.3 mm continuum emission and a mid-infrared (MIR) extinction map of the IRDC. This allows estimation of the “dense gas” detection probability function (DPF), i.e., as a function of the local mass surface density, Σ, for various choices of thresholds of millimeter continuum emission to define “dense gas.” We then estimate the dense gas mass fraction, f_dg, in the central region of the IRDC and, via extrapolation with the DPF and the known Σ probability distribution function, to the larger-scale surrounding regions, finding values of about 5% to 15% for the fiducial choice of threshold. We argue that this observed dense gas is a good tracer of the protostellar core population and, in this context, estimate a star formation efficiency per free-fall time in the central IRDC region of ε_ff ∼ 10%, with approximately a factor of two systematic uncertainties.
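
    The extrapolation step can be sketched as follows: weight the area probability distribution of Σ by Σ to obtain a mass weighting, apply the detection probability function, and take the ratio. Both functional forms below (a log-normal-like Σ PDF and a logistic DPF) are placeholders for illustration, not the measured ones.

```python
import numpy as np

sigma = np.linspace(0.05, 2.0, 400)                                  # mass surface density grid (arbitrary units)
pdf_area = np.exp(-0.5 * (np.log(sigma / 0.1) / 0.5) ** 2) / sigma   # placeholder area-weighted Sigma PDF
dpf = 1.0 / (1.0 + np.exp(-(sigma - 0.4) / 0.1))                     # placeholder detection probability function

mass_weight = pdf_area * sigma                                       # convert area weighting to mass weighting
f_dg = np.sum(dpf * mass_weight) / np.sum(mass_weight)               # dense gas mass fraction estimate
print(f"dense gas mass fraction ~ {f_dg:.2f}")
```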

  11. 47 CFR 1.1623 - Probability calculation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    47 CFR 1.1623 (2010): Telecommunication — FEDERAL COMMUNICATIONS COMMISSION, GENERAL PRACTICE AND PROCEDURE, Random Selection Procedures for Mass Media Services, General Procedures. § 1.1623 Probability calculation. (a) All calculations shall be...

  12. On the Stellar Population and Star-Forming History of the Orion Nebula Cluster

    NASA Astrophysics Data System (ADS)

    Hillenbrand, Lynne A.

    1997-05-01

    We report on the first phase of a study of the stellar population comprising the Orion Nebula Cluster (ONC). Approximately 50% of the ~ 3500 stars identified to date within ~ 2.5 pc of the namesake Trapezium stars are optically visible, and in this paper we focus on that sample with I < 17.5 mag. The large number and number density (n_peak > 10^4 pc^-3) of stars, the wide range in stellar mass ( ~ 0.1-50 M_⊙), and the extreme youth (< 1-2 Myr) of the stellar population make the ONC the best site for investigating: 1) the detailed shape of a truly ``initial'' mass spectrum; 2) the apparent age spread in a region thought to have undergone triggered star formation; 3) the time sequence of star formation as a function of stellar mass; and 4) trends of all of the above with cluster radius. Nearly 60% of the ~ 1600 optical stars have sufficient data (spectroscopy and photometry) for placement on a theoretical HR diagram; this subsample is unbiased with respect to apparent brightness or cluster radius, complete down to ~ 1 M_⊙, and representative of the total optical sample below ~ 1 M_⊙ for the age and extinction ranges characteristic of the cluster. Comparison of the derived HR diagram with traditional pre-main sequence evolutionary calculations shows a trend of increasing stellar age with increasing stellar mass. To avoid the implication of earlier characteristic formation times for higher-mass stars than for lower-mass stars, refinement of early evolutionary theory in a manner similar to the birthline hypothesis of Palla & Stahler (1993) is required. Subject to uncertainties in the tracks and isochrones, we can still investigate stellar mass and age distributions in the ONC. We find the ONC as a whole to be characterized by a mass spectrum which is not grossly inconsistent with ``standard'' stellar mass spectra. In particular, although there are structural differences between the detailed ONC mass spectrum and various models constructed from solar neighborhood data, the observed mass spectrum appears to peak at ~ 0.2 M_⊙ and to fall off rapidly towards lower masses; several substellar objects are present. The abundance of low-mass stars relative to high-mass stars suggests that there is no bi-modal star formation mode; somewhat ironically, the ONC probably contains fractionally more low-mass stars than the solar neighborhood, since the population not yet located on the HR diagram is dominated by sub-solar-mass stars. Nonetheless, the ONC mass spectrum is biased towards higher-mass stars within the innermost cluster radii (r_projected < 0.3 pc). We find the ONC as a whole to be characterized by a mean age of < 1 Myr and an age spread which is probably less than 2 Myr, but also by a bias towards younger stars at smaller projected cluster radii. Although the most massive stars and the youngest stars are found preferentially towards the center of the ONC, it does not follow that the most massive stars are the youngest stars. A lower limit to the total cluster mass in stars is M_stars ~ 900 M_⊙ (probably a factor of < 2 underestimate). A lower limit to the recent star formation rate is ~ 10^-4 M_⊙ yr^-1. All observational data in this study, as well as stellar parameters derived from them, are available in electronic format.

  13. The photodetachment cross-section and threshold energy of negative ions in carbon dioxide

    NASA Technical Reports Server (NTRS)

    Helmy, E. M.; Woo, S. B.

    1974-01-01

    Threshold energy and sunlight photodetachment measurements on negative carbon dioxide ions, using a 2.5 kW light pressure xenon lamp, show that: (1) Electron affinity of CO3(+) is larger than 2.7 eV and that an isomeric form of CO3(+) is likely an error; (2) The photodetachment cross section of CO3(-) will roughly be like a step function across the range of 4250 to 2500 Å, having its threshold energy at 4250 Å; (3) Sunlight photodetachment rate for CO3(-) is probably much smaller than elsewhere reported; and (4) The probability of having photodetached electrons re-attach to form negative ions is less than 1%. Mass identifying drift tube tests confirm that the slower ion is CO3(-), formed through the O(-) + 2CO2 yields CO3(-) + CO2 reaction.

  14. The Nature and Evolutionary History of GRO J1744-28

    NASA Technical Reports Server (NTRS)

    Rappaport, S.

    1997-01-01

    GRO J1744-28 is the first known X-ray source to display bursts, periodic pulsations, and quasi-periodic oscillations. This source may thus provide crucial clues that will lead to an understanding of the differences in the nature of the X-ray variability from various accreting neutron stars. The orbital period is 11.8 days, and the measured mass function of 1.31 × 10^-4 solar mass is one of the smallest among all known binaries. If we assume that the donor star is a low-mass giant transferring matter through the inner Lagrange point, then we can show that its mass is lower than approximately 0.7 solar mass and probably closer to 0.25 solar mass. Higher mass, but unevolved, donor stars are shown to be implausible. We also demonstrate that the current He core mass of the donor star lies in the range of 0.20-0.25 solar mass. Thus, this system is most likely in the final stages of losing its hydrogen-rich envelope, with only a small amount of mass remaining in the envelope. If this picture is correct, then GRO J1744-28 may well represent the closest observational link that we have between the low-mass X-ray binaries and recycled binary pulsars in wide orbits. We have carried out a series of binary evolution calculations and explored, both systematically and via a novel Monte Carlo approach, the range of initial system parameters and input physics that can lead to the binary parameters of the present-day GRO J1744-28 system. The input parameters include both the initial total mass and the core mass of the donor star, the neutron-star mass, the strength of the magnetic braking, the mass-capture fraction, and the specifics of the core mass/radius relation for giants. Through these evolution calculations, we compute probability distributions for the current binary system parameters (i.e., the total mass, core mass, radius, luminosity, and K-band magnitude of the donor star, the neutron star mass, the orbital inclination angle, and the semimajor axis of the binary). Our calculations yield the following values for the GRO J1744-28 system parameters (with 95% confidence limits in parentheses): donor star mass: 0.24 solar mass (0.2-0.7 solar mass); He core mass of the donor star: 0.22 solar mass (0.20-0.25 solar mass); neutron-star mass: 1.7 solar mass (1.39-1.96 solar mass); orbital inclination angle: 18° (7°-22°); semimajor axis: 64 lt-s (60-67 lt-s); radius of the donor star: 6.2 solar radii (6-9 solar radii); luminosity of donor star: 23 solar luminosities (15-49 solar luminosities); and long-term mass transfer rate at the current epoch: 5 × 10^-10 solar mass/yr (2 × 10^-10 to 5 × 10^-9 solar mass/yr). We deduce that the magnetic field of the underlying neutron star lies in the range of approximately 1.8 × 10^11 G to approximately 7 × 10^11 G, with a most probable value of 2.7 × 10^11 G. This is evidently sufficiently strong to funnel the accretion flow onto the magnetic polar caps and suppress the thermonuclear flashes that would otherwise give rise to the type 1 X-ray bursts observed in most X-ray bursters. We present a simple paradigm for magnetic accreting neutron stars where X-ray pulsars, GRO J1744-28, the Rapid Burster, and the type 1 X-ray bursters may form a continuum of possible behaviors among accreting neutron stars, with the strength of the neutron-star magnetic field serving as a crucial parameter that determines the mode of X-ray variability from a given object.

  15. Uncertainty quantification for nuclear density functional theory and information content of new measurements.

    PubMed

    McDonnell, J D; Schunck, N; Higdon, D; Sarich, J; Wild, S M; Nazarewicz, W

    2015-03-27

    Statistical tools of uncertainty quantification can be used to assess the information content of measured observables with respect to present-day theoretical models, to estimate model errors and thereby improve predictive capability, to extrapolate beyond the regions reached by experiment, and to provide meaningful input to applications and planned measurements. To showcase new opportunities offered by such tools, we make a rigorous analysis of theoretical statistical uncertainties in nuclear density functional theory using Bayesian inference methods. By considering the recent mass measurements from the Canadian Penning Trap at Argonne National Laboratory, we demonstrate how the Bayesian analysis and a direct least-squares optimization, combined with high-performance computing, can be used to assess the information content of the new data with respect to a model based on the Skyrme energy density functional approach. Employing the posterior probability distribution computed with a Gaussian process emulator, we apply the Bayesian framework to propagate theoretical statistical uncertainties in predictions of nuclear masses, two-neutron dripline, and fission barriers. Overall, we find that the new mass measurements do not impose a constraint that is strong enough to lead to significant changes in the model parameters. The example discussed in this study sets the stage for quantifying and maximizing the impact of new measurements with respect to current modeling and guiding future experimental efforts, thus enhancing the experiment-theory cycle in the scientific method.
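
    The emulator-based propagation can be sketched as follows: fit a Gaussian process to a modest number of expensive model evaluations over the parameter space, then push posterior parameter samples through the emulator to obtain a predictive distribution for an observable. The toy model, design points, and Gaussian 'posterior' below are placeholders, not the Skyrme functional or the actual posterior from the analysis.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def expensive_model(theta):
    # Placeholder standing in for a costly DFT calculation of an observable.
    return np.sin(theta[:, 0]) + 0.5 * theta[:, 1] ** 2

theta_train = rng.uniform(-2.0, 2.0, size=(40, 2))        # design points in a 2-D parameter space
y_train = expensive_model(theta_train)

emulator = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
emulator.fit(theta_train, y_train)

theta_post = rng.normal([0.3, -0.1], [0.2, 0.15], size=(5000, 2))  # toy posterior samples
y_pred = emulator.predict(theta_post)                               # propagate through the emulator
print(f"propagated prediction: {y_pred.mean():.3f} +/- {y_pred.std():.3f}")
```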

  16. VERY LOW MASS STELLAR AND SUBSTELLAR COMPANIONS TO SOLAR-LIKE STARS FROM MARVELS. V. A LOW ECCENTRICITY BROWN DWARF FROM THE DRIEST PART OF THE DESERT, MARVELS-6b

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Lee, Nathan; Stassun, Keivan G.; Cargile, Phillip

    2013-06-15

    We describe the discovery of a likely brown dwarf (BD) companion with a minimum mass of 31.7 ± 2.0 M_Jup to GSC 03546-01452 from the MARVELS radial velocity survey, which we designate as MARVELS-6b. For reasonable priors, our analysis gives a probability of 72% that MARVELS-6b has a mass below the hydrogen-burning limit of 0.072 M_⊙, and thus it is a high-confidence BD companion. It has a moderately long orbital period of 47.8929^{+0.0063}_{-0.0062} days with a low eccentricity of 0.1442^{+0.0078}_{-0.0073}, and a semi-amplitude of 1644^{+12}_{-13} m s^-1. Moderate resolution spectroscopy of the host star has determined the following parameters: T_eff = 5598 ± 63 K, log g = 4.44 ± 0.17, and [Fe/H] = +0.40 ± 0.09. Based upon these measurements, GSC 03546-01452 has a probable mass and radius of M_* = 1.11 ± 0.11 M_⊙ and R_* = 1.06 ± 0.23 R_⊙ with an age consistent with less than ~6 Gyr at a distance of 219 ± 21 pc from the Sun. Although MARVELS-6b is not observed to transit, we cannot definitively rule out a transiting configuration based on our observations. There is a visual companion detected with Lucky Imaging at 7.″7 from the host star, but our analysis shows that it is not bound to this system. The minimum mass of MARVELS-6b exists at the minimum of the mass functions for both stars and planets, making this a rare object even compared to other BDs. It also exists in an underdense region in both period/eccentricity and metallicity/eccentricity space.

  17. Exploring the IMF of star clusters: a joint SLUG and LEGUS effort

    NASA Astrophysics Data System (ADS)

    Ashworth, G.; Fumagalli, M.; Krumholz, M. R.; Adamo, A.; Calzetti, D.; Chandar, R.; Cignoni, M.; Dale, D.; Elmegreen, B. G.; Gallagher, J. S., III; Gouliermis, D. A.; Grasha, K.; Grebel, E. K.; Johnson, K. E.; Lee, J.; Tosi, M.; Wofford, A.

    2017-08-01

    We present the implementation of a Bayesian formalism within the Stochastically Lighting Up Galaxies (slug) stellar population synthesis code, which is designed to investigate variations in the initial mass function (IMF) of star clusters. By comparing observed cluster photometry to large libraries of clusters simulated with a continuously varying IMF, our formalism yields the posterior probability distribution function (PDF) of the cluster mass, age and extinction, jointly with the parameters describing the IMF. We apply this formalism to a sample of star clusters from the nearby galaxy NGC 628, for which broad-band photometry in five filters is available as part of the Legacy ExtraGalactic UV Survey (LEGUS). After allowing the upper-end slope of the IMF (α3) to vary, we recover PDFs for the mass, age and extinction that are broadly consistent with what is found when assuming an invariant Kroupa IMF. However, the posterior PDF for α3 is very broad due to a strong degeneracy with the cluster mass, and it is found to be sensitive to the choice of priors, particularly on the cluster mass. We find only a modest improvement in the constraining power of α3 when adding Hα photometry from the companion Hα-LEGUS survey. Conversely, Hα photometry significantly improves the age determination, reducing the frequency of multi-modal PDFs. With the aid of mock clusters, we quantify the degeneracy between physical parameters, showing how constraints on the cluster mass that are independent of photometry can be used to pin down the IMF properties of star clusters.
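
    The library-comparison step can be sketched in a few lines: each simulated cluster carries physical parameters and predicted photometry, the observed photometry weights the library points through a Gaussian likelihood, and the weighted distribution of any parameter is its marginal posterior PDF. The library contents, photometric errors, and observed magnitudes below are synthetic placeholders, not slug or LEGUS data.

```python
import numpy as np

rng = np.random.default_rng(1)
n_lib = 100_000
lib_alpha3 = rng.uniform(-3.5, -1.5, n_lib)       # IMF upper-end slope of each library cluster
lib_logmass = rng.uniform(2.0, 6.0, n_lib)        # log10 cluster mass of each library cluster
lib_phot = rng.normal(size=(n_lib, 5))            # placeholder magnitudes in five bands

obs_phot = np.array([0.1, -0.2, 0.0, 0.3, -0.1])  # placeholder observed magnitudes
sigma_phot = 0.1

chi2 = np.sum(((lib_phot - obs_phot) / sigma_phot) ** 2, axis=1)
weights = np.exp(-0.5 * (chi2 - chi2.min()))      # Gaussian likelihood weights (library density acts as the prior)

# Marginal posterior PDFs of alpha3 and of the mass, as weighted histograms.
post_alpha3, _ = np.histogram(lib_alpha3, bins=40, weights=weights, density=True)
post_logmass, _ = np.histogram(lib_logmass, bins=40, weights=weights, density=True)
```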

  18. The orbit and companion of the Cepheid S Sge - A probable triple system

    NASA Technical Reports Server (NTRS)

    Evans, Nancy R.; Welch, Douglas L.; Slovak, Mark H.; Barnes, Thomas G., III; Moffett, Thomas J.

    1993-01-01

    New radial velocities for the classical Cepheid S Sge have been obtained and combined with previous observations to derive a new orbit. The revised orbital elements are: gamma, -10.3 +/- 0.4 km/s; K, 15.5 +/- 0.2 km/s; e, 0.23 +/- 0.02; omega, 203.1 +/- 4.2 deg; T0, 39902.3 +/- 6.6 JD; P, 675.79 +/- 0.18 days; f(m), 0.239 +/- 0.010 solar masses; a sin i, 0.935 AU = 139.9 +/- 2.0 × 10^6 km; s.e., 1.2 km/s. The revised elements differ very little from the orbit determined by Herbig and Moore (1952). We have also obtained low resolution IUE spectra to search for the companion. The IUE spectra show excess flux at 1800 Å when compared with spectra of the single Cepheid Delta Cep at the same (B-V)0. The spectral type of the companion determined from this flux excess is A7 V to F0 V. However, the mass of such a companion (1.7 to 1.5 solar masses) is smaller than the minimum mass (2.8 solar masses) required by the mass function and an evolutionary mass of the Cepheid. We infer that the companion is itself a short period binary.

  19. [Significance of insulin resistance in the pathogenesis of sarcopenia and chronic heart failure in elderly hypertensive patients].

    PubMed

    Gorshunova, N K; Medvedev, N V

    2016-01-01

    To determine the pathogenic role of insulin resistance in the development of involutive sarcopenia and chronic heart failure (CHF), 88 elderly patients with arterial hypertension (AH) and 32 elderly patients without cardiovascular disease were examined using assessment of carbohydrate metabolism and the level of the brain natriuretic peptide precursor, measurement of muscle mass and strength, echocardiography, and the 6-minute walking test. It was found that in the group of hypertensive patients with low muscle mass and strength, indices of insulin resistance were significantly increased and signs of left ventricular myocardial dysfunction and the functional class of heart failure were more pronounced, probably as a result of disorders of energy homeostasis arising from impaired glucose delivery into the muscle cells of the heart and skeletal muscles.

  20. Joint genome-wide prediction in several populations accounting for randomness of genotypes: A hierarchical Bayes approach. I: Multivariate Gaussian priors for marker effects and derivation of the joint probability mass function of genotypes.

    PubMed

    Martínez, Carlos Alberto; Khare, Kshitij; Banerjee, Arunava; Elzo, Mauricio A

    2017-03-21

    It is important to consider heterogeneity of marker effects and allelic frequencies in across-population genome-wide prediction studies. Moreover, all regression models used in genome-wide prediction overlook randomness of genotypes. In this study, a family of hierarchical Bayesian models for across-population genome-wide prediction was developed that models genotypes as random variables and allows population-specific effects for each marker. Models shared a common structure and differed in the priors used and the assumption about residual variances (homogeneous or heterogeneous). Randomness of genotypes was accounted for by deriving the joint probability mass function of marker genotypes conditional on allelic frequencies and pedigree information. As a consequence, these models incorporated kinship and genotypic information that not only permitted accounting for heterogeneity of allelic frequencies, but also allowed the inclusion of individuals with missing genotypes at some or all loci without the need for previous imputation. This was possible because the non-observed fraction of the design matrix was treated as an unknown model parameter. For each model, a simpler version ignoring population structure, but still accounting for randomness of genotypes, was proposed. Implementation of these models and computation of some criteria for model comparison were illustrated using two simulated datasets. Theoretical and computational issues along with possible applications, extensions and refinements were discussed. Some features of the models developed in this study make them promising for genome-wide prediction; the use of information contained in the probability distribution of genotypes is perhaps the most appealing. Further studies are needed to assess the performance of the models proposed here and to compare them with conventional models used in genome-wide prediction. Copyright © 2017 Elsevier Ltd. All rights reserved.
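
    The simplest building block of such a genotype probability mass function can be written down directly: for an individual unrelated to the rest of the pedigree and a biallelic marker in Hardy-Weinberg proportions, the number of copies of the reference allele follows a Binomial(2, p) distribution. The full joint PMF in the study further conditions on pedigree relationships, which this sketch does not attempt to reproduce.

```python
from math import comb

def genotype_pmf(p):
    """P(genotype = g) for g in {0, 1, 2} copies of the reference allele, under HWE with allele frequency p."""
    return [comb(2, g) * p**g * (1 - p) ** (2 - g) for g in range(3)]

print(genotype_pmf(0.3))   # [0.49, 0.42, 0.09]
```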

  1. Determination of the mass of globular cluster X-ray sources

    NASA Technical Reports Server (NTRS)

    Grindlay, J. E.; Hertz, P.; Steiner, J. E.; Murray, S. S.; Lightman, A. P.

    1984-01-01

    The precise positions of the luminous X-ray sources in eight globular clusters have been measured with the Einstein X-Ray Observatory. When combined with similarly precise measurements of the dynamical centers and core radii of the globular clusters, the distribution of the X-ray source mass is determined to be in the range 0.9-1.9 solar mass. The X-ray source positions and the detailed optical studies indicate that (1) the sources are probably all of similar mass, (2) the gravitational potentials in these high-central density clusters are relatively smooth and isothermal, and (3) the X-ray sources are compact binaries and are probably formed by tidal capture.

  2. GOSSIP: SED fitting code

    NASA Astrophysics Data System (ADS)

    Franzetti, Paolo; Scodeggio, Marco

    2012-10-01

    GOSSIP fits the electro-magnetic emission of an object (the SED, Spectral Energy Distribution) against synthetic models to find the simulated one that best reproduces the observed data. It builds up the observed SED of an object (or a large sample of objects) by combining magnitudes in different bands and, if available, a spectrum; then it performs a chi-square minimization fitting procedure against a set of synthetic models. The fitting results are used to estimate a number of physical parameters such as the Star Formation History, absolute magnitudes, stellar mass and their Probability Distribution Functions.
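
    The chi-square step can be sketched as follows: compare observed fluxes and errors against a grid of synthetic model SEDs, solving analytically for the best normalization of each model. This is an illustrative reimplementation of the idea, not GOSSIP's actual code; the observed fluxes and the two-model grid are placeholders.

```python
import numpy as np

def fit_sed(obs_flux, obs_err, model_grid):
    """Return the index of the best-fitting model, per-model scalings, and chi-square values."""
    w = 1.0 / obs_err**2
    # Analytic normalization a_j minimizing sum_i w_i (obs_i - a_j * model_ji)^2 for each model j.
    scale = (model_grid * obs_flux * w).sum(axis=1) / (model_grid**2 * w).sum(axis=1)
    chi2 = (w * (obs_flux - scale[:, None] * model_grid) ** 2).sum(axis=1)
    return int(np.argmin(chi2)), scale, chi2

obs_flux = np.array([1.0, 1.4, 1.9, 2.1, 1.7])                 # placeholder fluxes in five bands
obs_err = np.full(5, 0.1)
model_grid = np.vstack([np.linspace(0.5, 2.5, 5),              # two placeholder synthetic SEDs
                        np.linspace(2.5, 0.5, 5)])
best, scale, chi2 = fit_sed(obs_flux, obs_err, model_grid)
relative_prob = np.exp(-0.5 * (chi2 - chi2.min()))             # relative probability of each model
```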

  3. THE LANDSCAPE OF THE NEUTRINO MECHANISM OF CORE-COLLAPSE SUPERNOVAE: NEUTRON STAR AND BLACK HOLE MASS FUNCTIONS, EXPLOSION ENERGIES, AND NICKEL YIELDS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pejcha, Ondřej; Thompson, Todd A., E-mail: pejcha@astro.princeton.edu, E-mail: thompson@astronomy.ohio-state.edu

    2015-03-10

    If the neutrino luminosity from the proto-neutron star formed during a massive star core collapse exceeds a critical threshold, a supernova (SN) results. Using spherical quasi-static evolutionary sequences for hundreds of progenitors over a range of metallicities, we study how the explosion threshold maps onto observables, including the fraction of successful explosions, the neutron star (NS) and black hole (BH) mass functions, the explosion energies (E_SN) and nickel yields (M_Ni), and their mutual correlations. Successful explosions are intertwined with failures in a complex pattern that is not simply related to initial progenitor mass or compactness. We predict that progenitors with initial masses of 15 ± 1, 19 ± 1, and ∼21-26 M_⊙ are most likely to form BHs, that the BH formation probability is non-zero at solar metallicity and increases significantly at low metallicity, and that low luminosity, low Ni-yield SNe come from progenitors close to success/failure interfaces. We qualitatively reproduce the observed E_SN-M_Ni correlation, we predict a correlation between the mean and width of the NS mass and E_SN distributions, and that the means of the NS and BH mass distributions are correlated. We show that the observed mean NS mass of ≅ 1.33 M_⊙ implies that the successful explosion fraction is higher than 0.35. Overall, we show that the neutrino mechanism can in principle explain the observed properties of SNe and their compact objects. We argue that the rugged landscape of progenitors and outcomes mandates that SN theory should focus on reproducing the wide ranging distributions of observed SN properties.

  4. Measurement of the fusion probability P_CN for the reaction of ^50Ti with ^208Pb

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naik, R. S.; Loveland, W.; Sprunger, P. H.

    2007-11-15

    The capture cross sections and fission fragment angular distributions were measured for the reaction of ^50Ti with ^208Pb at center-of-mass projectile energies (E_c.m.) of 183.7, 186.2, 190.2, 194.2, and 202.3 MeV (E* = 14.2, 16.6, 20.6, 24.7, and 32.7 MeV). From fitting the backward angle fragment angular distributions, the cross sections for quasifission and fusion-fission and P_CN, the probability that the colliding nuclei go from the contact configuration to inside the fission saddle point, were deduced. These quantities, along with the known values of the evaporation residue production cross sections for this reaction, were used to deduce values of the survival probabilities, W_sur, for this reaction as a function of excitation energy. The deduced values of P_CN and W_sur and their dependence on excitation energy differ from some current theoretical predictions of these quantities.

  5. M-dwarf exoplanet surface density distribution. A log-normal fit from 0.07 to 400 AU

    NASA Astrophysics Data System (ADS)

    Meyer, Michael R.; Amara, Adam; Reggiani, Maddalena; Quanz, Sascha P.

    2018-04-01

    Aims: We fit a log-normal function to the M-dwarf orbital surface density distribution of gas giant planets, over the mass range 1-10 times that of Jupiter, from 0.07 to 400 AU. Methods: We used a Markov chain Monte Carlo approach to explore the likelihoods of various parameter values consistent with point estimates of the data given our assumed functional form. Results: This fit is consistent with radial velocity, microlensing, and direct-imaging observations, is well-motivated from theoretical and phenomenological points of view, and predicts results of future surveys. We present probability distributions for each parameter and a maximum likelihood estimate solution. Conclusions: We suggest that this function makes more physical sense than other widely used functions, and we explore the implications of our results on the design of future exoplanet surveys.
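
    The fitting approach can be sketched with a simple random-walk Metropolis sampler exploring the parameters of a log-normal df/d ln a against Gaussian point estimates of the planet frequency in a few separation bins. All numbers below, including the bins and the 'observed' frequencies, are placeholders for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
ln_a = np.log(np.array([0.1, 1.0, 10.0, 100.0]))       # bin centres in AU
obs_f = np.array([0.02, 0.05, 0.04, 0.01])              # placeholder frequencies per bin
obs_err = np.array([0.01, 0.02, 0.02, 0.01])

def log_like(theta):
    amp, mu, s = theta                                   # amplitude, peak location (ln a), width
    if amp <= 0 or s <= 0:
        return -np.inf
    model = amp * np.exp(-0.5 * ((ln_a - mu) / s) ** 2)
    return -0.5 * np.sum(((obs_f - model) / obs_err) ** 2)

theta = np.array([0.05, np.log(3.0), 1.0])
ll = log_like(theta)
chain = []
for _ in range(20_000):
    proposal = theta + rng.normal(scale=[0.005, 0.1, 0.1])
    ll_prop = log_like(proposal)
    if np.log(rng.uniform()) < ll_prop - ll:             # Metropolis acceptance rule
        theta, ll = proposal, ll_prop
    chain.append(theta.copy())

print("posterior medians (amp, mu, sigma):", np.median(np.array(chain), axis=0))
```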

  6. Efficient fractal-based mutation in evolutionary algorithms from iterated function systems

    NASA Astrophysics Data System (ADS)

    Salcedo-Sanz, S.; Aybar-Ruíz, A.; Camacho-Gómez, C.; Pereira, E.

    2018-03-01

    In this paper we present a new mutation procedure for Evolutionary Programming (EP) approaches, based on Iterated Function Systems (IFSs). The new mutation procedure proposed consists of considering a set of IFS which are able to generate fractal structures in a two-dimensional phase space, and use them to modify a current individual of the EP algorithm, instead of using random numbers from different probability density functions. We test this new proposal in a set of benchmark functions for continuous optimization problems. In this case, we compare the proposed mutation against classical Evolutionary Programming approaches, with mutations based on Gaussian, Cauchy and chaotic maps. We also include a discussion on the IFS-based mutation in a real application of Tuned Mass Damper (TMD) location and optimization for vibration cancellation in buildings. In both practical cases, the proposed EP with the IFS-based mutation obtained extremely competitive results compared to alternative classical mutation operators.
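
    The idea can be sketched as follows: instead of drawing perturbations from a Gaussian or Cauchy density, offsets are sampled from the attractor of an iterated function system in a 2-D phase space via the chaos game. The three affine maps below form a simple Sierpinski-type IFS and are illustrative only, not the maps used in the paper.

```python
import random

IFS_MAPS = [
    lambda x, y: (0.5 * x,        0.5 * y),
    lambda x, y: (0.5 * x + 0.5,  0.5 * y),
    lambda x, y: (0.5 * x + 0.25, 0.5 * y + 0.5),
]

def ifs_offset(n_iter=30):
    """Generate one point on the IFS attractor (in [0,1]^2), re-centred to [-0.5, 0.5]^2."""
    x, y = random.random(), random.random()
    for _ in range(n_iter):
        x, y = random.choice(IFS_MAPS)(x, y)
    return x - 0.5, y - 0.5

def ifs_mutate(individual, step=0.1):
    """Mutate consecutive pairs of genes using IFS-generated 2-D offsets."""
    child = list(individual)
    for i in range(0, len(child) - 1, 2):
        dx, dy = ifs_offset()
        child[i] += step * dx
        child[i + 1] += step * dy
    return child

parent = [0.0, 1.0, -2.0, 3.0]
print(ifs_mutate(parent))
```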

  7. Addendum to "Compact Perturbative Expressions for Neutrino Oscillations in Matter"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Denton, Peter B.; Minakata, Hisakazu; Parke, Stephen J.

    2018-01-19

    In this paper we rewrite the neutrino mixing angles and mass squared differences in matter, given in our original paper, in a notation that is more conventional for the reader. Replacing the usual neutrino mixing angles and mass squared differences in the expressions for the vacuum oscillation probabilities with these matter mixing angles and mass squared differences gives an excellent approximation to the oscillation probabilities in matter. Comparisons for T2K, NOvA, T2HKK and DUNE are also given for neutrinos and anti-neutrinos, disappearance and appearance channels, normal ordering and inverted ordering.

  8. Transit timing variations for planets co-orbiting in the horseshoe regime

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vokrouhlický, David; Nesvorný, David, E-mail: vokrouhl@cesnet.cz, E-mail: davidn@boulder.swri.edu

    2014-08-10

    Although not yet detected, pairs of exoplanets in 1:1 mean motion resonance probably exist. Low eccentricity, near-planar orbits, which in the comoving frame follow horseshoe trajectories, are one of the possible stable configurations. Here we study transit timing variations (TTVs) produced by mutual gravitational interaction of planets in this orbital architecture, with the goal to develop methods that can be used to recognize this case in observational data. In particular, we use a semi-analytic model to derive parametric constraints that should facilitate data analysis. We show that characteristic traits of the TTVs can directly constrain the (1) ratio of planetary masses and (2) their total mass (divided by that of the central star) as a function of the minimum angular separation as seen from the star. In an ideal case, when transits of both planets are observed and well characterized, the minimum angular separation can also be inferred from the data. As a result, parameters derived from the observed transit timing series alone can directly provide both planetary masses scaled to the central star mass.

  9. Method for predicting peptide detection in mass spectrometry

    DOEpatents

    Kangas, Lars [West Richland, WA; Smith, Richard D [Richland, WA; Petritis, Konstantinos [Richland, WA

    2010-07-13

    A method of predicting whether a peptide present in a biological sample will be detected by analysis with a mass spectrometer. The method uses at least one mass spectrometer to perform repeated analysis of a sample containing peptides from proteins with known amino acids. The method then generates a data set of peptides identified as contained within the sample by the repeated analysis. The method then calculates the probability that a specific peptide in the data set was detected in the repeated analysis. The method then creates a plurality of vectors, where each vector has a plurality of dimensions, and each dimension represents a property of one or more of the amino acids present in each peptide and adjacent peptides in the data set. Using these vectors, the method then generates an algorithm from the plurality of vectors and the calculated probabilities that specific peptides in the data set were detected in the repeated analysis. The algorithm is thus capable of calculating the probability that a hypothetical peptide represented as a vector will be detected by a mass spectrometry based proteomic platform, given that the peptide is present in a sample introduced into a mass spectrometer.
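
    The vector-based prediction idea can be sketched as follows: encode each peptide as a fixed-length feature vector built from amino-acid properties, then fit a model that outputs a detection probability. The two property tables, the toy training labels, and the choice of logistic regression are placeholders for illustration; the patent's actual algorithm is not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

HYDROPHOBICITY = {"A": 1.8, "L": 3.8, "K": -3.9, "D": -3.5, "G": -0.4, "F": 2.8}
RESIDUE_MASS = {"A": 71.0, "L": 113.1, "K": 128.2, "D": 115.1, "G": 57.0, "F": 147.2}

def peptide_features(seq):
    """Map a peptide sequence to a fixed-length vector of amino-acid-property summaries."""
    h = [HYDROPHOBICITY[a] for a in seq]
    m = [RESIDUE_MASS[a] for a in seq]
    return [len(seq), np.mean(h), np.min(h), np.max(h), np.sum(m)]

peptides = ["ALKG", "DDGG", "FLLK", "GGGG", "KKDD", "FALL"]
detected = [1, 0, 1, 0, 0, 1]                        # placeholder labels from repeated LC-MS runs

X = np.array([peptide_features(p) for p in peptides])
model = LogisticRegression(max_iter=1000).fit(X, detected)
print(model.predict_proba([peptide_features("ALKF")])[0, 1])   # detection probability of a new peptide
```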

  10. Programmed disorders of beta-cell development and function as one cause for type 2 diabetes? The GK rat paradigm.

    PubMed

    Portha, Bernard

    2005-01-01

    Now that the reduction in beta-cell mass has been clearly established in humans with type 2 diabetes mellitus (T2DM) 1-4, the debate focuses on the possible mechanisms responsible for decreased beta-cell number and impaired beta-cell function and their multifactorial etiology. Appropriate inbred rodent models are essential tools for identification of genes and environmental factors that increase the risk of abnormal beta-cell function and of T2DM. The information available in the Goto-Kakizaki (GK) rat, one of the best characterized animal models of spontaneous T2DM, is reviewed in such a perspective. We propose that the defective beta-cell mass and function in the GK model reflect the complex interactions of three pathogenic players: (1) several independent loci containing genes causing impaired insulin secretion; (2) gestational metabolic impairment inducing a programming of the endocrine pancreas (decreased beta-cell neogenesis) which is transmitted to the next generation; and (3) secondary (acquired) loss of beta-cell differentiation due to chronic exposure to hyperglycemia (glucotoxicity). An important message is that the 'heritable' determinants of T2DM are not simply dependent on genetic factors, but probably involve transgenerational epigenetic responses. Copyright (c) 2005 John Wiley & Sons, Ltd.

  11. The structure and statistics of interstellar turbulence

    NASA Astrophysics Data System (ADS)

    Kritsuk, A. G.; Ustyugov, S. D.; Norman, M. L.

    2017-06-01

    We explore the structure and statistics of multiphase, magnetized ISM turbulence in the local Milky Way by means of driven periodic box numerical MHD simulations. Using the higher-order-accurate piecewise-parabolic method on a local stencil (PPML), we carry out a small parameter survey varying the mean magnetic field strength and density while fixing the rms velocity to observed values. We quantify numerous characteristics of the transient and steady-state turbulence, including its thermodynamics and phase structure, kinetic and magnetic energy power spectra, structure functions, and distribution functions of density, column density, pressure, and magnetic field strength. The simulations reproduce many observables of the local ISM, including molecular clouds, such as the ratio of turbulent to mean magnetic field at 100 pc scale, the mass and volume fractions of thermally stable H I, the lognormal distribution of column densities, the mass-weighted distribution of thermal pressure, and the linewidth-size relationship for molecular clouds. Our models predict the shape of magnetic field probability density functions (PDFs), which are strongly non-Gaussian, and the relative alignment of magnetic field and density structures. Finally, our models show how the observed low rates of star formation per free-fall time are controlled by the multiphase thermodynamics and large-scale turbulence.

  12. Quantum diffusion during inflation and primordial black holes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pattison, Chris; Assadullahi, Hooshyar; Wands, David

    We calculate the full probability density function (PDF) of inflationary curvature perturbations, even in the presence of large quantum backreaction. Making use of the stochastic-δ N formalism, two complementary methods are developed, one based on solving an ordinary differential equation for the characteristic function of the PDF, and the other based on solving a heat equation for the PDF directly. In the classical limit where quantum diffusion is small, we develop an expansion scheme that not only recovers the standard Gaussian PDF at leading order, but also allows us to calculate the first non-Gaussian corrections to the usual result. In the opposite limit where quantum diffusion is large, we find that the PDF is given by an elliptic theta function, which is fully characterised by the ratio between the squared width and height (in Planck mass units) of the region where stochastic effects dominate. We then apply these results to the calculation of the mass fraction of primordial black holes from inflation, and show that no more than ∼1 e-fold can be spent in regions of the potential dominated by quantum diffusion. We explain how this requirement constrains inflationary potentials with two examples.

  13. Quantum diffusion during inflation and primordial black holes

    NASA Astrophysics Data System (ADS)

    Pattison, Chris; Vennin, Vincent; Assadullahi, Hooshyar; Wands, David

    2017-10-01

    We calculate the full probability density function (PDF) of inflationary curvature perturbations, even in the presence of large quantum backreaction. Making use of the stochastic-δ N formalism, two complementary methods are developed, one based on solving an ordinary differential equation for the characteristic function of the PDF, and the other based on solving a heat equation for the PDF directly. In the classical limit where quantum diffusion is small, we develop an expansion scheme that not only recovers the standard Gaussian PDF at leading order, but also allows us to calculate the first non-Gaussian corrections to the usual result. In the opposite limit where quantum diffusion is large, we find that the PDF is given by an elliptic theta function, which is fully characterised by the ratio between the squared width and height (in Planck mass units) of the region where stochastic effects dominate. We then apply these results to the calculation of the mass fraction of primordial black holes from inflation, and show that no more than ~ 1 e-fold can be spent in regions of the potential dominated by quantum diffusion. We explain how this requirement constrains inflationary potentials with two examples.

  14. The effect of wind and eruption source parameter variations on tephra fallout hazard assessment: an example from Vesuvio (Italy)

    NASA Astrophysics Data System (ADS)

    Macedonio, Giovanni; Costa, Antonio; Scollo, Simona; Neri, Augusto

    2015-04-01

    Uncertainty in the tephra fallout hazard assessment may depend on the different meteorological datasets and eruptive source parameters used in the modelling. We present a statistical study to analyze this uncertainty in the case of a sub-Plinian eruption of Vesuvius of VEI = 4, with a column height of 18 km and a total erupted mass of 5 × 10^11 kg. The hazard assessment for tephra fallout is performed using the advection-diffusion model Hazmap. First, we statistically analyze different meteorological datasets: i) from the daily atmospheric soundings of the stations located in Brindisi (Italy) between 1962 and 1976 and between 1996 and 2012, and in Pratica di Mare (Rome, Italy) between 1996 and 2012; ii) from numerical weather prediction models of the National Oceanic and Atmospheric Administration and of the European Centre for Medium-Range Weather Forecasts. Furthermore, we vary the total mass, the total grain-size distribution, the eruption column height, and the diffusion coefficient. Then, we quantify the impact that the different datasets and model input parameters have on the probability maps. Results show that the parameter that most affects the tephra fallout probability maps, with the total mass held constant, is the particle terminal settling velocity, which is a function of the total grain-size distribution, particle density and shape. In contrast, the evaluation of the hazard assessment depends only weakly on the use of different meteorological datasets, column height and diffusion coefficient.

  15. Equivalence principle for quantum systems: dephasing and phase shift of free-falling particles

    NASA Astrophysics Data System (ADS)

    Anastopoulos, C.; Hu, B. L.

    2018-02-01

    We ask the question of how the (weak) equivalence principle established in classical gravitational physics should be reformulated and interpreted for massive quantum objects that may also have internal degrees of freedom (dof). This inquiry is necessary because even elementary concepts like a classical trajectory are not well defined in quantum physics—trajectories originating from quantum histories become viable entities only under stringent decoherence conditions. From this investigation we posit two logically and operationally distinct statements of the equivalence principle for quantum systems. Version A: the probability distribution of position for a free-falling particle is the same as the probability distribution of a free particle, modulo a mass-independent shift of its mean. Version B: any two particles with the same velocity wave-function behave identically in free fall, irrespective of their masses. Both statements apply to all quantum states, including those without a classical correspondence, and also for composite particles with quantum internal dof. We also investigate the consequences of the interaction between internal and external dof induced by free fall. For a class of initial states, we find dephasing occurs for the translational dof, namely, the suppression of the off-diagonal terms of the density matrix, in the position basis. We also find a gravitational phase shift in the reduced density matrix of the internal dof that does not depend on the particle’s mass. For classical states, the phase shift has a natural classical interpretation in terms of gravitational red-shift and special relativistic time-dilation.

  16. Population trends, survival, and sampling methodologies for a population of Rana draytonii

    USGS Publications Warehouse

    Fellers, Gary M.; Kleeman, Patrick M.; Miller, David A.W.; Halstead, Brian J.

    2017-01-01

    Estimating population trends provides valuable information for resource managers, but monitoring programs face trade-offs between the quality and quantity of information gained and the number of sites surveyed. We compared the effectiveness of monitoring techniques for estimating population trends of Rana draytonii (California Red-legged Frog) at Point Reyes National Seashore, California, USA, over a 13-yr period. Our primary goals were to: 1) estimate trends for a focal pond at Point Reyes National Seashore, and 2) evaluate whether egg mass counts could reliably estimate an index of abundance relative to more-intensive capture–mark–recapture methods. Capture–mark–recapture (CMR) surveys of males indicated a stable population from 2005 to 2009, despite low annual apparent survival (26.3%). Egg mass counts from 2000 to 2012 indicated that despite some large fluctuations, the breeding female population was generally stable or increasing, with annual abundance varying between 26 and 130 individuals. Minor modifications to egg mass counts, such as marking egg masses, can allow estimation of egg mass detection probabilities necessary to convert counts to abundance estimates, even when closure of egg mass abundance cannot be assumed within a breeding season. High egg mass detection probabilities (mean per-survey detection probability = 0.98 [0.89–0.99]) indicate that egg mass surveys can be an efficient and reliable method for monitoring population trends of federally threatened R. draytonii. Combining egg mass surveys to estimate trends at many sites with CMR methods to evaluate factors affecting adult survival at focal populations is likely a profitable path forward to enhance understanding and conservation of R. draytonii.
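
    The count-to-abundance conversion mentioned above can be sketched in a few lines: once a per-survey detection probability p has been estimated (for example, from marked egg masses), a count C converts to an abundance estimate N̂ = C / p, with a simple binomial-based standard error. The count and detection probability below are illustrative values, not the study's data.

```python
from math import sqrt

def abundance_from_count(count, detection_prob):
    """Convert a raw count to an abundance estimate given a detection probability."""
    n_hat = count / detection_prob
    # Treating the count as Binomial(N, p) with N ~ n_hat gives Var(N_hat) ~ N (1 - p) / p.
    se = sqrt(n_hat * (1 - detection_prob) / detection_prob)
    return n_hat, se

n_hat, se = abundance_from_count(110, 0.98)
print(f"estimated egg masses: {n_hat:.1f} +/- {se:.1f}")
```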

  17. The First measurement of the top quark mass at CDF II in the lepton+jets and dilepton channels simultaneously

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aaltonen, T.; /Helsinki Inst. of Phys.; Adelman, J.

    2008-09-01

    The authors present a measurement of the mass of the top quark using data corresponding to an integrated luminosity of 1.9 fb^-1 of p p̄ collisions collected at √s = 1.96 TeV with the CDF II detector at Fermilab's Tevatron. This is the first measurement of the top quark mass using top-antitop pair candidate events in the lepton + jets and dilepton decay channels simultaneously. They reconstruct two observables in each channel and use a non-parametric kernel density estimation technique to derive two-dimensional probability density functions from simulated signal and background samples. The observables are the top quark mass and the invariant mass of two jets from the W decay in the lepton + jets channel, and the top quark mass and the scalar sum of transverse energy of the event in the dilepton channel. They perform a simultaneous fit for the top quark mass and the jet energy scale, which is constrained in situ by the hadronic W boson mass. Using 332 lepton + jets candidate events and 144 dilepton candidate events, they measure the top quark mass to be m_top = 171.9 ± 1.7 (stat. + JES) ± 1.1 (other sys.) GeV/c^2 = 171.9 ± 2.0 GeV/c^2.

  18. A systematic review of probable posttraumatic stress disorder in first responders following man-made mass violence.

    PubMed

    Wilson, Laura C

    2015-09-30

    The current study was a systematic review examining probable posttraumatic stress disorder (PTSD) in first responders following man-made mass violence. A systematic literature search yielded 20 studies that fit the inclusion criteria. The prevalence rates of probable PTSD across all 20 studies ranged from 1.3% to 22.0%. Fifteen of the 20 articles focused on first responders following the September 11th terrorist attacks and many of the studies used the same participant recruitment pools. Overall, the results of the systematic review described here suggest that our understanding of PTSD in first responders following man-made mass violence is based on a very small set of articles that have focused on a few particular events. This paper is meant to serve as a call for additional research and to encourage more breadth in the specific incidents that are examined. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  19. Mass and angular distributions of the reaction products in heavy ion collisions

    NASA Astrophysics Data System (ADS)

    Nasirov, A. K.; Giardina, G.; Mandaglio, G.; Kayumov, B. M.; Tashkhodjaev, R. B.

    2018-05-01

    The optimal reactions and beam energies for synthesizing superheavy elements are sought by studying the mass and angular distributions of fission-like products in heavy-ion collisions, since the evaporation residue cross section constitutes a negligibly small part of the fusion cross section. The intensity of the yield of fission-like products allows us to estimate the probability of complete fusion of the interacting nuclei. The overlap of the mass and angular distributions of the fusion-fission and quasifission products makes it difficult to estimate the correct value of the probability of compound nucleus formation. A study of the mass and angular distributions of the reaction products is therefore a suitable key to understanding the interaction mechanism in heavy-ion collisions.

  20. Probability Weighting Functions Derived from Hyperbolic Time Discounting: Psychophysical Models and Their Individual Level Testing.

    PubMed

    Takemura, Kazuhisa; Murakami, Hajime

    2016-01-01

    A probability weighting function (w(p)) is considered to be a nonlinear function of probability (p) in behavioral decision theory. This study proposes a psychophysical model of probability weighting functions derived from a hyperbolic time discounting model and a geometric distribution. The aim of the study is to show probability weighting functions from the point of view of waiting time for a decision maker. Since the expected value of a geometrically distributed random variable X is 1/p, we formulated the probability weighting function of the expected value model for hyperbolic time discounting as w(p) = (1 - k log p)^(-1). Moreover, the probability weighting function is derived from Loewenstein and Prelec's (1992) generalized hyperbolic time discounting model. The latter model is proved to be equivalent to the hyperbolic-logarithmic weighting function considered by Prelec (1998) and Luce (2001). In this study, we derive a model from the generalized hyperbolic time discounting model assuming Fechner's (1860) psychophysical law of time and a geometric distribution of trials. In addition, we develop median models of hyperbolic time discounting and generalized hyperbolic time discounting. To illustrate the fit of each model, a psychological experiment was conducted to assess the probability weighting and value functions at the level of the individual participant. The participants were 50 university students. The results of individual analysis indicated that the expected value model of generalized hyperbolic discounting fitted better than previous probability weighting decision-making models. The theoretical implications of this finding are discussed.
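
    The weighting function quoted above can be evaluated directly; the sketch below assumes the natural logarithm and an illustrative discounting parameter k = 1.

```python
# w(p) = (1 - k * ln p)^(-1), the probability weighting function derived from
# hyperbolic time discounting with a geometric distribution of trials.
from math import log

def weight(p, k=1.0):
    return 1.0 / (1.0 - k * log(p))

for p in (0.01, 0.1, 0.5, 0.9, 0.99):
    print(f"p = {p:>4}:  w(p) = {weight(p):.3f}")
# Small probabilities are strongly overweighted relative to p (w(0.01) ~ 0.18 >> 0.01),
# while probabilities near 1 stay close to their nominal value.
```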

  1. The high-energy X-ray spectrum of black hole candidate GX 339-4 during a transition

    NASA Technical Reports Server (NTRS)

    Dolan, J. F.; Crannell, C. J.; Dennis, B. R.; Orwig, L. E.

    1987-01-01

    The X-ray emitting system GX 339-4 contains one of the prime candidates for a stellar-mass black hole. Determining the observational similarities and differences between the members of this group is of value in specifying which characteristics can be used to identify systems containing a black hole, especially those for which no mass determination can be made. The first observations of the E greater than 20 keV spectrum of GX 339-4 during a transition between luminosity states are reported here. The hard spectral state is the lower luminosity state of the system. GX 339-4 has a power-law spectrum above 20 keV which pivots during transitions between distinct luminosity states. The only other X-ray sources known to exhibit this behavior, Cyg XR-1 and (probably) A0620-00, are leading candidates for systems containing a black hole component based on their measured spectroscopic mass functions.

  2. Mussel-inspired functionalization of electrochemically exfoliated graphene: Based on self-polymerization of dopamine and its suppression effect on the fire hazards and smoke toxicity of thermoplastic polyurethane.

    PubMed

    Cai, Wei; Wang, Junling; Pan, Ying; Guo, Wenwen; Mu, Xiaowei; Feng, Xiaming; Yuan, Bihe; Wang, Xin; Hu, Yuan

    2018-06-15

    The suppression effect of graphene on the fire hazards and smoke toxicity of polymer composites has been seriously limited by both mass production and weak interfacial interaction. Though electrochemical preparation provides a viable approach for mass production, exfoliated graphene cannot strongly bond with polar polymer chains. Herein, mussel-inspired functionalization of electrochemically exfoliated graphene was successfully carried out and the product was added into a polar thermoplastic polyurethane (TPU) matrix. As confirmed by SEM images of the fracture surface, the functionalized graphene, possessing abundant hydroxyl groups, could form strong chain interactions with TPU. With the incorporation of 2.0 wt % f-GNS, the peak heat release rate (pHRR), total heat release (THR), specific extinction area (SEA), and smoke production rate (SPR) of the TPU composites were decreased by approximately 59.4%, 27.1%, 31.9%, and 26.7%, respectively. A probable flame-retardant mechanism was hypothesized: well-dispersed f-GNS forms a tortuous path and hinders the exchange of degradation products through its barrier function. Large quantities of degradation products gather around the f-GNS and react with the flame retardant to produce a cross-linked and highly graphitized residual char. The simple functionalization of electrochemically exfoliated graphene promotes the application of graphene in the field of flame retardant composites. Copyright © 2018 Elsevier B.V. All rights reserved.

  3. Mass, Radius, and Composition of the Transiting Planet 55 Cnc e: Using Interferometry and Correlations

    NASA Astrophysics Data System (ADS)

    Crida, Aurélien; Ligi, Roxanne; Dorn, Caroline; Lebreton, Yveline

    2018-06-01

    The characterization of exoplanets relies on that of their host star. However, stellar evolution models cannot always be used to derive the mass and radius of individual stars, because many stellar internal parameters are poorly constrained. Here, we use the probability density functions (PDFs) of directly measured parameters to derive the joint PDF of the stellar and planetary mass and radius. Because combining the density and radius of the star is our most reliable way of determining its mass, we find that the stellar (respectively planetary) mass and radius are strongly (respectively moderately) correlated. We then use a generalized Bayesian inference analysis to characterize the possible interiors of 55 Cnc e. We quantify how our ability to constrain the interior improves by accounting for correlation. The information content of the mass–radius correlation is also compared with refractory element abundance constraints. We provide posterior distributions for all interior parameters of interest. Given all available data, we find that the radius of the gaseous envelope is 0.08 ± 0.05 R_p. A stronger correlation between the planetary mass and radius (potentially provided by a better estimate of the transit depth) would significantly improve interior characterization and reduce drastically the uncertainty on the gas envelope properties.
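
    A rough Monte Carlo sketch of the propagation idea described above (stellar mass from the directly measured radius and mean density); the Gaussian measurement values below are hypothetical placeholders, not the published 55 Cnc parameters.

      import numpy as np

      rng = np.random.default_rng(42)
      n = 100_000
      # Hypothetical direct measurements, each summarized as an independent Gaussian PDF:
      R_star = rng.normal(0.95, 0.02, n)      # stellar radius in solar radii (assumed)
      rho_star = rng.normal(1.08, 0.06, n)    # mean stellar density in solar units (assumed)

      # Combining density and radius gives the mass: M/Msun = (rho/rho_sun) * (R/Rsun)**3.
      M_star = rho_star * R_star**3

      # The derived mass inherits a common dependence on the radius draw, which is why
      # the stellar (and hence planetary) mass and radius come out correlated.
      corr = np.corrcoef(M_star, R_star)[0, 1]
      print(f"M* = {M_star.mean():.3f} +/- {M_star.std():.3f} Msun, corr(M*, R*) = {corr:.2f}")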

  4. The evolution of trade-offs: geographic variation in call duration and flight ability in the sand cricket, Gryllus firmus.

    PubMed

    Roff, D A; Crnokrak, P; Fairbairn, D J

    2003-07-01

    Quantitative genetic theory assumes that trade-offs are best represented by bivariate normal distributions. This theory predicts that selection will shift the trade-off function itself and not just move the mean trait values along a fixed trade-off line, as is generally assumed in optimality models. As a consequence, quantitative genetic theory predicts that the trade-off function will vary among populations in which at least one of the component traits itself varies. This prediction is tested using the trade-off between call duration and flight capability, as indexed by the mass of the dorsolateral flight muscles, in the macropterous morph of the sand cricket. We use four different populations of crickets that vary in the proportion of macropterous males (Lab = 33%, Florida = 29%, Bermuda = 72%, South Carolina = 80%). We find, as predicted, that there is significant variation in the intercept of the trade-off function but not the slope, supporting the hypothesis that trade-off functions are better represented as bivariate normal distributions rather than single lines. We also test the prediction from a quantitative genetic model of the evolution of wing dimorphism that the mean call duration of macropterous males will increase with the percentage of macropterous males in the population. This prediction is also supported. Finally, we estimate the probability of a macropterous male attracting a female, P, as a function of the relative time spent calling (P = time spent calling by the macropterous male/(total time spent calling by both micropterous and macropterous males)). We find that in the Lab and Florida populations the probability of a female selecting the macropterous male is equal to P, indicating that preference is due simply to relative call duration. But in the Bermuda and South Carolina populations the probability of a female selecting a macropterous male is less than P, indicating a preference for the micropterous male even after differences in call duration are accounted for.

  5. Bioelectrical impedance analysis (BIA) for sarcopenic obesity (SO) diagnosis in young female subjects

    NASA Astrophysics Data System (ADS)

    González-Correa, C. H.; Caicedo-Eraso, J. C.; S, Villada-Gomez J.

    2013-04-01

    Sarcopenia is defined as a loss of muscle mass related to ageing that affects physical function (definition A). A newer definition excludes the muscle-mass-reduction criterion (definition B). Obesity is pandemic and occurs at all ages. Sarcopenic obesity (SO) implies both processes. The purpose of this study was to compare the results obtained after applying these two definitions in 66 overweight or obese young college women aged 22 ± 2.8 years. Percentage body fat (%BF) and skeletal mass index (SMI) were estimated by BIA, muscle function by the handgrip strength test (HGS), and physical performance by the Harvard step test (HST). Of the subjects, 9.1% were overweight and 90.9% were obese. Twenty-nine subjects (43.9%) had decreased HGS and 22 (33.3%) had impaired physical performance. One obese subject (1.5%) met the criteria for sarcopenic obesity by definition A and 9 (13.6%) by definition B. Although linear regression (α < 0.05) showed only a very weak association between these variables (r2 = 0.094, 0.037, and 0.275, respectively), a tendency for HGS, HST and SMI to deteriorate as %BF increases was observed. However, other confounding factors must be investigated. Probably as the population becomes more obese, the problem of SO will appear earlier in life.

  6. Efficient marginalization to compute protein posterior probabilities from shotgun mass spectrometry data

    PubMed Central

    Serang, Oliver; MacCoss, Michael J.; Noble, William Stafford

    2010-01-01

    The problem of identifying proteins from a shotgun proteomics experiment has not been definitively solved. Identifying the proteins in a sample requires ranking them, ideally with interpretable scores. In particular, “degenerate” peptides, which map to multiple proteins, have made such a ranking difficult to compute. The problem of computing posterior probabilities for the proteins, which can be interpreted as confidence in a protein’s presence, has been especially daunting. Previous approaches have either ignored the peptide degeneracy problem completely, addressed it by computing a heuristic set of proteins or heuristic posterior probabilities, or estimated the posterior probabilities with sampling methods. We present a probabilistic model for protein identification in tandem mass spectrometry that recognizes peptide degeneracy. We then introduce graph-transforming algorithms that facilitate efficient computation of protein probabilities, even for large data sets. We evaluate our identification procedure on five different well-characterized data sets and demonstrate our ability to efficiently compute high-quality protein posteriors. PMID:20712337
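
    A toy brute-force marginalization over protein presence states for a tiny protein-peptide graph, to make the degeneracy problem concrete; the simple emission/noise/prior model and all numbers here are illustrative assumptions, not the paper's actual model or its graph-transforming algorithms.

      from itertools import product
      import numpy as np

      # Toy bipartite map: peptide -> proteins that could generate it
      # (degenerate peptides map to several proteins).
      peptides = {"pepA": ["P1"], "pepB": ["P1", "P2"], "pepC": ["P2", "P3"]}
      proteins = ["P1", "P2", "P3"]
      alpha, beta, gamma = 0.9, 0.05, 0.3   # emission, noise, prior probabilities (assumed)

      def likelihood(state):
          """P(all peptides observed | which proteins are present)."""
          present = {p for p, s in zip(proteins, state) if s}
          like = 1.0
          for pep, parents in peptides.items():
              k = len(present.intersection(parents))
              like *= 1.0 - (1.0 - beta) * (1.0 - alpha) ** k   # detection probability
          return like

      # Brute-force marginalization over all 2^N presence states (feasible only for toys;
      # efficient computation on real data is exactly what the paper addresses).
      posterior = np.zeros(len(proteins))
      evidence = 0.0
      for state in product([0, 1], repeat=len(proteins)):
          prior = np.prod([gamma if s else 1.0 - gamma for s in state])
          w = prior * likelihood(state)
          evidence += w
          posterior += w * np.array(state)
      posterior /= evidence
      print(dict(zip(proteins, np.round(posterior, 3))))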

  7. Deep and wide photometry of two open clusters NGC 1245 and NGC 2506: dynamical evolution and halo

    NASA Astrophysics Data System (ADS)

    Lee, S. H.; Kang, Y.-W.; Ann, H. B.

    2013-06-01

    We studied the structure of two old open clusters, NGC 1245 and NGC 2506, from wide and deep VI photometric data acquired using the CFH12K CCD camera at the Canada-France-Hawaii Telescope. We devised a new method for assigning cluster membership probability to individual stars using both spatial positions and positions in the colour-magnitude diagram. From analyses of the luminosity functions at several cluster-centric radii and the radial surface density profiles derived from stars with different luminosity ranges, we found that the two clusters are dynamically relaxed enough to drive significant mass segregation and the evaporation of some fraction of low-mass stars. There seems to be a signature of a tidal tail in NGC 1245, but the signal is too weak to be confirmed.

  8. Extreme magnification of an individual star at redshift 1.5 by a galaxy-cluster lens

    NASA Astrophysics Data System (ADS)

    Kelly, Patrick L.; Diego, Jose M.; Rodney, Steven; Kaiser, Nick; Broadhurst, Tom; Zitrin, Adi; Treu, Tommaso; Pérez-González, Pablo G.; Morishita, Takahiro; Jauzac, Mathilde; Selsing, Jonatan; Oguri, Masamune; Pueyo, Laurent; Ross, Timothy W.; Filippenko, Alexei V.; Smith, Nathan; Hjorth, Jens; Cenko, S. Bradley; Wang, Xin; Howell, D. Andrew; Richard, Johan; Frye, Brenda L.; Jha, Saurabh W.; Foley, Ryan J.; Norman, Colin; Bradac, Marusa; Zheng, Weikang; Brammer, Gabriel; Benito, Alberto Molino; Cava, Antonio; Christensen, Lise; de Mink, Selma E.; Graur, Or; Grillo, Claudio; Kawamata, Ryota; Kneib, Jean-Paul; Matheson, Thomas; McCully, Curtis; Nonino, Mario; Pérez-Fournon, Ismael; Riess, Adam G.; Rosati, Piero; Schmidt, Kasper Borello; Sharon, Keren; Weiner, Benjamin J.

    2018-04-01

    Galaxy-cluster gravitational lenses can magnify background galaxies by a total factor of up to 50. Here we report an image of an individual star at redshift z = 1.49 (dubbed MACS J1149 Lensed Star 1) magnified by more than ×2,000. A separate image, detected briefly 0.26″ from Lensed Star 1, is probably a counterimage of the first star demagnified for multiple years by an object of ≳3 solar masses in the cluster. For reasonable assumptions about the lensing system, microlensing fluctuations in the stars' light curves can yield evidence about the mass function of intracluster stars and compact objects, including binary fractions and specific stellar evolution and supernova models. Dark-matter subhaloes or massive compact objects may help to account for the two images' long-term brightness ratio.

  9. In-beam Fission Study at JAEA

    NASA Astrophysics Data System (ADS)

    Nishio, Katsuhisa

    2013-12-01

    Fission fragment mass distributions were measured in heavy-ion induced fissions using 238U target nucleus. The measured mass distributions changed drastically with incident energy. The results are explained by a change of the ratio between fusion and quasifission with nuclear orientation. A calculation based on a fluctuation dissipation model reproduced the mass distributions and their incident energy dependence. Fusion probability was determined in the analysis. Evaporation residue cross sections were calculated with a statistical model in the reactions of 30Si + 238U and 34S + 238U using the obtained fusion probability in the entrance channel. The results agree with the measured cross sections for seaborgium and hassium isotopes.

  10. In-beam fission study for Heavy Element Synthesis

    NASA Astrophysics Data System (ADS)

    Nishio, Katsuhisa

    2013-12-01

    Fission fragment mass distributions were measured in heavy-ion induced fissions using 238U target nucleus. The measured mass distributions changed drastically with incident energy. The results are explained by a change of the ratio between fusion and quasifission with nuclear orientation. A calculation based on a fluctuation dissipation model reproduced the mass distributions and their incident energy dependence. Fusion probability was determined in the analysis. Evaporation residue cross sections were calculated with a statistical model in the reactions of 30Si + 238U and 34S + 238U using the obtained fusion probability in the entrance channel. The results agree with the measured cross sections for seaborgium and hassium isotopes.

  11. An analytical approach to gravitational lensing by an ensemble of axisymmetric lenses

    NASA Technical Reports Server (NTRS)

    Lee, Man Hoi; Spergel, David N.

    1990-01-01

    The problem of gravitational lensing by an ensemble of identical axisymmetric lenses randomly distributed on a single lens plane is considered and a formal expression is derived for the joint probability density of finding shear and convergence at a random point on the plane. The amplification probability for a source can be accurately estimated from the distribution in shear and convergence. This method is applied to two cases: lensing by an ensemble of point masses and by an ensemble of objects with Gaussian surface mass density. There is no convergence for point masses whereas shear is negligible for wide Gaussian lenses.

  12. Fission and quasifission of composite systems with Z =108 -120 : Transition from heavy-ion reactions involving S and Ca to Ti and Ni ions

    NASA Astrophysics Data System (ADS)

    Kozulin, E. M.; Knyazheva, G. N.; Novikov, K. V.; Itkis, I. M.; Itkis, M. G.; Dmitriev, S. N.; Oganessian, Yu. Ts.; Bogachev, A. A.; Kozulina, N. I.; Harca, I.; Trzaska, W. H.; Ghosh, T. K.

    2016-11-01

    Background: Compound nucleus formation in reactions with heavy ions is suppressed by the quasifission process to a degree that depends on the reaction entrance channel. Purpose: Investigation of fission and quasifission processes in the reactions 36S, 48Ca, 48Ti, and 64Ni + 238U at energies around the Coulomb barrier. Methods: Mass-energy distributions of fissionlike fragments formed in the reaction 48Ti + 238U at energies of 247, 258, and 271 MeV have been measured using the double-arm time-of-flight spectrometer CORSET at the U400 cyclotron of the Flerov Laboratory of Nuclear Reactions and compared with mass-energy distributions for the reactions 36S, 48Ca, 64Ni + 238U. Results: The most probable fragment masses as well as the total kinetic energies and their dispersions have been investigated as functions of the interaction energy for asymmetric and symmetric fragments in the studied reactions. The fusion probabilities have been deduced from the analysis of mass-energy distributions. Conclusion: The estimated fusion probability for the reactions of S, Ca, Ti, and Ni ions with actinide nuclei shows that it depends exponentially on the mean fissility parameter of the system. For the reactions with actinide nuclei leading to the formation of superheavy elements the fusion probabilities are several orders of magnitude higher than in the case of cold fusion reactions.

  13. A mass reconstruction technique for a heavy resonance decaying to τ + τ -

    NASA Astrophysics Data System (ADS)

    Xia, Li-Gang

    2016-11-01

    For a resonance decaying to τ + τ -, it is difficult to reconstruct its mass accurately because of the presence of neutrinos in the decay products of the τ leptons. If the resonance is heavy enough, we show that its mass can be well determined by the momentum component of the τ decay products perpendicular to the velocity of the τ lepton, p_⊥, and the mass of the visible/invisible decay products, m_vis/inv, for τ decaying to hadrons/leptons. By sampling all kinematically allowed values of p_⊥ and m_vis/inv according to their joint probability distributions determined by the MC simulations, the mass of the mother resonance is taken to lie at the position with the maximal probability. Since p_⊥ and m_vis/inv are invariant under a boost along the τ lepton direction, the joint probability distributions are independent of the τ's origin. Thus this technique is able to determine the mass of an unknown resonance with no efficiency loss. It is tested using MC simulations of the physics processes pp → Z/h(125)/h(750) + X → ττ + X at 13 TeV. The ratio of the full width at half maximum to the peak value of the reconstructed mass distribution is found to be 20%-40% using the information of missing transverse energy. Supported by General Financial Grant from the China Postdoctoral Science Foundation (2015M581062)
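
    A toy sketch of the maximum-probability mass scan described above. In the real analysis the joint PDF of (p_⊥, m_vis/inv) is built from MC templates; here a simple analytic stand-in whose parameters scale with the hypothesised mass M is used, with purely illustrative numbers and no actual τ kinematics.

      import numpy as np
      from scipy import stats

      def template_pdf(p_perp, m_vis, M):
          """Hypothetical joint PDF of the two decay observables for mother mass M."""
          # assume p_perp peaks near M/4 and m_vis near 0.8 GeV; widths are made up
          return (stats.norm.pdf(p_perp, loc=M / 4.0, scale=0.1 * M) *
                  stats.norm.pdf(m_vis, loc=0.8, scale=0.3))

      def reconstruct_mass(events, masses):
          """Pick the mass hypothesis maximising the joint probability of all events."""
          log_prob = [np.sum(np.log(template_pdf(events[:, 0], events[:, 1], M) + 1e-300))
                      for M in masses]
          return masses[int(np.argmax(log_prob))]

      # fake "observed" events drawn from the M = 125 GeV template
      rng = np.random.default_rng(0)
      events = np.column_stack([rng.normal(125 / 4.0, 12.5, 200), rng.normal(0.8, 0.3, 200)])
      scan = np.linspace(50.0, 200.0, 151)
      print("reconstructed mass ~", reconstruct_mass(events, scan), "GeV")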

  14. Fathers matter: male body mass affects life-history traits in a size-dimorphic seabird

    PubMed Central

    Jenouvrier, Stéphanie; Börger, Luca; Weimerskirch, Henri; Ozgul, Arpat

    2017-01-01

    One of the predicted consequences of climate change is a shift in body mass distributions within animal populations. Yet body mass, an important component of the physiological state of an organism, can affect key life-history traits and consequently population dynamics. Over the past decades, the wandering albatross—a pelagic seabird providing bi-parental care with marked sexual size dimorphism—has exhibited an increase in average body mass and breeding success in parallel with experiencing increasing wind speeds. To assess the impact of these changes, we examined how body mass affects five key life-history traits at the individual level: adult survival, breeding probability, breeding success, chick mass and juvenile survival. We found that male mass impacted all traits examined except breeding probability, whereas female mass affected none. Adult male survival increased with increasing mass. Increasing adult male mass increased breeding success and mass of sons but not of daughters. Juvenile male survival increased with their chick mass. These results suggest that a higher investment in sons by fathers can increase their inclusive fitness, which is not the case for daughters. Our study highlights sex-specific differences in the effect of body mass on the life history of a monogamous species with bi-parental care. PMID:28469021

  15. [Weight loss in overweight or obese patients and family functioning].

    PubMed

    Jaramillo-Sánchez, Rosalba; Espinosa-de Santillana, Irene; Espíndola-Jaramillo, Ilia Angélica

    2012-01-01

    To determine the association between weight loss and family functioning, a cohort of 168 persons aged 20-49 years with overweight or obesity, of either sex and with no comorbidity, was studied at the nutrition department. Sociodemographic data were obtained and the FACES III instrument was applied to measure family functioning. At the third month the body mass index was reassessed. Descriptive statistical analysis and relative risk estimation were done. Obesity was present in 50.6%, and 59.53% of the obese did not lose weight. Family dysfunction was present in 56.6%, of whom 50% did not lose weight. Of the 43.4% from functional families, 9.52% did not lose weight (p = 0.001). The relative risk of not losing weight when belonging to a dysfunctional family was 4.03 (CI = 2.60-6.25). A significant association was found between the variables weight loss and family functioning. Belonging to a dysfunctional family may be a risk factor for not losing weight.
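
    The arithmetic behind a cohort relative risk like the 4.03 quoted above, shown with a 2x2 table; the counts below are hypothetical illustrations, not the study's raw data.

      import math

      # Hypothetical 2x2 cohort table (exposure = dysfunctional family,
      # outcome = did not lose weight); counts are made up for illustration.
      a, b = 48, 47    # exposed: outcome yes / no
      c, d = 9, 63     # unexposed: outcome yes / no

      risk_exposed = a / (a + b)
      risk_unexposed = c / (c + d)
      relative_risk = risk_exposed / risk_unexposed

      # Approximate 95% confidence interval on the log scale.
      se_log_rr = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
      ci_lo = math.exp(math.log(relative_risk) - 1.96 * se_log_rr)
      ci_hi = math.exp(math.log(relative_risk) + 1.96 * se_log_rr)
      print(f"RR = {relative_risk:.2f} (95% CI {ci_lo:.2f}-{ci_hi:.2f})")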

  16. Uncertainty quantification for nuclear density functional theory and information content of new measurements

    DOE PAGES

    McDonnell, J. D.; Schunck, N.; Higdon, D.; ...

    2015-03-24

    Statistical tools of uncertainty quantification can be used to assess the information content of measured observables with respect to present-day theoretical models, to estimate model errors and thereby improve predictive capability, to extrapolate beyond the regions reached by experiment, and to provide meaningful input to applications and planned measurements. To showcase new opportunities offered by such tools, we make a rigorous analysis of theoretical statistical uncertainties in nuclear density functional theory using Bayesian inference methods. By considering the recent mass measurements from the Canadian Penning Trap at Argonne National Laboratory, we demonstrate how the Bayesian analysis and a direct least-squares optimization, combined with high-performance computing, can be used to assess the information content of the new data with respect to a model based on the Skyrme energy density functional approach. Employing the posterior probability distribution computed with a Gaussian process emulator, we apply the Bayesian framework to propagate theoretical statistical uncertainties in predictions of nuclear masses, two-neutron dripline, and fission barriers. Overall, we find that the new mass measurements do not impose a constraint that is strong enough to lead to significant changes in the model parameters. In addition, the example discussed in this study sets the stage for quantifying and maximizing the impact of new measurements with respect to current modeling and guiding future experimental efforts, thus enhancing the experiment-theory cycle in the scientific method.
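
    A schematic of the emulator-based Bayesian workflow described above, with a toy one-dimensional "chi-square surface" standing in for the expensive Skyrme-functional calculations; scikit-learn's GaussianProcessRegressor plays the role of the emulator and all numbers are invented.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      rng = np.random.default_rng(1)

      # Toy stand-in for an expensive chi^2(theta) surface (one model parameter).
      def chi2(theta):
          return (theta - 1.3) ** 2 / 0.05 + 0.2 * np.sin(5 * theta)

      # 1) Train a Gaussian-process emulator on a handful of "expensive" evaluations.
      train_x = np.linspace(0.0, 3.0, 15).reshape(-1, 1)
      train_y = chi2(train_x.ravel())
      gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True).fit(train_x, train_y)

      # 2) Sample the posterior p(theta) ~ exp(-chi^2_emulated / 2) with a Metropolis walk.
      def log_post(theta):
          return -0.5 * gp.predict(np.array([[theta]]))[0]

      samples, theta = [], 1.0
      for _ in range(10000):
          prop = theta + rng.normal(0, 0.1)
          if np.log(rng.random()) < log_post(prop) - log_post(theta):
              theta = prop
          samples.append(theta)
      samples = np.array(samples[2000:])

      # 3) Propagate the parameter posterior to a predicted observable (toy linear response).
      predicted_mass = 100.0 + 3.0 * samples   # hypothetical observable
      print(f"theta = {samples.mean():.3f} +/- {samples.std():.3f}")
      print(f"predicted observable = {predicted_mass.mean():.1f} +/- {predicted_mass.std():.1f}")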

  17. Uncertainty quantification for nuclear density functional theory and information content of new measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDonnell, J. D.; Schunck, N.; Higdon, D.

    2015-03-24

    Statistical tools of uncertainty quantification can be used to assess the information content of measured observables with respect to present-day theoretical models, to estimate model errors and thereby improve predictive capability, to extrapolate beyond the regions reached by experiment, and to provide meaningful input to applications and planned measurements. To showcase new opportunities offered by such tools, we make a rigorous analysis of theoretical statistical uncertainties in nuclear density functional theory using Bayesian inference methods. By considering the recent mass measurements from the Canadian Penning Trap at Argonne National Laboratory, we demonstrate how the Bayesian analysis and a direct least-squares optimization, combined with high-performance computing, can be used to assess the information content of the new data with respect to a model based on the Skyrme energy density functional approach. Employing the posterior probability distribution computed with a Gaussian process emulator, we apply the Bayesian framework to propagate theoretical statistical uncertainties in predictions of nuclear masses, two-neutron dripline, and fission barriers. Overall, we find that the new mass measurements do not impose a constraint that is strong enough to lead to significant changes in the model parameters. As a result, the example discussed in this study sets the stage for quantifying and maximizing the impact of new measurements with respect to current modeling and guiding future experimental efforts, thus enhancing the experiment-theory cycle in the scientific method.

  18. Probability function of breaking-limited surface elevation. [wind generated waves of ocean

    NASA Technical Reports Server (NTRS)

    Tung, C. C.; Huang, N. E.; Yuan, Y.; Long, S. R.

    1989-01-01

    The effect of wave breaking on the probability function of surface elevation is examined. The surface elevation limited by wave breaking, ζ_b(t), is first related to the original wave elevation ζ(t) and its second derivative. An approximate, second-order, nonlinear, non-Gaussian model for ζ(t) of arbitrary but moderate bandwidth is presented, and an expression for the probability density function of ζ_b(t) is derived. The results show clearly that the effect of wave breaking on the probability density function of surface elevation is to introduce a secondary hump on the positive side of the probability density function, a phenomenon also observed in wind wave tank experiments.

  19. Improved Membership Probability for Moving Groups: Bayesian and Machine Learning Approaches

    NASA Astrophysics Data System (ADS)

    Lee, Jinhee; Song, Inseok

    2018-01-01

    Gravitationally unbound loose stellar associations (i.e., young nearby moving groups: moving groups hereafter) have been intensively explored because they are important in planet and disk formation studies, exoplanet imaging, and age calibration. Among the many efforts devoted to the search for moving group members, a Bayesian approach (e.g., using the code BANYAN) has become popular recently because of the many advantages it offers. However, the resultant membership probability needs to be carefully adopted because of its sensitive dependence on input models. In this study, we have developed an improved membership calculation tool focusing on the beta-Pic moving group. We made three improvements for building the models used in BANYAN II: (1) updating the list of accepted members by re-assessing memberships in terms of position, motion, and age, (2) investigating member distribution functions in XYZ, and (3) exploring field star distribution functions in XYZUVW. Our improved tool can change membership probabilities by up to 70%. Membership probability is critical and must be better defined. For example, our code identifies only one third of the candidate members in SIMBAD that are believed to be kinematically associated with the beta-Pic moving group. Additionally, we performed a cluster analysis of young nearby stars using an unsupervised machine learning approach. As more moving groups and their members are identified, the complexity and ambiguity in the moving group configuration have increased. To clarify this issue, we analyzed ~4,000 X-ray bright young stellar candidates. Here, we present the preliminary results. By re-identifying moving groups with the least human intervention, we expect to understand the composition of the solar neighborhood. Moreover, better defined moving group membership will help us understand star formation and evolution in relatively low density environments, especially for the low-mass stars which will be identified in the coming Gaia release.
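
    A schematic of the two-hypothesis Bayesian membership probability such a tool computes, with multivariate Gaussians in XYZUVW standing in for the moving-group and field models; the means, dispersions, and prior below are invented placeholders, not the BANYAN II models.

      import numpy as np
      from scipy.stats import multivariate_normal

      # Invented 6-D (XYZUVW) Gaussian models; real tools build these from member
      # lists and field-star catalogues, as described in the abstract.
      group = multivariate_normal(mean=[10, -5, -15, -10, -16, -9],
                                  cov=np.diag([15, 15, 10, 1.5, 1.0, 1.5]) ** 2)
      field = multivariate_normal(mean=[0, 0, 0, -11, -22, -7],
                                  cov=np.diag([80, 80, 60, 35, 25, 20]) ** 2)
      prior_group = 0.01   # assumed prior fraction of group members

      def membership_probability(xyzuvw):
          """Posterior P(member | data) for a star's Galactic position + velocity vector."""
          lg = group.pdf(xyzuvw) * prior_group
          lf = field.pdf(xyzuvw) * (1.0 - prior_group)
          return lg / (lg + lf)

      star = [12.0, -3.0, -14.0, -10.2, -15.8, -9.1]   # hypothetical candidate
      print(f"membership probability = {membership_probability(star):.3f}")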

  20. Bipolar nebulae and mass loss from red giant stars

    NASA Technical Reports Server (NTRS)

    Cohen, M.

    1985-01-01

    Observations of several bipolar nebulae are used to learn something of the nature of mass loss from the probable red-giant progenitors of these nebulae. Phenomena discussed are: (1) probable GL 2688's optical molecular emissions; (2) newly discovered very high velocity knots along the axis of OH 0739 - 14, which reveal evidence for mass ejections of ±300 km/s from the M9 III star embedded in this nebula; (3) the bipolar structure of three extreme carbon stars, and the evidence for periodic mass ejection in IRC + 30219, also at high speed (about 80 km/s); and (4) the curious cool TiO-rich region above Parsamian 13, which may represent the very recent shedding of photospheric material from a cool, oxygen-rich giant. Several general key questions about bipolar nebulae that relate to the process of mass loss from their progenitor stars are raised.

  1. Statistical tests for whether a given set of independent, identically distributed draws comes from a specified probability density.

    PubMed

    Tygert, Mark

    2010-09-21

    We discuss several tests for determining whether a given set of independent and identically distributed (i.i.d.) draws does not come from a specified probability density function. The most commonly used are Kolmogorov-Smirnov tests, particularly Kuiper's variant, which focus on discrepancies between the cumulative distribution function for the specified probability density and the empirical cumulative distribution function for the given set of i.i.d. draws. Unfortunately, variations in the probability density function often get smoothed over in the cumulative distribution function, making it difficult to detect discrepancies in regions where the probability density is small in comparison with its values in surrounding regions. We discuss tests without this deficiency, complementing the classical methods. The tests of the present paper are based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability).
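
    A minimal scipy illustration of the classical test the abstract contrasts its method with, and of the deficiency it targets; the hypothesised density, sample sizes, and tail cluster below are arbitrary choices for illustration.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(7)

      # 475 genuine standard-normal draws plus 25 draws piled up far in the tail,
      # i.e. in a region where the hypothesised density is tiny.
      draws = np.concatenate([rng.standard_normal(475), rng.normal(5.0, 0.05, 25)])

      ks_stat, ks_p = stats.kstest(draws, "norm")   # Kolmogorov-Smirnov vs N(0,1)
      print(f"KS statistic = {ks_stat:.3f}, p-value = {ks_p:.3f}")

      # The tail cluster shifts the empirical CDF by only ~0.05, so the KS test may
      # fail to reject -- the kind of low-density discrepancy that gets smoothed over
      # in the CDF and that the paper's density-based tests are designed to catch.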

  2. Studies of the chemistry of vibrationally and electronically excited species in planetary upper atmospheres

    NASA Technical Reports Server (NTRS)

    Fox, J. L.

    1984-01-01

    The vibrational distribution of O2(+) in the atmospheres of Venus and Mars was investigated to compare with analogous values in the Earth's atmosphere. The dipole moment of the Z(2) Pi sub u - X(2) Pi sub g transition of O2(+) is calculated as a function of internuclear distance. The band absorption oscillator strengths and band transition probabilities of the second negative system are derived. The vibrational distribution of O2(+) in the ionosphere of Venus is calculated for a model based on data from the Pioneer Venus neutral mass spectrometer.

  3. First Principles Based Reactive Atomistic Simulations to Understand the Effects of Molecular Hypervelocity Impact on Cassini's Ion and Neutral Mass Spectrometer

    NASA Technical Reports Server (NTRS)

    Jaramillo-Botero, A.; Cheng, M-J; Cvicek, V.; Beegle, Luther W.; Hodyss, R.; Goddard, W. A., III

    2011-01-01

    We report here on the predicted impact of species such as ice-water, CO2, CH4, and NH3, on oxidized titanium, as well as HC species on diamond surfaces. These simulations provide the dynamics of product distributions during and after a hypervelocity impact event, ionization fractions, and dissociation probabilities for the various species of interest as a function of impact velocity (energy). We are using these results to determine the relevance of the fragmentation process to Cassini INMS results, and to quantify its effects on the observed spectra.

  4. Progress in the development of PDF turbulence models for combustion

    NASA Technical Reports Server (NTRS)

    Hsu, Andrew T.

    1991-01-01

    A combined Monte Carlo-computational fluid dynamic (CFD) algorithm was developed recently at Lewis Research Center (LeRC) for turbulent reacting flows. In this algorithm, conventional CFD schemes are employed to obtain the velocity field and other velocity related turbulent quantities, and a Monte Carlo scheme is used to solve the evolution equation for the probability density function (pdf) of species mass fraction and temperature. In combustion computations, the predictions of chemical reaction rates (the source terms in the species conservation equation) are poor if conventional turbulence models are used. The main difficulty lies in the fact that the reaction rate is highly nonlinear, and the use of averaged temperature produces excessively large errors. Moment closure models for the source terms have attained only limited success. The probability density function (pdf) method seems to be the only alternative at the present time that uses local instantaneous values of the temperature, density, etc., in predicting chemical reaction rates, and thus may be the only viable approach for more accurate turbulent combustion calculations. Assumed pdf's are useful in simple problems; however, for more general combustion problems, the solution of an evolution equation for the pdf is necessary.
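
    A small numeric illustration of why evaluating a nonlinear (Arrhenius-type) rate at the averaged temperature differs badly from averaging the rate over the temperature pdf; the Gaussian temperature pdf and activation temperature below are assumed values, not the output of any transported-pdf solver.

      import numpy as np

      rng = np.random.default_rng(0)

      # Assumed fluctuating temperature in a turbulent flame zone (Gaussian pdf).
      T = rng.normal(1500.0, 200.0, 1_000_000)   # K

      Ta = 15000.0                                # illustrative activation temperature, K
      def rate(temp):
          return np.exp(-Ta / temp)               # Arrhenius-type reaction rate

      mean_of_rate = rate(T).mean()               # what a pdf / Monte Carlo method estimates
      rate_of_mean = rate(T.mean())               # what using the averaged temperature gives
      print(f"<w(T)> / w(<T>) = {mean_of_rate / rate_of_mean:.2f}")
      # The ratio is well above 1: averaging the temperature first underestimates the
      # mean reaction rate, which is why transported-pdf methods are attractive.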

  5. Hypervelocity Impact Test Fragment Modeling: Modifications to the Fragment Rotation Analysis and Lightcurve Code

    NASA Technical Reports Server (NTRS)

    Gouge, Michael F.

    2011-01-01

    Hypervelocity impact tests on test satellites are performed by members of the orbital debris scientific community in order to understand and typify the on-orbit collision breakup process. By analysis of these test satellite fragments, the fragment size and mass distributions are derived and incorporated into various orbital debris models. These same fragments are currently being put to new use using emerging technologies. Digital models of these fragments are created using a laser scanner. A group of computer programs referred to as the Fragment Rotation Analysis and Lightcurve code uses these digital representations in a multitude of ways that describe, measure, and model on-orbit fragments and fragment behavior. The Dynamic Rotation subroutine generates all of the possible reflected intensities from a scanned fragment as if it were observed to rotate dynamically while in orbit about the Earth. This calls an additional subroutine that graphically displays the intensities and the resulting frequency of those intensities over a range of solar phase angles in a Probability Density Function plot. This document reports the additions and modifications to the subset of the Fragment Rotation Analysis and Lightcurve code concerned with the Dynamic Rotation and Probability Density Function plotting subroutines.

  6. Bayesian assessment of moving group membership: importance of models and prior knowledge

    NASA Astrophysics Data System (ADS)

    Lee, Jinhee; Song, Inseok

    2018-04-01

    Young nearby moving groups are important and useful in many fields of astronomy such as studying exoplanets, low-mass stars, and the stellar evolution of the early planetary systems over tens of millions of years, which has led to intensive searches for their members. Identification of members depends sensitively on the models used; therefore, careful examination of the models is required. In this study, we investigate the effects of the models used in moving group membership calculations based on a Bayesian framework (e.g. BANYAN II), focusing on the beta-Pictoris moving group (BPMG). Three improvements for building models are suggested: (1) updating a list of accepted members by re-assessing memberships in terms of position, motion, and age, (2) investigating member distribution functions in XYZ, and (3) exploring field star distribution functions in XYZ and UVW. The effect of each change is investigated, and we suggest using all of these improvements simultaneously in future membership probability calculations. Using this improved MG membership calculation and a careful examination of the age, 57 bona fide members of BPMG are confirmed, including 12 new members. We additionally suggest 17 highly probable members.

  7. Pulmonary function studies in young healthy Malaysians of Kelantan, Malaysia.

    PubMed

    Bandyopadhyay, Amit

    2011-11-01

    Pulmonary function tests have evolved as clinical tools in the diagnosis, management and follow up of respiratory diseases as they provide objective information about the status of an individual's respiratory system. The present study was aimed to evaluate pulmonary function among the male and female young Kelantanese Malaysians of Kota Bharu, Malaysia, and to compare the data with other populations. A total of 128 (64 males, 64 females) non-smoking healthy young subjects were randomly sampled for the study from the Kelantanese students' population of the University Sains Malaysia, Kota Bharu Campus, Kelantan, Malaysia. The study population (20-25 yr age group) had a similar socio-economic background. Each subject filled up the ATS (1978) questionnaire to record their personal demographic data, health status and consent to participate in the study. Subjects with any history of pulmonary diseases were excluded from the study. The pulmonary function measurements exhibited significantly higher values among males than females. FEV1% did not show any significant inter-group variation, probably because the parameter expresses FEV1 as a percentage of FVC. FVC and FEV1 exhibited significant correlations with body height and body mass among males, whereas in females they exhibited significant correlations with body mass, body weight and also with age. FEV1% exhibited significant correlation with body height and body mass among males and with body height in females. FEF25-75% did not show any significant correlation except with body height among females. However, PEFR exhibited significant positive correlation with all the physical parameters except with age among the females. On the basis of the existence of significant correlations between different physical parameters and pulmonary function variables, simple and multiple regression norms have been computed. From the present investigation it can be concluded that Kelantanese Malaysian youths have a normal range of pulmonary function in both sexes and the computed regression norms may be used to predict pulmonary function values in the studied population.

  8. Pulmonary function studies in young healthy Malaysians of Kelantan, Malaysia

    PubMed Central

    Bandyopadhyay, Amit

    2011-01-01

    Background & objectives: Pulmonary function tests have evolved as clinical tools in the diagnosis, management and follow up of respiratory diseases as they provide objective information about the status of an individual's respiratory system. The present study was aimed to evaluate pulmonary function among the male and female young Kelantanese Malaysians of Kota Bharu, Malaysia, and to compare the data with other populations. Methods: A total of 128 (64 males, 64 females) non-smoking healthy young subjects were randomly sampled for the study from the Kelantanese students’ population of the University Sains Malaysia, Kota Bharu Campus, Kelantan, Malaysia. The study population (20-25 yr age group) had a similar socio-economic background. Each subject filled up the ATS (1978) questionnaire to record their personal demographic data, health status and consent to participate in the study. Subjects with any history of pulmonary diseases were excluded from the study. Results: The pulmonary function measurements exhibited significantly higher values among males than females. FEV1% did not show any significant inter-group variation, probably because the parameter expresses FEV1 as a percentage of FVC. FVC and FEV1 exhibited significant correlations with body height and body mass among males, whereas in females they exhibited significant correlations with body mass, body weight and also with age. FEV1% exhibited significant correlation with body height and body mass among males and with body height in females. FEF25-75% did not show any significant correlation except with body height among females. However, PEFR exhibited significant positive correlation with all the physical parameters except with age among the females. On the basis of the existence of significant correlations between different physical parameters and pulmonary function variables, simple and multiple regression norms have been computed. Interpretation & conclusions: From the present investigation it can be concluded that Kelantanese Malaysian youths have a normal range of pulmonary function in both sexes and the computed regression norms may be used to predict pulmonary function values in the studied population. PMID:22199104

  9. Galactic Stellar and Substellar Initial Mass Function

    NASA Astrophysics Data System (ADS)

    Chabrier, Gilles

    2003-07-01

    We review recent determinations of the present-day mass function (PDMF) and initial mass function (IMF) in various components of the Galaxy (disk, spheroid, young, and globular clusters) and in conditions characteristic of early star formation. As a general feature, the IMF is found to depend weakly on the environment and to be well described by a power-law form for m>~1 Msolar and a lognormal form below, except possibly for early star formation conditions. The disk IMF for single objects has a characteristic mass around mc~0.08 Msolar and a variance in logarithmic mass σ~0.7, whereas the IMF for multiple systems has mc~0.2 Msolar and σ~0.6. The extension of the single MF into the brown dwarf regime is in good agreement with present estimates of L- and T-dwarf densities and yields a disk brown dwarf number density comparable to the stellar one, nBD~n*~0.1 pc^-3. The IMF of young clusters is found to be consistent with the disk field IMF, providing the same correction for unresolved binaries, confirming the fact that young star clusters and disk field stars represent the same stellar population. Dynamical effects, yielding depletion of the lowest mass objects, are found to become consequential for ages >~130 Myr. The spheroid IMF relies on much less robust grounds. The large metallicity spread in the local subdwarf photometric sample, in particular, remains puzzling. Recent observations suggest that there is a continuous kinematic shear between the thick-disk population, present in local samples, and the genuine spheroid one. This enables us to derive only an upper limit for the spheroid mass density and IMF. Within all the uncertainties, the latter is found to be similar to the one derived for globular clusters and is well represented also by a lognormal form with a characteristic mass slightly larger than for the disk, mc~0.2-0.3 Msolar, excluding a significant population of brown dwarfs in globular clusters and in the spheroid. The IMF characteristic of early star formation at large redshift remains undetermined, but different observational constraints suggest that it does not extend below ~1 Msolar. These results suggest a characteristic mass for star formation that decreases with time, from conditions prevailing at large redshift to conditions characteristic of the spheroid (or thick disk) to present-day conditions. These conclusions, however, remain speculative, given the large uncertainties in the spheroid and early star IMF determinations. These IMFs allow a reasonably robust determination of the Galactic present-day and initial stellar and brown dwarf contents. They also have important galactic implications beyond the Milky Way in yielding more accurate mass-to-light ratio determinations. The mass-to-light ratios obtained with the disk and the spheroid IMF yield values 1.8-1.4 times smaller than for a Salpeter IMF, respectively, in agreement with various recent dynamical determinations. This general IMF determination is examined in the context of star formation theory. None of the theories based on a Jeans-type mechanism, where fragmentation is due only to gravity, can fulfill all the observational constraints on star formation and predict a large number of substellar objects. On the other hand, recent numerical simulations of compressible turbulence, in particular in super-Alfvénic conditions, seem to reproduce both qualitatively and quantitatively the stellar and substellar IMF and thus provide an appealing theoretical foundation.
    In this picture, star formation is induced by the dissipation of large-scale turbulence to smaller scales through radiative MHD shocks, producing filamentary structures. These shocks produce local nonequilibrium structures with large density contrasts, which collapse eventually in gravitationally bound objects under the combined influence of turbulence and gravity. The concept of a single Jeans mass is replaced by a distribution of local Jeans masses, representative of the lognormal probability density function of the turbulent gas. Objects below the mean thermal Jeans mass still have a possibility to collapse, although with a decreasing probability.
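
    A sketch that evaluates an IMF of the lognormal-below/power-law-above form reviewed above, using the quoted disk single-object parameters (mc ~ 0.08 Msolar, σ ~ 0.7); the Salpeter-like high-mass slope and the overall normalization are assumptions for illustration.

      import numpy as np

      def imf(m, m_c=0.08, sigma=0.7, slope=-2.35):
          """dN/dlog(m): lognormal below 1 Msolar, power law above, joined continuously.
          m_c and sigma follow the review's disk single-object values; the high-mass
          slope is the standard Salpeter value and the normalization is arbitrary."""
          m = np.asarray(m, dtype=float)
          lognormal = np.exp(-(np.log10(m) - np.log10(m_c)) ** 2 / (2 * sigma ** 2))
          # match the two branches at m = 1 Msolar
          a = np.exp(-(0.0 - np.log10(m_c)) ** 2 / (2 * sigma ** 2))
          powerlaw = a * m ** (slope + 1)       # dN/dlogm = m * dN/dm  ->  exponent slope + 1
          return np.where(m < 1.0, lognormal, powerlaw)

      masses = np.array([0.02, 0.08, 0.3, 1.0, 3.0, 10.0])
      print(dict(zip(masses, np.round(imf(masses), 3))))
      # the characteristic mass m_c sets the peak of dN/dlogm near the substellar
      # boundary, with a Salpeter-like decline toward higher masses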

  10. A rare case of multiple schwannomas presenting with scrotal mass: a probable case of schwannomatosis.

    PubMed

    Ikari, Ryo; Okamoto, Keisei; Yoshida, Tetsuya; Johnin, Kazuyoshi; Okabe, Hidetoshi; Okada, Yusaku

    2010-08-01

    We report a rare case of multiple schwannomas presenting with a scrotal mass. In the present case, a scrotal schwannoma developed in a 66-year-old man with a history of brain tumor surgery. Investigation of the patient's past history led to a diagnosis of probable schwannomatosis. Patients with schwannomatosis are at increased risk of developing multiple schwannomas and these patients need regular surveillance. In this regard, the present case highlights the importance of thorough history taking in patients with scrotal schwannoma.

  11. An observational estimate of the probability of encounters between mass-losing evolved stars and molecular clouds

    NASA Astrophysics Data System (ADS)

    Kastner, Joel H.; Myers, P. C.

    1994-02-01

    One hypothesis for the elevated abundance of Al-26 present during the formation of the solar system is that an asymptotic giant branch (AGB) star expired within the molecular cloud (MC) containing the protosolar nebula. To test this hypothesis for star-forming clouds at the present epoch, we compared nearly complete lists of rapidly mass-losing AGB stars and MCs in the solar neighborhood and identified those stars which are most likely to encounter a nearby cloud. Roughly 10 stars satisfy our selection criteria. We estimated probabilities of encounter for these stars from the position of each star relative to cloud CO emission and the likely star-cloud distance along the line of sight. Typical encounter probabilities are approximately 1%. The number of potential encounters and the probability for each star-cloud pair to result in an encounter suggest that within 1 kpc of the Sun, there is an approximately 1% chance that a given cloud will be visited by a mass-losing AGB star over the next million years. The estimate is dominated by the possibility of encounters involving the stars IRC +60041 and S Cep. Over a MC lifetime, the probability for an AGB encounter may be as high as approximately 70%. We discuss the implications of these results for theories of Al-26 enrichment of processed and unprocessed meteoritic inclusions. If the Al-26 in either type of inclusion arose from AGB-MC interaction, the low probability estimated here seems to require that AGB-MC encounters trigger multiple star formation and/or that the production rate of AGB stars was higher during the epoch of solar system formation than at present. Various lines of evidence suggest only the more massive (5-8 solar mass) AGB stars can produce significant Al-26 enrichment of star-forming clouds.

  12. Computation of the Complex Probability Function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trainer, Amelia Jo; Ledwith, Patrick John

    The complex probability function is important in many areas of physics and many techniques have been developed in an attempt to compute it for some z quickly and efficiently. Most prominent are the methods that use Gauss-Hermite quadrature, which uses the roots of the nth-degree Hermite polynomial and corresponding weights to approximate the complex probability function. This document serves as an overview and discussion of the use, shortcomings, and potential improvements on the Gauss-Hermite quadrature for the complex probability function.
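
    A minimal sketch of the Gauss-Hermite approach described above, using the integral representation w(z) = (i/π) ∫ exp(-t²)/(z - t) dt valid for Im(z) > 0, compared against scipy's reference implementation; the node count and test point are arbitrary.

      import numpy as np
      from numpy.polynomial.hermite import hermgauss
      from scipy.special import wofz              # reference complex probability function

      def faddeeva_gh(z, n=40):
          """Gauss-Hermite approximation of w(z) = (i/pi) * int exp(-t^2)/(z - t) dt,
          using the roots and weights of the nth Hermite polynomial (Im(z) > 0)."""
          nodes, weights = hermgauss(n)
          return 1j / np.pi * np.sum(weights / (z - nodes))

      z = 1.5 + 0.8j
      approx = faddeeva_gh(z)
      print(approx, wofz(z), abs(approx - wofz(z)))
      # accuracy degrades as Im(z) -> 0, where the integrand becomes nearly singular --
      # one of the shortcomings such reports typically discuss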

  13. Probability and Quantum Paradigms: the Interplay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kracklauer, A. F.

    Since the introduction of Born's interpretation of quantum wave functions as yielding the probability density of presence, Quantum Theory and Probability have lived in a troubled symbiosis. Problems arise with this interpretation because quantum probabilities exhibit features alien to usual probabilities, namely non Boolean structure and non positive-definite phase space probability densities. This has inspired research into both elaborate formulations of Probability Theory and alternate interpretations for wave functions. Herein the latter tactic is taken and a suggested variant interpretation of wave functions based on photo detection physics proposed, and some empirical consequences are considered. Although incomplete in a few details, this variant is appealing in its reliance on well tested concepts and technology.

  14. Probability and Quantum Paradigms: the Interplay

    NASA Astrophysics Data System (ADS)

    Kracklauer, A. F.

    2007-12-01

    Since the introduction of Born's interpretation of quantum wave functions as yielding the probability density of presence, Quantum Theory and Probability have lived in a troubled symbiosis. Problems arise with this interpretation because quantum probabilities exhibit features alien to usual probabilities, namely non Boolean structure and non positive-definite phase space probability densities. This has inspired research into both elaborate formulations of Probability Theory and alternate interpretations for wave functions. Herein the latter tactic is taken and a suggested variant interpretation of wave functions based on photo detection physics proposed, and some empirical consequences are considered. Although incomplete in a few details, this variant is appealing in its reliance on well tested concepts and technology.

  15. Wireless cellular networks with Pareto-distributed call holding times

    NASA Astrophysics Data System (ADS)

    Rodriguez-Dagnino, Ramon M.; Takagi, Hideaki

    2001-07-01

    Nowadays, there is a growing interest in providing Internet access to mobile users. For instance, NTT DoCoMo in Japan deploys a major mobile phone network that offers the Internet service named 'i-mode' to more than 17 million subscribers. Internet traffic measurements show that the session duration, or Call Holding Time (CHT), has probability distributions with heavy tails, which tells us that they depart significantly from the traffic statistics of traditional voice services. In this environment, it is particularly important for a network designer to know the number of handovers during a call in order to appropriately dimension the virtual circuits for a wireless cell. The handover traffic has a direct impact on the Quality of Service (QoS); e.g. the service disruption due to handover failure may significantly degrade the specified QoS of time-constrained services. In this paper, we first study the random behavior of the number of handovers during a call, where we assume that the CHTs are Pareto distributed (a heavy-tail distribution) and the Cell Residence Times (CRT) are exponentially distributed. Our approach is based on renewal theory arguments. We present closed-form formulae for the probability mass function (pmf) of the number of handovers during a Pareto distributed CHT, and obtain the probability of call completion as well as handover rates. Most of the formulae are expressed in terms of the Whittaker's function. We compare the Pareto case with the cases of k-Erlang and hyperexponential distributions for the CHT.
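
    A Monte Carlo sketch of the quantity analyzed above (the pmf of the number of handovers when the CHT is Pareto and cell residence times are exponential); the parameters are illustrative, and the paper's closed-form Whittaker-function expressions are not reproduced here.

      import numpy as np

      rng = np.random.default_rng(3)
      n = 200_000

      # Illustrative parameters (not the paper's): Pareto CHT with shape 1.6
      # (heavy tail, finite mean) and scale 60 s; mean cell residence time 120 s.
      shape, scale, mean_crt = 1.6, 60.0, 120.0
      cht = scale * (1.0 + rng.pareto(shape, n))   # classical Pareto(shape, scale)

      # With exponential (memoryless) cell residence times, boundary crossings during a
      # call form a Poisson process, so the handover count given the CHT is Poisson(CHT/mean_CRT).
      handovers = rng.poisson(cht / mean_crt)

      values, counts = np.unique(handovers, return_counts=True)
      pmf = counts / n
      print({int(v): round(float(p), 4) for v, p in zip(values[:6], pmf[:6])})
      print("P(no handover) =", round(float(pmf[0]), 4))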

  16. Density Weighted FDF Equations for Simulations of Turbulent Reacting Flows

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Liu, Nan-Suey

    2011-01-01

    In this report, we briefly revisit the formulation of the density weighted filtered density function (DW-FDF) for large eddy simulation (LES) of turbulent reacting flows, which was proposed by Jaberi et al. (Jaberi, F.A., Colucci, P.J., James, S., Givi, P. and Pope, S.B., Filtered mass density function for large-eddy simulation of turbulent reacting flows, J. Fluid Mech., vol. 401, pp. 85-121, 1999). First, we follow the traditional derivation of the DW-FDF equations by using the fine-grained probability density function (FG-PDF); then we explore another way of constructing the DW-FDF equations by starting directly from the compressible Navier-Stokes equations. We observe that the terms which are unclosed in the traditional DW-FDF equations are now closed in the newly constructed DW-FDF equations. This significant difference and its practical impact on computational simulations may deserve further study.

  17. Approved Methods and Algorithms for DoD Risk-Based Explosives Siting

    DTIC Science & Technology

    2009-07-21

    [Fragmentary extract of the report's variable definitions: Phit, "Probability of hit", an array value indexed by consequence and mass bin; Phit (f), "Probability of hit for fatality"; Phit (maji), "Probability of hit for major ..."; CCa, a variable relating to the probability of being in the glass hazard area. Table and equation cross-references omitted.]

  18. We are not the 99 percent: quantifying asphericity in the distribution of Local Group satellites

    NASA Astrophysics Data System (ADS)

    Forero-Romero, Jaime E.; Arias, Verónica

    2018-05-01

    We use simulations to build an analytic probability distribution for the asphericity in the satellite distribution around Local Group (LG) type galaxies in the Lambda Cold Dark Matter (LCDM) paradigm. We use this distribution to estimate the atypicality of the satellite distributions in the LG even when the underlying simulations do not have enough systems fully resembling the LG in terms of its typical masses, separation and kinematics. We demonstrate the method using three different simulations (Illustris-1, Illustris-1-Dark and ELVIS) and a number of satellites ranging from 11 to 15. Detailed results differ greatly among the simulations suggesting a strong influence of the typical DM halo mass, the number of satellites and the simulated baryonic effects. However, there are three common trends. First, at most 2% of the pairs are expected to have satellite distributions with the same asphericity as the LG; second, at most 80% of the pairs have a halo with a satellite distribution as aspherical as in M31; and third, at most 4% of the pairs have a halo with satellite distribution as planar as in the MW. These quantitative results place the LG at the level of a 3σ outlier in the LCDM paradigm. We suggest that understanding the reasons for this atypicality requires quantifying the asphericity probability distribution as a function of halo mass and large scale environment. The approach presented here can facilitate that kind of study and other comparisons between different numerical setups and choices to study satellites around LG pairs in simulations.

  19. 30+ New & Known SB2s in the SDSS-III/APOGEE M Dwarf Ancillary Science Project Sample

    NASA Astrophysics Data System (ADS)

    Skinner, Jacob; Covey, Kevin; Bender, Chad; De Lee, Nathan Michael; Chojnowski, Drew; Troup, Nicholas; Badenes, Carles; Mahadevan, Suvrath; Terrien, Ryan

    2018-01-01

    Close stellar binaries can drive dynamical interactions that affect the structure and evolution of planetary systems. Binary surveys indicate that the multiplicity fraction and typical orbital separation decrease with primary mass, but correlations with higher order architectural parameters such as the system's mass ratio are less well constrained. We seek to identify and characterize double-lined spectroscopic binaries (SB2s) among the 1350 M dwarf ancillary science targets with APOGEE spectra in the SDSS-III Data Release 13. We quantitatively measure the degree of asymmetry in the APOGEE pipeline cross-correlation functions (CCFs), and use those metrics to identify a sample of 44 high-likelihood candidate SB2s. Extracting radial velocities (RVs) for both binary components from the CCF, we then measure mass ratios for 31 SB2s; we also use Bayesian techniques to fit orbits for 4 systems with 8 or more distinct APOGEE observations. The (incomplete) mass ratio distribution of this sample rises quickly towards unity. Two-sided Kolmogorov-Smirnov (K-S) tests find probabilities of 13.8% and 14.2% that the M dwarf mass ratio distribution is consistent with those measured by Pourbaix et al. (2004) and Fernandez et al. (2017), respectively. The samples analyzed by Pourbaix et al. and Fernandez et al. are dominated by higher-mass solar type stars; this suggests that the mass ratio distribution of close binaries is not strongly dependent on primary mass.

  20. Measurement of the Top Quark Mass Simultaneously in Dilepton and Lepton + Jets Decay Channels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fedorko, Wojciech T.

    2008-12-01

    The authors present the first measurement of the top quark mass using simultaneously data from two decay channels. They use a data sample of √s = 1.96 TeV collisions with integrated luminosity of 1.9 fb^-1 collected by the CDF II detector. They select dilepton and lepton + jets channel decays of t\bar{t} pairs and reconstruct two observables in each topology. They use non-parametric techniques to derive probability density functions from simulated signal and background samples. The observables are the reconstructed top quark mass and the scalar sum of transverse energy of the event in the dilepton topology and the reconstructed top quark mass and the invariant mass of jets from the W boson decay in the lepton + jets channel. They perform a simultaneous fit for the top quark mass and the jet energy scale which is constrained in situ by the hadronic W boson resonance from the lepton + jets channel. Using 144 dilepton candidate events and 332 lepton + jets candidate events they measure: M_top = 171.9 ± 1.7 (stat. + JES) ± 1.1 (other sys.) GeV/c^2 = 171.9 ± 2.0 GeV/c^2. The measurement features a robust treatment of the systematic uncertainties, correlated between the two channels, and develops techniques for a future top quark mass measurement simultaneously in all decay channels. Measurements of the W boson mass and the top quark mass provide a constraint on the mass of the yet unobserved Higgs boson. The Higgs boson mass implied by the measurement presented here is higher than the Higgs boson mass implied by previously published, most precise CDF measurements of the top quark mass in the lepton + jets and dilepton channels separately.

  1. THE CLUSTERING OF ALFALFA GALAXIES: DEPENDENCE ON H I MASS, RELATIONSHIP WITH OPTICAL SAMPLES, AND CLUES OF HOST HALO PROPERTIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Papastergis, Emmanouil; Giovanelli, Riccardo; Haynes, Martha P.

    We use a sample of ≈6000 galaxies detected by the Arecibo Legacy Fast ALFA (ALFALFA) 21 cm survey to measure the clustering properties of H I-selected galaxies. We find no convincing evidence for a dependence of clustering on galactic atomic hydrogen (H I) mass, over the range M_HI ≈ 10^8.5-10^10.5 M_☉. We show that previously reported results of weaker clustering for low H I mass galaxies are probably due to finite-volume effects. In addition, we compare the clustering of ALFALFA galaxies with optically selected samples drawn from the Sloan Digital Sky Survey (SDSS). We find that H I-selected galaxies cluster more weakly than even relatively optically faint galaxies, when no color selection is applied. Conversely, when SDSS galaxies are split based on their color, we find that the correlation function of blue optical galaxies is practically indistinguishable from that of H I-selected galaxies. At the same time, SDSS galaxies with red colors are found to cluster significantly more than H I-selected galaxies, a fact that is evident in both the projected as well as the full two-dimensional correlation function. A cross-correlation analysis further reveals that gas-rich galaxies 'avoid' being located within ≈3 Mpc of optical galaxies with red colors. Next, we consider the clustering properties of halo samples selected from the Bolshoi ΛCDM simulation. A comparison with the clustering of ALFALFA galaxies suggests that galactic H I mass is not tightly related to host halo mass and that a sizable fraction of subhalos do not host H I galaxies. Lastly, we find that we can recover fairly well the correlation function of H I galaxies by just excluding halos with low spin parameter. This finding lends support to the hypothesis that halo spin plays a key role in determining the gas content of galaxies.

  2. Performance of shear wave elastography for differentiation of benign and malignant solid breast masses.

    PubMed

    Li, Guiling; Li, De-Wei; Fang, Yu-Xiao; Song, Yi-Jiang; Deng, Zhu-Jun; Gao, Jian; Xie, Yan; Yin, Tian-Sheng; Ying, Li; Tang, Kai-Fu

    2013-01-01

    To perform a meta-analysis assessing the ability of shear wave elastography (SWE) to identify malignant breast masses. PubMed, the Cochrane Library, and the ISI Web of Knowledge were searched for studies evaluating the accuracy of SWE for identifying malignant breast masses. The diagnostic accuracy of SWE was evaluated according to sensitivity, specificity, and hierarchical summary receiver operating characteristic (HSROC) curves. An analysis was also performed according to the SWE mode used: supersonic shear imaging (SSI) and the acoustic radiation force impulse (ARFI) technique. The clinical utility of SWE for identifying malignant breast masses was evaluated using Fagan plot analysis. A total of 9 studies, including 1888 women and 2000 breast masses, were analyzed. Summary sensitivities and specificities were 0.91 (95% confidence interval [CI], 0.88-0.94) and 0.82 (95% CI, 0.75-0.87) by SSI and 0.89 (95% CI, 0.81-0.94) and 0.91 (95% CI, 0.84-0.95) by ARFI, respectively. The HSROCs for SSI and ARFI were 0.92 (95% CI, 0.90-0.94) and 0.96 (95% CI, 0.93-0.97), respectively. SSI and ARFI were both very informative, with probabilities of 83% and 91%, respectively, for correctly differentiating between benign and malignant breast masses following a "positive" measurement (over the threshold value) and probabilities of disease as low as 10% and 11%, respectively, following a "negative" measurement (below the threshold value) when the pre-test probability was 50%. SWE could be used as a good identification tool for the classification of breast masses.
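
    The post-test probabilities quoted above follow from Bayes' rule in odds form, which is the calculation underlying a Fagan plot. A minimal sketch using the summary sensitivities and specificities reported in the abstract:

```python
def post_test_probability(pre_test, sensitivity, specificity, positive=True):
    """Bayes' rule in odds form, as read off a Fagan plot."""
    lr = sensitivity / (1 - specificity) if positive else (1 - sensitivity) / specificity
    pre_odds = pre_test / (1 - pre_test)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

# SSI: sensitivity 0.91, specificity 0.82; pre-test probability 50%
print(post_test_probability(0.5, 0.91, 0.82, positive=True))   # ~0.83
print(post_test_probability(0.5, 0.91, 0.82, positive=False))  # ~0.10
# ARFI: sensitivity 0.89, specificity 0.91
print(post_test_probability(0.5, 0.89, 0.91, positive=True))   # ~0.91
print(post_test_probability(0.5, 0.89, 0.91, positive=False))  # ~0.11
```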

  3. Conditions for Optimal Growth of Black Hole Seeds

    NASA Astrophysics Data System (ADS)

    Pacucci, Fabio; Natarajan, Priyamvada; Volonteri, Marta; Cappelluti, Nico; Urry, C. Megan

    2017-12-01

    Supermassive black holes weighing up to ~10^9 M⊙ are in place by z ~ 7, when the age of the universe is ≲1 Gyr. This implies a time crunch for their growth, since such high masses cannot be easily reached in standard accretion scenarios. Here, we explore the physical conditions that would lead to optimal growth wherein stable super-Eddington accretion would be permitted. Our analysis suggests that the preponderance of optimal conditions depends on two key parameters: the black hole mass and the host galaxy central gas density. In the high-efficiency region of this parameter space, a continuous stream of gas can accrete onto the black hole from large to small spatial scales, assuming a global isothermal profile for the host galaxy. Using analytical initial mass functions for black hole seeds, we find an enhanced probability of high-efficiency growth for seeds with initial masses ≳10^4 M⊙. Our picture suggests that a large population of high-z lower-mass black holes that formed in the low-efficiency region, with low duty cycles and accretion rates, might remain undetectable as quasars, since we predict their bolometric luminosities to be ≲10^41 erg s^-1. The presence of these sources might be revealed only via gravitational wave detections of their mergers.

  4. A poisson process model for hip fracture risk.

    PubMed

    Schechner, Zvi; Luo, Gangming; Kaufman, Jonathan J; Siffert, Robert S

    2010-08-01

    The primary method for assessing fracture risk in osteoporosis relies primarily on measurement of bone mass. Estimation of fracture risk is most often evaluated using logistic or proportional hazards models. Notwithstanding the success of these models, there is still much uncertainty as to who will or will not suffer a fracture. This has led to a search for other components besides mass that affect bone strength. The purpose of this paper is to introduce a new mechanistic stochastic model that characterizes the risk of hip fracture in an individual. A Poisson process is used to model the occurrence of falls, which are assumed to occur at a rate, lambda. The load induced by a fall is assumed to be a random variable that has a Weibull probability distribution. The combination of falls together with loads leads to a compound Poisson process. By retaining only those occurrences of the compound Poisson process that result in a hip fracture, a thinned Poisson process is defined that itself is a Poisson process. The fall rate is modeled as an affine function of age, and hip strength is modeled as a power law function of bone mineral density (BMD). The risk of hip fracture can then be computed as a function of age and BMD. By extending the analysis to a Bayesian framework, the conditional densities of BMD given a prior fracture and no prior fracture can be computed and shown to be consistent with clinical observations. In addition, the conditional probabilities of fracture given a prior fracture and no prior fracture can also be computed, and also demonstrate results similar to clinical data. The model elucidates the fact that the hip fracture process is inherently random and improvements in hip strength estimation over and above that provided by BMD operate in a highly "noisy" environment and may therefore have little ability to impact clinical practice.
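
    A minimal sketch of the thinned-Poisson idea described above: falls arrive as a Poisson process whose rate is affine in age, each fall load is Weibull-distributed, and only loads exceeding a power-law strength in BMD are retained as fractures. All numerical parameter values below are hypothetical placeholders, not the paper's fitted values.

```python
import numpy as np

def hip_fracture_probability(age, bmd, years=1.0,
                             fall_rate_base=0.5, fall_rate_slope=0.02,
                             weibull_shape=2.0, weibull_scale=3000.0,
                             strength_coeff=9000.0, strength_exp=1.5):
    """Illustrative thinned-Poisson hip fracture risk (all parameters are
    placeholders).  Falls occur as a Poisson process with a rate affine in
    age; each fall load is Weibull; a fall causes fracture when the load
    exceeds a power-law strength in BMD."""
    lam = fall_rate_base + fall_rate_slope * age          # falls per year
    strength = strength_coeff * bmd ** strength_exp        # nominal strength
    # Weibull survival function: P(load > strength)
    p_fracture_given_fall = np.exp(-(strength / weibull_scale) ** weibull_shape)
    # Thinning: fracture events form a Poisson process with rate lam * p
    return 1.0 - np.exp(-lam * p_fracture_given_fall * years)

print(hip_fracture_probability(age=80, bmd=0.6))
print(hip_fracture_probability(age=80, bmd=0.9))
```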

  5. ALMA Detects CO(3-2) within a Super Star Cluster in NGC 5253

    NASA Astrophysics Data System (ADS)

    Turner, Jean L.; Consiglio, S. Michelle; Beck, Sara C.; Goss, W. M.; Ho, Paul. T. P.; Meier, David S.; Silich, Sergiy; Zhao, Jun-Hui

    2017-09-01

    We present observations of CO(3-2) and 13CO(3-2) emission near the supernebula in the dwarf galaxy NGC 5253, which contains one of the best examples of a potential globular cluster in formation. The 0.″3 resolution images reveal an unusual molecular cloud, “Cloud D1,” that is coincident with the radio-infrared supernebula. The ~6 pc diameter cloud has a linewidth, Δv = 21.7 km s^-1, that reflects only the gravitational potential of the star cluster residing within it. The corresponding virial mass is 2.5 × 10^5 M⊙. The cluster appears to have a top-heavy initial mass function, with M_* ≳ 1-2 M⊙. Cloud D1 is optically thin in CO(3-2), probably because the gas is hot. Molecular gas mass is very uncertain but constitutes <35% of the dynamical mass within the cloud boundaries. In spite of the presence of an estimated ~1500-2000 O stars within the small cloud, the CO appears relatively undisturbed. We propose that Cloud D1 consists of molecular clumps or cores, possibly star-forming, orbiting with more evolved stars in the core of the giant cluster.

  6. Negative Ion Chemistry in the Coma of Comet 1P/Halley

    NASA Technical Reports Server (NTRS)

    Cordiner, M. A.; Charnley, S. B.

    2012-01-01

    Negative ions (anions) were identified in the coma of comet 1P/Halley from in-situ measurements performed by the Giotto spacecraft in 1986. These anions were detected with masses in the range 7-110 amu, but with insufficient mass resolution to permit unambiguous identification. We present details of a new chemical-hydrodynamic model for the coma of comet Halley that includes - for the first time - atomic and molecular anions, in addition to a comprehensive hydrocarbon chemistry. Anion number densities are calculated as a function of radius in the coma, and compared with the Giotto results. Important anion production mechanisms are found to include radiative electron attachment, polar photodissociation, dissociative electron attachment, and proton transfer. The polyyne anions C4H(-) and C6H(-) are found to be likely candidates to explain the Giotto anion mass spectrum in the range 49-73 amu. The CN(-) anion probably makes a significant contribution to the mass spectrum at 26 amu. Larger carbon-chain anions such as C8H(-) can explain the peak near 100 amu provided there is a source of large carbon-chain-bearing molecules from the cometary nucleus.

  7. Grading system to categorize breast MRI using BI-RADS 5th edition: a statistical study of non-mass enhancement descriptors in terms of probability of malignancy.

    PubMed

    Asada, Tatsunori; Yamada, Takayuki; Kanemaki, Yoshihide; Fujiwara, Keishi; Okamoto, Satoko; Nakajima, Yasuo

    2018-03-01

    To analyze the association of breast non-mass enhancement descriptors in the BI-RADS 5th edition with malignancy, and to establish a grading system and categorization of descriptors. This study was approved by our institutional review board. A total of 213 patients were enrolled. Breast MRI was performed with a 1.5-T MRI scanner using a 16-channel breast radiofrequency coil. Two radiologists determined internal enhancement and distribution of non-mass enhancement by consensus. Corresponding pathologic diagnoses were obtained by either biopsy or surgery. The probability of malignancy by descriptor was analyzed using Fisher's exact test and multivariate logistic regression analysis. The probability of malignancy by category was analyzed using Fisher's exact and multi-group comparison tests. One hundred seventy-eight lesions were malignant. Multivariate model analysis showed that internal enhancement (homogeneous vs others, p < 0.001, heterogeneous and clumped vs clustered ring, p = 0.003) and distribution (focal and linear vs segmental, p < 0.001) were the significant explanatory variables. The descriptors were classified into three grades of suspicion, and the categorization (3, 4A, 4B, 4C, and 5) by sum-up grades showed an incremental increase in the probability of malignancy (p < 0.0001). The three-grade criteria and categorization by sum-up grades of descriptors appear valid for non-mass enhancement.

  8. Population density approach for discrete mRNA distributions in generalized switching models for stochastic gene expression.

    PubMed

    Stinchcombe, Adam R; Peskin, Charles S; Tranchina, Daniel

    2012-06-01

    We present a generalization of a population density approach for modeling and analysis of stochastic gene expression. In the model, the gene of interest fluctuates stochastically between an inactive state, in which transcription cannot occur, and an active state, in which discrete transcription events occur; and the individual mRNA molecules are degraded stochastically in an independent manner. This sort of model in simplest form with exponential dwell times has been used to explain experimental estimates of the discrete distribution of random mRNA copy number. In our generalization, the random dwell times in the inactive and active states, T_{0} and T_{1}, respectively, are independent random variables drawn from any specified distributions. Consequently, the probability per unit time of switching out of a state depends on the time since entering that state. Our method exploits a connection between the fully discrete random process and a related continuous process. We present numerical methods for computing steady-state mRNA distributions and an analytical derivation of the mRNA autocovariance function. We find that empirical estimates of the steady-state mRNA probability mass function from Monte Carlo simulations of laboratory data do not allow one to distinguish between underlying models with exponential and nonexponential dwell times in some relevant parameter regimes. However, in these parameter regimes and where the autocovariance function has negative lobes, the autocovariance function disambiguates the two types of models. Our results strongly suggest that temporal data beyond the autocovariance function is required in general to characterize gene switching.
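
    A brute-force Monte Carlo sketch of the generalized switching model: the gene alternates between inactive and active states with gamma-distributed (i.e., nonexponential) dwell times, transcribes only while active, and each mRNA molecule degrades independently; the empirical steady-state probability mass function is then tallied. Rates and dwell-time parameters are illustrative only, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_mrna_pmf(t_total=2e4, dt=0.02, k_tx=2.0, k_deg=0.1,
                      dwell_off=lambda: rng.gamma(2.0, 5.0),
                      dwell_on=lambda: rng.gamma(2.0, 2.0),
                      burn_in=1e3):
    """Monte Carlo estimate of the steady-state mRNA probability mass
    function for a two-state gene with nonexponential dwell times."""
    m, active = 0, False
    time_left = dwell_off()
    counts = {}
    t = 0.0
    while t < t_total:
        # switch state once the sampled dwell time in the current state expires
        time_left -= dt
        if time_left <= 0.0:
            active = not active
            time_left = dwell_on() if active else dwell_off()
        # discrete transcription events occur only while the gene is active
        if active and rng.random() < k_tx * dt:
            m += 1
        # each mRNA molecule degrades independently (first-order decay)
        if m > 0:
            m -= rng.binomial(m, k_deg * dt)
        if t > burn_in:
            counts[m] = counts.get(m, 0) + 1
        t += dt
    total = sum(counts.values())
    return {n: c / total for n, c in sorted(counts.items())}

pmf = simulate_mrna_pmf()
print({n: round(p, 4) for n, p in list(pmf.items())[:10]})
```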

  9. Pre-scission model predictions of fission fragment mass distributions for super-heavy elements

    NASA Astrophysics Data System (ADS)

    Carjan, N.; Ivanyuk, F. A.; Oganessian, Yu. Ts.

    2017-12-01

    The total deformation energy just before the moment of neck rupture is calculated for the heaviest nuclei for which spontaneous fission has been detected (279,281Ds, 281Rg and 282,284Cn). Strutinsky's prescription is used and nuclear shapes just before scission are described in terms of Cassinian ovals defined for the fixed value of the elongation parameter α = 0.98 and generalized by the inclusion of four additional shape parameters: α1, α3, α4, and α6. Supposing that the probability of each point in the deformation space is given by a Boltzmann factor, the distribution of the fission-fragment masses is estimated. The octupole deformation α3 at scission is found to play a decisive role in determining the main feature of the mass distribution: symmetric or asymmetric. Only the inclusion of α3 leads to an asymmetric division. Finally, the calculations are extended to an unexplored region of super-heavy nuclei: the even-even Fl (Z = 114), Lv (Z = 116), Og (Z = 118) and (Z = 126) isotopes. For these nuclei, the most probable mass of the light fragment has an almost constant value (≈136), as in the case of the most probable mass of the heavy fragment in the actinide region. It is the neutron shell at 82 that makes this light fragment so stable. Naturally, for very neutron-deficient isotopes, the mass division becomes symmetric when N = 2 × 82.
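
    The mass-yield estimate described above amounts to weighting each scission configuration by a Boltzmann factor in its deformation energy. A toy one-dimensional version, with a made-up energy curve standing in for the Strutinsky calculation:

```python
import numpy as np

# Illustrative pre-scission yield: weight each light-fragment mass by
# exp(-E_def(A)/T).  The deformation-energy curve is a made-up placeholder,
# not the Strutinsky calculation of the paper.
A_light = np.arange(100, 143)        # light-fragment mass numbers
T = 1.0                              # "temperature" parameter in MeV

# Hypothetical deformation energy (MeV) with a minimum near A_light ~ 136
E_def = 0.02 * (A_light - 136.0) ** 2

weights = np.exp(-E_def / T)
yields = weights / weights.sum()     # normalized mass yield (a pmf over A)

print("most probable light-fragment mass:", A_light[np.argmax(yields)])
```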

  10. Evidence of Self-Organized Criticality in Dry Sliding Friction

    NASA Technical Reports Server (NTRS)

    Zypman, Fredy R.; Ferrante, John; Jansen, Mark; Scanlon, Kathleen; Abel, Phillip

    2003-01-01

    This letter presents experimental results on unlubricated friction which suggest that stick-slip is described by self-organized criticality (SOC). The data, obtained with a pin-on-disc tribometer, track the variation of the friction force as a function of time (or sliding distance). This is the first time that standard tribological equipment has been used to examine the possibility of SOC. The materials were matching pins and discs of aluminium loaded with 250, 500 and 1000 g masses, and matching M50 steel couples loaded with a 1000 g mass. An analysis of the data shows that the probability distribution of slip sizes follows a power law. We perform a careful analysis of all the properties, beyond the two just mentioned, which are required to imply the presence of SOC. Our data strongly support the existence of SOC for stick-slip in dry sliding friction.
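
    A minimal sketch of the power-law check applied to slip sizes: synthetic events are drawn from a power law and the exponent is recovered with the standard maximum-likelihood (Hill) estimator. The data and the value of s_min are placeholders, not the tribometer measurements.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "slip size" events drawn from P(s) ~ s^(-alpha) for s >= s_min,
# standing in for the measured friction-force drops.
alpha_true, s_min = 2.0, 1.0
slips = s_min * (1 - rng.random(5000)) ** (-1.0 / (alpha_true - 1.0))

# Maximum-likelihood (Hill) estimate of the power-law exponent
alpha_hat = 1.0 + len(slips) / np.sum(np.log(slips / s_min))
print(f"estimated exponent: {alpha_hat:.2f} (true {alpha_true})")
```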

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petiteau, Antoine; Shang Yu; Babak, Stanislav

    Coalescing massive black hole binaries are the strongest and probably the most important gravitational wave sources in the LISA band. The spin and orbital precessions bring complexity to the waveform and make the likelihood surface richer in structure as compared to the nonspinning case. We introduce an extended multimodal genetic algorithm which utilizes the properties of the signal and the detector response function to analyze the data from the third round of the mock LISA data challenge (MLDC3.2). The performance of this method is comparable to, if not better than, existing algorithms. We have found all five sources present in MLDC3.2 and recovered the coalescence time, chirp mass, mass ratio, and sky location with reasonable accuracy. As for the orbital angular momentum and two spins of the black holes, we have found a large number of widely separated modes in the parameter space with similar maximum likelihood values.

  12. Functional ecology of saltglands in shorebirds: Flexible responses to variable environmental conditions

    USGS Publications Warehouse

    Gutierrez, J.S.; Dietz, M.W.; Masero, J.A.; Gill, Robert E.; Dekinga, Anne; Battley, Phil F.; Sanchez-Guzman, J. M.; Piersma, Theunis

    2012-01-01

    Birds of marine environments have specialized glands to excrete salt, the saltglands. Located on the skull between the eyes, the size of these organs is expected to reflect their demand, which will vary with water turnover rates as a function of environmental (heat load, salinity of prey and drinking water) and organismal (energy demand, physiological state) factors. On the basis of inter- and intraspecific comparisons of saltgland mass (m_sg) in 29 species of shorebird (suborder Charadrii) from saline, fresh and mixed water habitats, we assessed the relative roles of organism and environment in determining measured m_sg across species. The allometric exponent, scaling dry m_sg to shorebird total body mass (m_b), was significantly higher for coastal marine species (0.88, N=19) than for nonmarine species (0.43, N=14). Within the marine species, those ingesting bivalves intact had significantly higher m_sg than species eating soft-bodied invertebrates, indicating that seawater contained within the shells added to the salt load. In red knots (Calidris canutus), dry m_sg varied with monthly averaged ambient temperature in a U-shaped way, with the lowest mass at 12.5 °C. This probably reflects increased energy demand for thermoregulation at low temperatures and elevated respiratory water loss at high temperatures. In fuelling bar-tailed godwits (Limosa lapponica), dry m_sg was positively correlated with intestine mass, an indicator of relative food intake rates. These findings suggest once more that saltgland masses vary within species (and presumably individuals) in relation to salt load, which is a function of energy turnover (thermoregulation and fuelling) and evaporative water needs. Our results support the notion that m_sg is strongly influenced by habitat salinity, and also by factors influencing salt load and demand for osmotically free water including ambient temperature, prey type and energy intake rates. Saltglands are evidently highly flexible organs. The small size of saltglands when demands are low suggests that any time costs of adjustment are lower than the costs of maintaining a larger size in this small but essential piece of metabolic machinery. © 2011 The Authors. Functional Ecology © 2011 British Ecological Society.

  13. Uncertainty plus Prior Equals Rational Bias: An Intuitive Bayesian Probability Weighting Function

    ERIC Educational Resources Information Center

    Fennell, John; Baddeley, Roland

    2012-01-01

    Empirical research has shown that when making choices based on probabilistic options, people behave as if they overestimate small probabilities, underestimate large probabilities, and treat positive and negative outcomes differently. These distortions have been modeled using a nonlinear probability weighting function, which is found in several…

  14. Order statistics applied to the most massive and most distant galaxy clusters

    NASA Astrophysics Data System (ADS)

    Waizmann, J.-C.; Ettori, S.; Bartelmann, M.

    2013-06-01

    In this work, we present an analytic framework for calculating the individual and joint distributions of the nth most massive or nth highest redshift galaxy cluster for a given survey characteristic, allowing us to formulate Λ cold dark matter (ΛCDM) exclusion criteria. We show that the cumulative distribution functions steepen with increasing order, giving them a higher constraining power with respect to the extreme value statistics. Additionally, we find that the order statistics in mass (being dominated by clusters at lower redshifts) is sensitive to the matter density and the normalization of the matter fluctuations, whereas the order statistics in redshift is particularly sensitive to the geometric evolution of the Universe. For a fixed cosmology, both order statistics are efficient probes of the functional shape of the mass function at the high-mass end. To allow a quick assessment of both order statistics, we provide fits as a function of the survey area that allow percentile estimation with an accuracy better than 2 per cent. Furthermore, we discuss the joint distributions in the two-dimensional case and find that, for the combination of the largest and the second largest observation, they are most likely to be realized with similar values, with a broadly peaked distribution. When combining the largest observation with higher orders, it is more likely to find a larger gap between the observations, and when combining higher orders in general, the joint probability density function peaks more strongly. Having introduced the theory, we apply the order statistical analysis to the South Pole Telescope (SPT) massive cluster sample and the metacatalogue of X-ray detected clusters of galaxies, and find that the 10 most massive clusters in the sample are consistent with ΛCDM and the Tinker mass function. For the order statistics in redshift, we find a discrepancy between the data and the theoretical distributions, which could in principle indicate a deviation from the standard cosmology. However, we attribute this deviation to the uncertainty in the modelling of the SPT survey selection function. In turn, by assuming the ΛCDM reference cosmology, order statistics can also be utilized for consistency checks of the completeness of the observed sample and of the modelling of the survey selection function.
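
    For a single i.i.d. sample, the cumulative distribution of the nth largest observation follows from binomial counting: the kth largest is below x exactly when at most k-1 draws exceed x. A short sketch of that building block (the paper instead constructs the parent distribution from the halo mass function and the survey selection function):

```python
from math import comb

def cdf_kth_largest(F_x, N, k):
    """CDF of the k-th largest of N i.i.d. draws, given the parent CDF
    value F_x = F(x).  k = 1 is the maximum.  Illustrative only."""
    return sum(comb(N, j) * (1 - F_x) ** j * F_x ** (N - j) for j in range(k))

# Example: parent CDF value F(x) = 0.999 evaluated for N = 1000 clusters
for k in (1, 2, 5, 10):
    print(k, round(cdf_kth_largest(0.999, 1000, k), 4))
```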

  15. A NEW METHOD FOR DERIVING THE STELLAR BIRTH FUNCTION OF RESOLVED STELLAR POPULATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gennaro, M.; Brown, T. M.; Gordon, K. D.

    We present a new method for deriving the stellar birth function (SBF) of resolved stellar populations. The SBF (stars born per unit mass, time, and metallicity) is the combination of the initial mass function (IMF), the star formation history (SFH), and the metallicity distribution function (MDF). The framework of our analysis is that of Poisson Point Processes (PPPs), a class of statistical models suitable when dealing with points (stars) in a multidimensional space (the measurement space of multiple photometric bands). The theory of PPPs easily accommodates the modeling of measurement errors as well as that of incompleteness. Our method avoids binning stars in the color–magnitude diagram and uses the whole likelihood function for each data point; combining the individual likelihoods allows the computation of the posterior probability for the population's SBF. Within the proposed framework it is possible to include nuisance parameters, such as distance and extinction, by specifying their prior distributions and marginalizing over them. The aim of this paper is to assess the validity of this new approach under a range of assumptions, using only simulated data. Forthcoming work will show applications to real data. Although it has a broad scope of possible applications, we have developed this method to study multi-band Hubble Space Telescope observations of the Milky Way Bulge. Therefore we will focus on simulations with characteristics similar to those of the Galactic Bulge.

  16. Ariadne: a database search engine for identification and chemical analysis of RNA using tandem mass spectrometry data.

    PubMed

    Nakayama, Hiroshi; Akiyama, Misaki; Taoka, Masato; Yamauchi, Yoshio; Nobe, Yuko; Ishikawa, Hideaki; Takahashi, Nobuhiro; Isobe, Toshiaki

    2009-04-01

    We present here a method to correlate tandem mass spectra of sample RNA nucleolytic fragments with an RNA nucleotide sequence in a DNA/RNA sequence database, thereby allowing tandem mass spectrometry (MS/MS)-based identification of RNA in biological samples. Ariadne, a unique web-based database search engine, identifies RNA by two probability-based evaluation steps of MS/MS data. In the first step, the software evaluates the matches between the masses of product ions generated by MS/MS of an RNase digest of sample RNA and those calculated from a candidate nucleotide sequence in a DNA/RNA sequence database, which then predicts the nucleotide sequences of these RNase fragments. In the second step, the candidate sequences are mapped for all RNA entries in the database, and each entry is scored for a function of occurrences of the candidate sequences to identify a particular RNA. Ariadne can also predict post-transcriptional modifications of RNA, such as methylation of nucleotide bases and/or ribose, by estimating mass shifts from the theoretical mass values. The method was validated with MS/MS data of RNase T1 digests of in vitro transcripts. It was applied successfully to identify an unknown RNA component in a tRNA mixture and to analyze post-transcriptional modification in yeast tRNA(Phe-1).

  17. A Statistical Study of the Mass Distribution of Neutron Stars

    NASA Astrophysics Data System (ADS)

    Cheng, Zheng; Zhang, Cheng-Min; Zhao, Yong-Heng; Wang, De-Hua; Pan, Yuan-Yue; Lei, Ya-Juan

    2014-07-01

    By reviewing the methods of mass measurement for neutron stars in four different kinds of systems, i.e., high-mass X-ray binaries (HMXBs), low-mass X-ray binaries (LMXBs), double neutron star systems (DNSs) and neutron star-white dwarf (NS-WD) binary systems, we have collected the orbital parameters of 40 systems. Using the bootstrap method and the Monte Carlo method, we have rebuilt the likelihood curves of the measured masses of 46 neutron stars. The statistical analysis of the simulation results shows that the masses of neutron stars in the X-ray binary systems and those in the radio pulsar systems exhibit different distributions. In addition, Bayesian statistics for these four kinds of systems yields most-probable mass distributions of (1.340 ± 0.230) M⊙, (1.505 ± 0.125) M⊙, (1.335 ± 0.055) M⊙ and (1.495 ± 0.225) M⊙, respectively. It is noteworthy that the masses of neutron stars in the HMXB and DNS systems are smaller than those in the other two kinds of systems by approximately 0.16 M⊙. This result is consistent with the theoretical picture in which a pulsar is spun up to millisecond periods by accreting approximately 0.2 M⊙. If the HMXBs and LMXBs are taken to be the precursors of the DNS and NS-WD systems, respectively, then the influence of accretion on the masses of neutron stars in the HMXB systems should be exceedingly small, and their mass distribution should be very close to the initial distribution at the formation of neutron stars. The LMXB and NS-WD systems, by contrast, should have already undergone sufficient accretion and hence deviate considerably from the initial mass distribution.

  18. Dense Cores in Galaxies Out to z = 2.5 in SDSS, UltraVISTA, and the Five 3D-HST/CANDELS Fields

    NASA Astrophysics Data System (ADS)

    van Dokkum, Pieter G.; Bezanson, Rachel; van der Wel, Arjen; Nelson, Erica June; Momcheva, Ivelina; Skelton, Rosalind E.; Whitaker, Katherine E.; Brammer, Gabriel; Conroy, Charlie; Förster Schreiber, Natascha M.; Fumagalli, Mattia; Kriek, Mariska; Labbé, Ivo; Leja, Joel; Marchesini, Danilo; Muzzin, Adam; Oesch, Pascal; Wuyts, Stijn

    2014-08-01

    The dense interiors of massive galaxies are among the most intriguing environments in the universe. In this paper, we ask when these dense cores were formed and determine how galaxies gradually assembled around them. We select galaxies that have a stellar mass >3 × 10^10 M⊙ inside r = 1 kpc out to z = 2.5, using the 3D-HST survey and data at low redshift. Remarkably, the number density of galaxies with dense cores appears to have decreased from z = 2.5 to the present. This decrease is probably mostly due to stellar mass loss and the resulting adiabatic expansion, with some contribution from merging. We infer that dense cores were mostly formed at z > 2.5, consistent with their largely quiescent stellar populations. While the cores appear to form early, the galaxies in which they reside show strong evolution: their total masses increase by a factor of 2-3 from z = 2.5 to z = 0 and their effective radii increase by a factor of 5-6. As a result, the contribution of dense cores to the total mass of the galaxies in which they reside decreases from ~50% at z = 2.5 to ~15% at z = 0. Because of their early formation, the contribution of dense cores to the total stellar mass budget of the universe is a strong function of redshift. The stars in cores with M_1kpc > 3 × 10^10 M⊙ make up ~0.1% of the stellar mass density of the universe today but 10%-20% at z ~ 2, depending on their initial mass function. The formation of these cores required the conversion of ~10^11 M⊙ of gas into stars within ~1 kpc, while preventing significant star formation at larger radii.

  19. The Cuban scorpion Rhopalurus junceus (Scorpiones, Buthidae): component variations in venom samples collected in different geographical areas

    PubMed Central

    2013-01-01

    Background The venom of the Cuban scorpion Rhopalurus junceus is poorly studied in terms of its components at the molecular level and their associated functions. The purpose of this article was to conduct a proteomic analysis of venom components from scorpions collected in different geographical areas of the country. Results Venom from the blue scorpion, as it is called, was collected separately from specimens of five distinct Cuban towns (Moa, La Poa, Limonar, El Chote and Farallones) of the Nipe-Sagua-Baracoa mountain massif and fractionated by high performance liquid chromatography (HPLC); the molecular masses of each fraction were ascertained by mass spectrometry analysis. At least 153 different molecular mass components were identified among the five samples analyzed. Molecular masses varied from 466 to 19755 Da. The HPLC profiles of the venoms, and the predominant molecular masses of their components, differed among these geographical locations. The most evident differences are in the relative concentration of the venom components. The most abundant components presented molecular weights around 4 kDa, known to be K+-channel specific peptides, and 7 kDa, known to be Na+-channel specific peptides, but with small molecular weight differences. Approximately 30 peptides found in venom samples from the different geographical areas are identical, supporting the idea that they all probably belong to the same species, with some interpopulational variations. Differences were also found in the presence of phospholipase, found in venoms from the Poa area (molecular weights on the order of 14 to 19 kDa). The only ubiquitous enzyme identified in the venoms from all five localities studied (hyaluronidase) presented the same 45 kDa molecular mass, identified by gel electrophoresis analysis. Conclusions The venoms of these scorpions from different geographical areas seem to be similar and are rich in peptides with the same molecular masses as peptides purified from other scorpions that affect ion-channel functions. PMID:23849540

  20. Mind Your Ps and Qs: The Interrelation between Period (P) and Mass-ratio (Q) Distributions of Binary Stars

    NASA Astrophysics Data System (ADS)

    Moe, Maxwell; Di Stefano, Rosanne

    2017-06-01

    We compile observations of early-type binaries identified via spectroscopy, eclipses, long-baseline interferometry, adaptive optics, common proper motion, etc. Each observational technique is sensitive to companions across a narrow parameter space of orbital periods P and mass ratios q = M_comp/M_1. After combining the samples from the various surveys and correcting for their respective selection effects, we find that the properties of companions to O-type and B-type main-sequence (MS) stars differ among three regimes. First, at short orbital periods P ≲ 20 days (separations a ≲ 0.4 au), the binaries have small eccentricities e ≲ 0.4, favor modest mass ratios ⟨q⟩ ≈ 0.5, and exhibit a small excess of twins q > 0.95. Second, the companion frequency peaks at intermediate periods log P (days) ≈ 3.5 (a ≈ 10 au), where the binaries have mass ratios weighted toward small values q ≈ 0.2-0.3 and follow a Maxwellian “thermal” eccentricity distribution. Finally, companions with long orbital periods log P (days) ≈ 5.5-7.5 (a ≈ 200-5000 au) are outer tertiary components in hierarchical triples and have a mass ratio distribution across q ≈ 0.1-1.0 that is nearly consistent with random pairings drawn from the initial mass function. We discuss these companion distributions and properties in the context of binary-star formation and evolution. We also reanalyze the binary statistics of solar-type MS primaries, taking into account that 30% ± 10% of single-lined spectroscopic binaries likely contain white dwarf companions instead of low-mass stellar secondaries. The mean frequency of stellar companions with q > 0.1 and log P (days) < 8.0 per primary increases from 0.50 ± 0.04 for solar-type MS primaries to 2.1 ± 0.3 for O-type MS primaries. We fit joint probability density functions f(M_1, q, P, e) …

  1. Utility of the RENAL index -Radius; Exophytic/endophytic; Nearness to sinus; Anterior/posterior; Location relative to polar lines- in the management of renal masses.

    PubMed

    Konstantinidis, C; Trilla, E; Lorente, D; Morote, J

    2016-12-01

    The growing incidence of renal masses and the wide range of available treatments require predictive tools that support the decision making process. The RENAL index -Radius; Exophytic/endophytic; Nearness to sinus; Anterior/posterior; Location relative to polar lines- helps standardise the anatomy of a renal mass by differentiating 3 groups of complexity. Since the introduction of the index, there have been a growing number of studies, some of which have been conflicting, that have evaluated the clinical utility of its implementation. To analyse the scientific evidence on the relationship between the RENAL index and the main strategies for managing renal masses. A search was conducted in the Medline database, which found 576 references on the RENAL index. In keeping with the PRISMA Declaration, we selected 100 abstracts and ultimately reviewed 96 articles. The RENAL index has a high degree of interobserver correlation and has been validated as a predictive nomogram of histological results. In active surveillance, the index has been related to the tumour growth rate and probability of nephrectomy. In ablative therapy, the index has been associated with therapeutic efficacy, complications and tumour recurrence. In partial nephrectomy, the index has been related to the rate of complications, conversion to radical surgery, ischaemia time, function preservation and tumour recurrence, a finding also observed in radical nephrectomy. The RENAL index is an objective, reproducible and useful system as a predictive tool of highly relevant clinical parameters such as the rate of complications, ischaemia time, renal function and oncological results in the various currently accepted treatments for the management of renal masses. Copyright © 2016 AEU. Published by Elsevier España, S.L.U. All rights reserved.

  2. The X-Ray Luminosity-Mass Relation for Local Clusters of Galaxies

    NASA Astrophysics Data System (ADS)

    Stanek, Rebecca; Evrard, A.; Boehringer, H.; Schuecker, P.; Nord, B.

    2006-12-01

    My thesis is centered on investigating scaling relations of galaxy clusters. Focusing on the relationship between soft X-ray luminosity and mass (L-M) for low-redshift clusters of galaxies, I have determined the mean parameters to 5%, and calculated a formal measure of the scatter in the L-M relation. I model the L-M relation with a conditional probability function including a mean power-law scaling relation, L ∝ M^p ρ_c^s(z), and log-normal scatter in mass at fixed luminosity, σ_lnM. Convolving with the halo mass function, I compute expected counts in redshift and flux that, after appropriate survey effects are included, are compared to REFLEX survey data. Combining the likelihood analysis with the measured variance in the L-T relation from HIFLUGCS, I obtain fit parameters p = 1.59 ± 0.05, ln L_15,0 = 1.34 ± 0.09, and σ_lnM = 0.37 ± 0.05 for self-similar redshift evolution (s = 7/6) in a concordance (Ω_m = 0.3, Ω_Λ = 0.7, σ_8 = 0.9) universe. I find a substantially (factor 2) dimmer intercept and slightly steeper slope than the values published using hydrostatic mass estimates of the HIFLUGCS sample and show that a Malmquist bias of the X-ray flux-limited sample accounts for this effect. I accommodate the new WMAP constraints with a compromise model with Ω_m = 0.24, σ_8 = 0.85, and somewhat lower scatter σ_lnM = 0.25. I will also present work in progress from galaxy cluster population statistics in the Millennium Simulation with Gas (MSG), specifically focusing on the scatter and covariance between cluster properties at a fixed epoch.

  3. Breast mass detection in mammography and tomosynthesis via fully convolutional network-based heatmap regression

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; Cain, Elizabeth Hope; Saha, Ashirbani; Zhu, Zhe; Mazurowski, Maciej A.

    2018-02-01

    Breast mass detection in mammography and digital breast tomosynthesis (DBT) is an essential step in computerized breast cancer analysis. Deep learning-based methods incorporate feature extraction and model learning into a unified framework and have achieved impressive performance in various medical applications (e.g., disease diagnosis, tumor detection, and landmark detection). However, these methods require large-scale accurately annotated data. Unfortunately, it is challenging to get precise annotations of breast masses. To address this issue, we propose a fully convolutional network (FCN) based heatmap regression method for breast mass detection, using only weakly annotated mass regions in mammography images. Specifically, we first generate heat maps of masses based on human-annotated rough regions for breast masses. We then develop an FCN model for end-to-end heatmap regression with an F-score loss function, where the mammography images are regarded as the input and heatmaps for breast masses are used as the output. Finally, the probability map of mass locations can be estimated with the trained model. Experimental results on a mammography dataset with 439 subjects demonstrate the effectiveness of our method. Furthermore, we evaluate whether we can use mammography data to improve detection models for DBT, since mammography shares similar structure with tomosynthesis. We propose a transfer learning strategy by fine-tuning the learned FCN model from mammography images. We test this approach on a small tomosynthesis dataset with only 40 subjects, and we show an improvement in the detection performance as compared to training the model from scratch.
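
    The abstract does not spell out the loss in detail; a common differentiable choice for heatmap regression is a soft F-score computed directly on the predicted and target maps. A numpy sketch of that ingredient (in practice the prediction would be the FCN output rather than a synthetic array):

```python
import numpy as np

def gaussian_heatmap(shape, center, sigma=10.0):
    """Target heatmap: a Gaussian blob over the weakly annotated mass region."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    return np.exp(-((yy - center[0]) ** 2 + (xx - center[1]) ** 2) / (2 * sigma ** 2))

def soft_fscore_loss(pred, target, beta=1.0, eps=1e-7):
    """1 - soft F_beta between heatmaps; minimizing it balances precision
    and recall of the predicted mass probability map."""
    tp = np.sum(pred * target)
    precision = tp / (np.sum(pred) + eps)
    recall = tp / (np.sum(target) + eps)
    fbeta = (1 + beta ** 2) * precision * recall / (beta ** 2 * precision + recall + eps)
    return 1.0 - fbeta

target = gaussian_heatmap((256, 256), center=(120, 140))
pred_good = np.clip(target + 0.05 * np.random.rand(256, 256), 0, 1)
pred_bad = np.random.rand(256, 256)
print(soft_fscore_loss(pred_good, target), soft_fscore_loss(pred_bad, target))
```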

  4. Path probability of stochastic motion: A functional approach

    NASA Astrophysics Data System (ADS)

    Hattori, Masayuki; Abe, Sumiyoshi

    2016-06-01

    The path probability of a particle undergoing stochastic motion is studied by the use of functional technique, and the general formula is derived for the path probability distribution functional. The probability of finding paths inside a tube/band, the center of which is stipulated by a given path, is analytically evaluated in a way analogous to continuous measurements in quantum mechanics. Then, the formalism developed here is applied to the stochastic dynamics of stock price in finance.

  5. Bioenergetic and pharmacokinetic model for exposure of common loon (Gavia immer) chicks to methylmercury

    USGS Publications Warehouse

    Karasov, W.H.; Kenow, K.P.; Meyer, M.W.; Fournier, F.

    2007-01-01

    A bioenergetics model was used to predict food intake of common loon (Gavia immer) chicks as a function of body mass during development, and a pharmacokinetics model, based on first-order kinetics in a single compartment, was used to predict blood Hg level as a function of food intake rate, food Hg content, body mass, and Hg absorption and elimination. Predictions were tested in captive growing chicks fed trout (Salmo gairdneri) with average MeHg concentrations of 0.02 (control), 0.4, and 1.2 µg/g wet mass (delivered as CH3HgCl). Predicted food intake matched observed intake through 50 d of age but then exceeded observed intake by an amount that grew progressively larger with age, reaching a significant overestimate of 28% by the end of the trial. Respiration in older, nongrowing birds probably was overestimated by using rates measured in younger, growing birds. Close agreement was found between simulations and measured blood Hg, which varied significantly with dietary Hg and age. Although chicks may hatch with different blood Hg levels, their blood level is determined mainly by dietary Hg level beyond approximately two weeks of age. The model also may be useful for predicting Hg levels in adults and in the eggs that they lay, but its accuracy in both chicks and adults needs to be tested in free-living birds. © 2007 SETAC.
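
    A minimal sketch of the coupled bioenergetic/first-order single-compartment idea: ingested Hg is absorbed into a blood pool that grows with the chick and is eliminated with first-order kinetics. The growth curve, intake allometry, and all rate constants below are illustrative placeholders, not the model's fitted values.

```python
import numpy as np

def blood_hg_timecourse(days=100, dt=1.0,
                        diet_hg=0.4,           # ug Hg per g food (wet mass)
                        absorption=0.8,        # fraction of ingested Hg absorbed
                        k_elim=0.03,           # first-order elimination rate (1/day)
                        blood_fraction=0.08):  # blood mass as a fraction of body mass
    """Single-compartment, first-order sketch of chick blood Hg (ug/g).
    All parameters are hypothetical placeholders."""
    burden = 0.0                                   # total Hg in the blood pool (ug)
    conc = []
    for step in range(int(days / dt)):
        t = step * dt
        body_mass = 100.0 + 40.0 * t               # g, toy linear growth curve
        intake = 0.2 * body_mass ** 0.9            # g food per day, toy allometry
        blood_mass = blood_fraction * body_mass    # g
        burden += dt * (absorption * diet_hg * intake - k_elim * burden)
        conc.append(burden / blood_mass)           # blood Hg concentration (ug/g)
    return np.array(conc)

print(f"predicted blood Hg at end of trial: {blood_hg_timecourse()[-1]:.2f} ug/g")
```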

  6. Generalizing the ADM computation to quantum field theory

    NASA Astrophysics Data System (ADS)

    Mora, P. J.; Tsamis, N. C.; Woodard, R. P.

    2012-01-01

    The absence of recognizable, low energy quantum gravitational effects requires that some asymptotic series expansion be wonderfully accurate, but the correct expansion might involve logarithms or fractional powers of Newton’s constant. That would explain why conventional perturbation theory shows uncontrollable ultraviolet divergences. We explore this possibility in the context of the mass of a charged, gravitating scalar. The classical limit of this system was solved exactly in 1960 by Arnowitt, Deser and Misner, and their solution does exhibit nonanalytic dependence on Newton’s constant. We derive an exact functional integral representation for the mass of the quantum field theoretic system, and then develop an alternate expansion for it based on a correct implementation of the method of stationary phase. The new expansion entails adding an infinite class of new diagrams to each order and subtracting them from higher orders. The zeroth-order term of the new expansion has the physical interpretation of a first quantized Klein-Gordon scalar which forms a bound state in the gravitational and electromagnetic potentials sourced by its own probability current. We show that such bound states exist and we obtain numerical results for their masses.

  7. Characteristic Structure of Star-forming Clouds

    NASA Astrophysics Data System (ADS)

    Myers, Philip C.

    2015-06-01

    This paper presents a new method to diagnose the star-forming potential of a molecular cloud region from the probability density function of its column density (N-pdf). This method provides expressions for the column density and mass profiles of a symmetric filament having the same N-pdf as a filamentary region. The central concentration of this characteristic filament can distinguish regions and can quantify their fertility for star formation. Profiles are calculated for N-pdfs which are pure lognormal, pure power law, or a combination. In relation to models of singular polytropic cylinders, characteristic filaments can be unbound, bound, or collapsing depending on their central concentration. Such filamentary models of the dynamical state of N-pdf gas are more relevant to star-forming regions than are spherical collapse models. The star formation fertility of a bound or collapsing filament is quantified by its mean mass accretion rate when in radial free fall. For a given mass per length, the fertility increases with the filament mean column density and with its initial concentration. In selected regions the fertility of their characteristic filaments increases with the level of star formation.

  8. A performance-based approach to landslide risk analysis

    NASA Astrophysics Data System (ADS)

    Romeo, R. W.

    2009-04-01

    An approach for the risk assessment based on a probabilistic analysis of the performance of structures threatened by landslides is shown and discussed. The risk is a possible loss due to the occurrence of a potentially damaging event. Analytically the risk is the probability convolution of hazard, which defines the frequency of occurrence of the event (i.e., the demand), and fragility that defines the capacity of the system to withstand the event given its characteristics (i.e., severity) and those of the exposed goods (vulnerability), that is: Risk=p(D>=d|S,V) The inequality sets a damage (or loss) threshold beyond which the system's performance is no longer met. Therefore a consistent approach to risk assessment should: 1) adopt a probabilistic model which takes into account all the uncertainties of the involved variables (capacity and demand), 2) follow a performance approach based on given loss or damage thresholds. The proposed method belongs to the category of the semi-empirical ones: the theoretical component is given by the probabilistic capacity-demand model; the empirical component is given by the observed statistical behaviour of structures damaged by landslides. Two landslide properties alone are required: the area-extent and the type (or kinematism). All other properties required to determine the severity of landslides (such as depth, speed and frequency) are derived via probabilistic methods. The severity (or intensity) of landslides, in terms of kinetic energy, is the demand of resistance; the resistance capacity is given by the cumulative distribution functions of the limit state performance (fragility functions) assessed via damage surveys and cards compilation. The investigated limit states are aesthetic (of nominal concern alone), functional (interruption of service) and structural (economic and social losses). The damage probability is the probabilistic convolution of hazard (the probability mass function of the frequency of occurrence of given severities) and vulnerability (the probability of a limit state performance be reached, given a certain severity). Then, for each landslide all the exposed goods (structures and infrastructures) within the landslide area and within a buffer (representative of the maximum extension of a landslide given a reactivation), are counted. The risk is the product of the damage probability and the ratio of the exposed goods of each landslide to the whole assets exposed to the same type of landslides. Since the risk is computed numerically and by the same procedure applied to all landslides, it is free from any subjective assessment such as those implied in the qualitative methods.
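
    The damage probability described above is a discrete convolution of the hazard probability mass function over severity with the fragility (the probability of reaching a limit state at each severity). A toy numerical sketch with made-up severity classes and values:

```python
# Hazard: probability mass function over landslide severity classes
# (kinetic-energy bins).  Fragility: probability that a given limit state
# (e.g. "functional") is reached at each severity.  Values are illustrative.
hazard_pmf = {"low": 0.6, "moderate": 0.3, "high": 0.1}
fragility_functional = {"low": 0.05, "moderate": 0.35, "high": 0.85}

# Damage probability = probabilistic convolution of hazard and fragility
p_damage = sum(hazard_pmf[s] * fragility_functional[s] for s in hazard_pmf)

# Risk for one landslide = damage probability times the share of exposed
# goods within that landslide (plus buffer) relative to all exposed assets
exposed_in_landslide, exposed_total = 12, 400
risk = p_damage * exposed_in_landslide / exposed_total
print(p_damage, risk)
```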

  9. Effects of chronic acceleration on body composition

    NASA Technical Reports Server (NTRS)

    Pitts, G. C.

    1982-01-01

    Studies of the centrifugation of adult rats showed an unexpected decrease in the mass of fat-free muscle and bone, in spite of the added load induced by centrifugation. It is suggested that the lower but constant fat-free body mass was probably regulated during centrifugation. Rats placed in weightless conditions for 18.5 days gave indirect but strong evidence that the muscle had increased in mass. Other changes in the rats placed in weightless conditions included a smaller fraction of skeletal mineral, a smaller fraction of water in the total fat-free body, and a net shift of fluid from skin to viscera. Adult rats centrifuged throughout the post-weaning growth period exhibited smaller masses of bone and central nervous system (probably attributable to slower growth of the total body), and a larger mass of skin than controls at 1 G. Efforts at simulating the effects of weightlessness or centrifugation on the body composition of rats by regimens at terrestrial gravity were inconclusive.

  10. Speech processing using conditional observable maximum likelihood continuity mapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogden, John; Nix, David

    A computer implemented method enables the recognition of speech and speech characteristics. Parameters are initialized of first probability density functions that map between the symbols in the vocabulary of one or more sequences of speech codes that represent speech sounds and a continuity map. Parameters are also initialized of second probability density functions that map between the elements in the vocabulary of one or more desired sequences of speech transcription symbols and the continuity map. The parameters of the probability density functions are then trained to maximize the probabilities of the desired sequences of speech-transcription symbols. A new sequence of speech codes is then input to the continuity map having the trained first and second probability function parameters. A smooth path is identified on the continuity map that has the maximum probability for the new sequence of speech codes. The probability of each speech transcription symbol for each input speech code can then be output.

  11. Economic Choices Reveal Probability Distortion in Macaque Monkeys

    PubMed Central

    Lak, Armin; Bossaerts, Peter; Schultz, Wolfram

    2015-01-01

    Economic choices are largely determined by two principal elements, reward value (utility) and probability. Although nonlinear utility functions have been acknowledged for centuries, nonlinear probability weighting (probability distortion) was only recently recognized as a ubiquitous aspect of real-world choice behavior. Even when outcome probabilities are known and acknowledged, human decision makers often overweight low probability outcomes and underweight high probability outcomes. Whereas recent studies measured utility functions and their corresponding neural correlates in monkeys, it is not known whether monkeys distort probability in a manner similar to humans. Therefore, we investigated economic choices in macaque monkeys for evidence of probability distortion. We trained two monkeys to predict reward from probabilistic gambles with constant outcome values (0.5 ml or nothing). The probability of winning was conveyed using explicit visual cues (sector stimuli). Choices between the gambles revealed that the monkeys used the explicit probability information to make meaningful decisions. Using these cues, we measured probability distortion from choices between the gambles and safe rewards. Parametric modeling of the choices revealed classic probability weighting functions with inverted-S shape. Therefore, the animals overweighted low probability rewards and underweighted high probability rewards. Empirical investigation of the behavior verified that the choices were best explained by a combination of nonlinear value and nonlinear probability distortion. Together, these results suggest that probability distortion may reflect evolutionarily preserved neuronal processing. PMID:25698750
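
    The abstract does not state which parametric weighting function was fitted; a commonly used one-parameter form that produces the inverted-S shape (overweighting small and underweighting large probabilities) is the Tversky-Kahneman function, sketched below for illustration.

```python
def weight_tk(p, gamma=0.6):
    """One-parameter Tversky-Kahneman probability weighting function.
    gamma < 1 gives the inverted-S shape: small probabilities are
    overweighted and large probabilities underweighted.  The value of
    gamma here is illustrative, not a fitted result."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1.0 / gamma)

for p in (0.05, 0.25, 0.5, 0.75, 0.95):
    print(p, round(weight_tk(p), 3))
```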

  12. Economic choices reveal probability distortion in macaque monkeys.

    PubMed

    Stauffer, William R; Lak, Armin; Bossaerts, Peter; Schultz, Wolfram

    2015-02-18

    Economic choices are largely determined by two principal elements, reward value (utility) and probability. Although nonlinear utility functions have been acknowledged for centuries, nonlinear probability weighting (probability distortion) was only recently recognized as a ubiquitous aspect of real-world choice behavior. Even when outcome probabilities are known and acknowledged, human decision makers often overweight low probability outcomes and underweight high probability outcomes. Whereas recent studies measured utility functions and their corresponding neural correlates in monkeys, it is not known whether monkeys distort probability in a manner similar to humans. Therefore, we investigated economic choices in macaque monkeys for evidence of probability distortion. We trained two monkeys to predict reward from probabilistic gambles with constant outcome values (0.5 ml or nothing). The probability of winning was conveyed using explicit visual cues (sector stimuli). Choices between the gambles revealed that the monkeys used the explicit probability information to make meaningful decisions. Using these cues, we measured probability distortion from choices between the gambles and safe rewards. Parametric modeling of the choices revealed classic probability weighting functions with inverted-S shape. Therefore, the animals overweighted low probability rewards and underweighted high probability rewards. Empirical investigation of the behavior verified that the choices were best explained by a combination of nonlinear value and nonlinear probability distortion. Together, these results suggest that probability distortion may reflect evolutionarily preserved neuronal processing. Copyright © 2015 Stauffer et al.

  13. A deep view on the Virgo cluster core

    NASA Astrophysics Data System (ADS)

    Lieder, S.; Lisker, T.; Hilker, M.; Misgeld, I.; Durrell, P.

    2012-02-01

    Studies of dwarf spheroidal (dSph) galaxies with statistically significant sample sizes are still rare beyond the Local Group, since these low surface brightness objects can only be identified with deep imaging data. In galaxy clusters, where they constitute the dominant population in terms of number, they represent the faint end slope of the galaxy luminosity function and provide important insight on the interplay between galaxy mass and environment. In this study we investigate the optical photometric properties of early-type galaxies (dwarf ellipticals (dEs) and dSphs) in the Virgo cluster core region, by analysing their location on the colour magnitude relation (CMR) and the structural scaling relations down to faint magnitudes, and by constructing the luminosity function to compare it with theoretical expectations. Our work is based on deep CFHT V- and I-band data covering several square degrees of the Virgo cluster core that were obtained in 1999 using the CFH12K instrument. We visually select potential cluster members based on morphology and angular size, excluding spiral galaxies. A photometric analysis has been carried out for 295 galaxies, using surface brightness profile shape and colour as further criteria to identify probable background contaminants. 216 galaxies are considered to be certain or probable Virgo cluster members. Our study reveals 77 galaxies not catalogued in the VCC (with 13 of them already found in previous studies) that are very likely Virgo cluster members because they follow the Virgo CMR and exhibit low Sérsic indices. Those galaxies reach MV = -8.7 mag. The CMR shows a clear change in slope from dEs to dSphs, while the scatter of the CMR in the dSph regime does not increase significantly. Our sample might, however, be somewhat biased towards redder colours. The scaling relations given by the dEs appear to be continued by the dSphs indicating a similar origin. The observed change in the CMR slope may mark the point at which gas loss prevented significant metal enrichment. The almost constant scatter around the CMR possibly indicates a short formation period, resulting in similar stellar populations. The luminosity function shows a Schechter function's faint end slope of α = -1.50 ± 0.17, implying a lack of galaxies related to the expected number of low-mass dark matter haloes from theoretical models. Our findings could be explained by suppressed star formation in low-mass dark matter halos or by tidal disruption of dwarfs in the dense core region of the cluster. Tables 3 and 4 are available in electronic form at http://www.aanda.org

  14. Addressing the too big to fail problem with baryon physics and sterile neutrino dark matter

    NASA Astrophysics Data System (ADS)

    Lovell, Mark R.; Gonzalez-Perez, Violeta; Bose, Sownak; Boyarsky, Alexey; Cole, Shaun; Frenk, Carlos S.; Ruchayskiy, Oleg

    2017-07-01

    N-body dark matter simulations of structure formation in the Λ cold dark matter (ΛCDM) model predict a population of subhaloes within Galactic haloes that have higher central densities than inferred for the Milky Way satellites, a tension known as the 'too big to fail' problem. Proposed solutions include baryonic effects, a smaller mass for the Milky Way halo and warm dark matter (WDM). We test these possibilities using a semi-analytic model of galaxy formation to generate luminosity functions for Milky Way halo-analogue satellite populations, the results of which are then coupled to the Jiang & van den Bosch model of subhalo stripping to predict the subhalo Vmax functions for the 10 brightest satellites. We find that selecting the brightest satellites (as opposed to the most massive) and modelling the expulsion of gas by supernovae at early times increases the likelihood of generating the observed Milky Way satellite Vmax function. The preferred halo mass is 6 × 1011 M⊙, which has a 14 per cent probability to host a Vmax function like that of the Milky Way satellites. We conclude that the Milky Way satellite Vmax function is compatible with a CDM cosmology, as previously found by Sawala et al. using hydrodynamic simulations. Sterile neutrino-WDM models achieve a higher degree of agreement with the observations, with a maximum 50 per cent chance of generating the observed Milky Way satellite Vmax function. However, more work is required to check that the semi-analytic stripping model is calibrated correctly for each sterile neutrino cosmology.

  15. Measurement of polycyclic aromatic hydrocarbon (PAHs) in interplanetary dust particles

    NASA Technical Reports Server (NTRS)

    Clemett, S. J.; Maechling, C. R.; Zare, R. N.; Swan, P. D.; Walker, R. M.

    1993-01-01

    We report here the first definitive measurements of specific organic molecules (polycyclic aromatic hydrocarbons (PAH's)) in interplanetary dust particles (IDP's). An improved version of the microbeam-two-step laser mass spectrometer was used for the analysis. Two IDP's gave similar mass spectra showing an abundance of PAH's. Control samples, including particles of probable terrestrial origin from the same stratospheric collector, gave either null results or quite different spectra. We conclude that the PAH's are probably indigenous to the IDP's and are not terrestrial contaminants. The instrument used to study the particles is a two-step laser mass spectrometer. Constituent neutral molecules of the sample are first desorbed with a pulsed infrared laser beam focussed to 40 micrometers. In the second step, PAH's in the desorbed plume are preferentially ionized by a pulsed UV laser beam. Resulting ions produced by resonant absorption are extracted into a reflectron time-of-flight mass spectrometer. This instrument has high spatial resolution, high ion transmission, unlimited mass range, and multichannel detection of all ion masses from a single laser shot.

  16. Activated recombinative desorption: A potential component in mechanisms of spacecraft glow

    NASA Technical Reports Server (NTRS)

    Cross, J. B.

    1985-01-01

    The concept of activated recombination of atomic species on surfaces can explain the production of vibrationally and translationally excited desorbed molecular species. Equilibrium statistical mechanics predicts that the molecular quantum state distributions of desorbing molecules are a function of surface temperature only when the adsorption probability is unity and independent of initial collision conditions. In most cases, the adsorption probability is dependent upon initial conditions such as collision energy or the internal quantum state distribution of impinging molecules. From detailed balance, such dynamical behavior is reflected in the internal quantum state distribution of the desorbing molecule. This concept, activated recombinative desorption, may offer a common thread in proposed mechanisms of spacecraft glow. Using molecular beam techniques and equipment available at Los Alamos, which includes a high translational energy O-atom beam source, mass spectrometric detection of desorbed species, chemiluminescence/laser induced fluorescence detection of electronic and vibrationally excited reaction products, and Auger detection of surface adsorbed reaction products, a fundamental study of the gas-surface chemistry underlying the glow process is proposed.

  17. Factors Influencing the Incidence of Obesity in Australia: A Generalized Ordered Probit Model.

    PubMed

    Avsar, Gulay; Ham, Roger; Tannous, W Kathy

    2017-02-10

    The increasing health costs of, and the risk factors associated with, obesity are well documented. From this perspective, it is important to analyze the propensity of individuals towards obesity. This paper uses longitudinal data from the Household Income and Labour Dynamics in Australia (HILDA) Survey for 2005 to 2010 to model the variables that condition the probability of being obese. The model estimated is a random effects generalized ordered probit, which exploits two sources of heterogeneity: the individual heterogeneity of panel data models and heterogeneity across body mass index (BMI) categories. The latter is associated with non-parallel thresholds in the generalized ordered model, where the thresholds are functions of the conditioning variables, which comprise economic, social, demographic, and lifestyle variables. To control for potential predisposition to obesity, personality traits augment the empirical model. The results support the view that the probability of obesity is significantly determined by the conditioning variables. In particular, personality is found to be important, and these outcomes reinforce other work examining personality and obesity.
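
    To make the "non-parallel thresholds" idea concrete, the following Python sketch computes category probabilities for an ordered probit in which each threshold is itself a linear function of the covariates; the covariates, coefficients, and number of BMI categories are hypothetical and only illustrate the mechanics, not the fitted HILDA model.

        import numpy as np
        from scipy.stats import norm

        def gop_category_probs(x, beta, threshold_coefs):
            # Generalized ordered probit: each cut point kappa_j = gamma_j @ x depends on x,
            # which is what makes the thresholds non-parallel across BMI categories.
            index = x @ beta
            kappas = np.sort([g @ x for g in threshold_coefs])          # ordered cut points
            cdf = norm.cdf(np.concatenate(([-np.inf], kappas, [np.inf])) - index)
            return np.diff(cdf)                                          # P(category = j)

        # Hypothetical example: 3 covariates, 4 BMI categories (3 thresholds).
        x = np.array([1.0, 0.4, -0.2])
        beta = np.array([0.3, 0.5, -0.1])
        gammas = [np.array([-1.0, 0.1, 0.0]),
                  np.array([0.0, 0.1, 0.0]),
                  np.array([1.0, 0.1, 0.0])]
        print(gop_category_probs(x, beta, gammas))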

  18. Biogenesis and early life on Earth and Europa: favored by an alkaline ocean?

    PubMed

    Kempe, Stephan; Kazmierczak, Jozef

    2002-01-01

    Recent discoveries about Europa--the probable existence of a sizeable ocean below its ice crust; the detection of hydrated sodium carbonates, among other salts; and the calculation of a net loss of sodium from the subsurface--suggest the existence of an alkaline ocean. Alkaline oceans (nicknamed "soda oceans" in analogy to terrestrial soda lakes) have been hypothesized also for early Earth and Mars on the basis of mass balance considerations involving total amounts of acids available for weathering and the composition of the early crust. Such an environment could be favorable to biogenesis since it may have provided for very low Ca2+ concentrations mandatory for the biochemical function of proteins. A rapid loss of CO2 from Europa's atmosphere may have led to freezing oceans. Alkaline brine bubbles embedded in ice in freezing and impact-thawing oceans could have provided a suitable environment for protocell formation and the large number of trials needed for biogenesis. Understanding these processes could be central to assessing the probability of life on Europa.

  19. Spatial distribution and occurrence probability of regional new particle formation events in eastern China

    NASA Astrophysics Data System (ADS)

    Shen, Xiaojing; Sun, Junying; Kivekäs, Niku; Kristensson, Adam; Zhang, Xiaoye; Zhang, Yangmei; Zhang, Lu; Fan, Ruxia; Qi, Xuefei; Ma, Qianli; Zhou, Huaigang

    2018-01-01

    In this work, the spatial extent of new particle formation (NPF) events and the relative probability of observing particles originating from different spatial origins around three rural sites in eastern China were investigated using the NanoMap method, using particle number size distribution (PNSD) data and air mass back trajectories. The length of the datasets used were 7, 1.5, and 3 years at rural sites Shangdianzi (SDZ) in the North China Plain (NCP), Mt. Tai (TS) in central eastern China, and Lin'an (LAN) in the Yangtze River Delta region in eastern China, respectively. Regional NPF events were observed to occur with the horizontal extent larger than 500 km at SDZ and TS, favoured by the fast transport of northwesterly air masses. At LAN, however, the spatial footprint of NPF events was mostly observed around the site within 100-200 km. Difference in the horizontal spatial distribution of new particle source areas at different sites was connected to typical meteorological conditions at the sites. Consecutive large-scale regional NPF events were observed at SDZ and TS simultaneously and were associated with a high surface pressure system dominating over this area. Simultaneous NPF events at SDZ and LAN were seldom observed. At SDZ the polluted air masses arriving over the NCP were associated with higher particle growth rate (GR) and new particle formation rate (J) than air masses from Inner Mongolia (IM). At TS the same phenomenon was observed for J, but GR was somewhat lower in air masses arriving over the NCP compared to those arriving from IM. The capability of NanoMap to capture the NPF occurrence probability depends on the length of the dataset of PNSD measurement but also on topography around the measurement site and typical air mass advection speed during NPF events. Thus the long-term measurements of PNSD in the planetary boundary layer are necessary in the further study of spatial extent and the probability of NPF events. The spatial extent, relative probability of occurrence, and typical evolution of PNSD during NPF events presented in this study provide valuable information to further understand the climate and air quality effects of new particle formation.

  20. Functional group composition of organic aerosol from combustion emissions and secondary processes at two contrasted urban environments

    NASA Astrophysics Data System (ADS)

    El Haddad, Imad; Marchand, Nicolas; D'Anna, Barbara; Jaffrezo, Jean Luc; Wortham, Henri

    2013-08-01

    The quantification of major functional groups in atmospheric organic aerosol (OA) provides a constraint on the types of compounds emitted and formed in atmospheric conditions. This paper presents functional group composition of organic aerosol from two contrasted urban environments: Marseille during summer and Grenoble during winter. Functional groups were determined using a tandem mass spectrometry approach, enabling the quantification of carboxylic (RCOOH), carbonyl (RCOR‧), and nitro (RNO2) functional groups. Using a multiple regression analysis, absolute concentrations of functional groups were combined with those of organic carbon derived from different sources in order to infer the functional group contents of different organic aerosol fractions. These fractions include fossil fuel combustion emissions, biomass burning emissions and secondary organic aerosol (SOA). Results clearly highlight the differences between functional group fingerprints of primary and secondary OA fractions. OA emitted from primary sources is found to be moderately functionalized, as about 20 carbons per 1000 bear one of the functional groups determined here, whereas SOA is much more functionalized, as on average 94 carbons per 1000 bear one of the functional groups under study. Aging processes appear to increase both RCOOH and RCOR‧ functional group contents by nearly one order of magnitude. Conversely, RNO2 content is found to decrease with photochemical processes. Finally, our results also suggest that other functional groups significantly contribute to biomass smoke and SOA. In particular, for SOA, the overall oxygen content, assessed using aerosol mass spectrometer measurements by an O:C ratio of 0.63, is significantly higher than the apparent O:C* ratio of 0.17 estimated based on the functional groups measured here. A thorough examination of our data suggests that this remaining unexplained oxygen content can most probably be assigned to alcohol (ROH), organic peroxides (ROOH), organonitrates (RONO2) and/or organosulfates (ROSO3H).

  1. Performance of Shear Wave Elastography for Differentiation of Benign and Malignant Solid Breast Masses

    PubMed Central

    Song, Yi-Jiang; Deng, Zhu-Jun; Gao, Jian; Xie, Yan; Yin, Tian-Sheng; Ying, Li; Tang, Kai-Fu

    2013-01-01

    Objectives To perform a meta-analysis assessing the ability of shear wave elastography (SWE) to identify malignant breast masses. Methods PubMed, the Cochrane Library, and the ISI Web of Knowledge were searched for studies evaluating the accuracy of SWE for identifying malignant breast masses. The diagnostic accuracy of SWE was evaluated according to sensitivity, specificity, and hierarchical summary receiver operating characteristic (HSROC) curves. An analysis was also performed according to the SWE mode used: supersonic shear imaging (SSI) and the acoustic radiation force impulse (ARFI) technique. The clinical utility of SWE for identifying malignant breast masses was evaluated using Fagan plot analysis. Results A total of 9 studies, including 1888 women and 2000 breast masses, were analyzed. Summary sensitivities and specificities were 0.91 (95% confidence interval [CI], 0.88–0.94) and 0.82 (95% CI, 0.75–0.87) by SSI and 0.89 (95% CI, 0.81–0.94) and 0.91 (95% CI, 0.84–0.95) by ARFI, respectively. The HSROCs for SSI and ARFI were 0.92 (95% CI, 0.90–0.94) and 0.96 (95% CI, 0.93–0.97), respectively. SSI and ARFI were both very informative, with probabilities of 83% and 91%, respectively, for correctly differentiating between benign and malignant breast masses following a “positive” measurement (over the threshold value) and probabilities of disease as low as 10% and 11%, respectively, following a “negative” measurement (below the threshold value) when the pre-test probability was 50%. Conclusions SWE could be used as a good tool for classifying breast masses. PMID:24204613
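
    The quoted post-test probabilities follow from Bayes' theorem applied through likelihood ratios, which is exactly what a Fagan plot encodes. The short Python sketch below recovers the 83%/10% (SSI) and 91%/11% (ARFI) figures from the summary sensitivities and specificities and a 50% pre-test probability.

        def post_test_probability(pre_test_p, sensitivity, specificity, positive=True):
            # Bayes via likelihood ratios, as read off a Fagan nomogram.
            lr = sensitivity / (1 - specificity) if positive else (1 - sensitivity) / specificity
            pre_odds = pre_test_p / (1 - pre_test_p)
            post_odds = pre_odds * lr
            return post_odds / (1 + post_odds)

        # Summary estimates from the meta-analysis, pre-test probability 0.50.
        for name, sens, spec in [("SSI", 0.91, 0.82), ("ARFI", 0.89, 0.91)]:
            p_pos = post_test_probability(0.50, sens, spec, positive=True)
            p_neg = post_test_probability(0.50, sens, spec, positive=False)
            print(f"{name}: positive -> {p_pos:.0%}, negative -> {p_neg:.0%}")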

  2. The reionization times of z=0 galaxies

    NASA Astrophysics Data System (ADS)

    Aubert, Dominique

    2018-05-01

    We study the inhomogeneity of the reionization process by comparing the reionization times of z = 0 galaxies as a function of their mass. For this purpose, we combine the results of the CODA-I AMR radiative hydrodynamics simulation of the Reionization with the halo merger trees of a pure dark matter tree-code z = 0 simulation evolved from the same set of initial conditions. We find that galaxies with M(z = 0) > 1011M⊙ are reionized earlier than the whole Universe, with e.g. MW-like haloes reionized between 100 and 300 million years before the diffuse IGM. Lighter galaxies reionized as late as the global volume, probably from external radiation.

  3. Zero-truncated negative binomial - Erlang distribution

    NASA Astrophysics Data System (ADS)

    Bodhisuwan, Winai; Pudprommarat, Chookait; Bodhisuwan, Rujira; Saothayanun, Luckhana

    2017-11-01

    The zero-truncated negative binomial-Erlang distribution is introduced. It is developed from the negative binomial-Erlang distribution. In this work, the probability mass function is derived and some properties are included. The parameters of the zero-truncated negative binomial-Erlang distribution are estimated by maximum likelihood estimation. Finally, the proposed distribution is applied to real count data on methamphetamine in Bangkok, Thailand. Based on the results, the zero-truncated negative binomial-Erlang distribution provides a better fit than the zero-truncated Poisson, zero-truncated negative binomial, zero-truncated generalized negative binomial and zero-truncated Poisson-Lindley distributions for these data.
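
    The zero-truncation step itself is generic: any count distribution with pmf p(k) is truncated at zero by renormalizing over k >= 1, i.e. p_zt(k) = p(k) / (1 - p(0)). The sketch below illustrates this with an ordinary negative binomial, since the negative binomial-Erlang pmf itself is derived in the paper and not reproduced in this abstract.

        from scipy.stats import nbinom

        def zero_truncated_pmf(pmf, k):
            # Generic zero-truncation: p_zt(k) = p(k) / (1 - p(0)) for k = 1, 2, ...
            return pmf(k) / (1.0 - pmf(0)) if k >= 1 else 0.0

        # Illustration with a plain negative binomial (parameters are arbitrary).
        r, p = 2.0, 0.4
        pmf = lambda k: nbinom.pmf(k, r, p)
        print([round(zero_truncated_pmf(pmf, k), 4) for k in range(0, 10)])   # first entry is 0
        print(sum(zero_truncated_pmf(pmf, k) for k in range(1, 200)))         # ~1.0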

  4. Peptide identification

    DOEpatents

    Jarman, Kristin H [Richland, WA; Cannon, William R [Richland, WA; Jarman, Kenneth D [Richland, WA; Heredia-Langner, Alejandro [Richland, WA

    2011-07-12

    Peptides are identified from a list of candidates using collision-induced dissociation tandem mass spectrometry data. A probabilistic model for the occurrence of spectral peaks corresponding to frequently observed partial peptide fragment ions is applied. As part of the identification procedure, a probability score is produced that indicates the likelihood of any given candidate being the correct match. The statistical significance of the score is known without necessarily having reference to the actual identity of the peptide. In one form of the invention, a genetic algorithm is applied to candidate peptides using an objective function that takes into account the number of shifted peaks appearing in the candidate spectrum relative to the test spectrum.

  5. a Point-Like Picture of the Hydrogen Atom

    NASA Astrophysics Data System (ADS)

    Faghihi, F.; Jangjoo, A.; Khani, M.

    A point-like picture of the Schrödinger solution for the hydrogen atom is worked out to emphasize that "point-like particles" may be described by a "probability wave function". In each case, the three-dimensional shape of |Ψnlm(rn, cosθ)|² is plotted and the paths of the point-like electron (more precisely, of the reduced mass of the particle pair) are described in each closed shell. Finally, the orbital shapes of molecules are given according to the present simple model. In our opinion, "interpretations of the Correspondence Principle", which is a basic principle in all elementary quantum texts, seem worth reviewing again!
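
    As a concrete example of the quantities being plotted, the Python sketch below evaluates |Ψ100|² and the corresponding radial probability density for the 1s ground state in units of the Bohr radius; the higher (n, l, m) shapes discussed in the record follow the same pattern with the appropriate wavefunctions.

        import numpy as np

        A0 = 1.0  # Bohr radius in atomic units (strictly, that of the reduced-mass electron)

        def psi_sq_1s(r):
            # |psi_100(r)|^2 = exp(-2 r / a0) / (pi a0^3) for the 1s state
            return np.exp(-2.0 * r / A0) / (np.pi * A0 ** 3)

        def radial_density_1s(r):
            # P(r) = 4 pi r^2 |psi|^2, which peaks at r = a0
            return 4.0 * np.pi * r ** 2 * psi_sq_1s(r)

        r = np.linspace(0.0, 6.0, 601)
        print("most probable radius ~", r[np.argmax(radial_density_1s(r))], "a0")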

  6. On the definition of absorbed dose

    NASA Astrophysics Data System (ADS)

    Grusell, Erik

    2015-02-01

    Purpose: The quantity absorbed dose is used extensively in all areas concerning the interaction of ionizing radiation with biological organisms, as well as with matter in general. The most recent and authoritative definition of absorbed dose is given by the International Commission on Radiation Units and Measurements (ICRU) in ICRU Report 85. However, that definition is incomplete. The purpose of the present work is to give a rigorous definition of absorbed dose. Methods: Absorbed dose is defined in terms of the random variable specific energy imparted. A random variable is a mathematical function, and it cannot be defined without specifying its domain of definition which is a probability space. This is not done in report 85 by the ICRU, mentioned above. Results: In the present work a definition of a suitable probability space is given, so that a rigorous definition of absorbed dose is possible. This necessarily includes the specification of the experiment which the probability space describes. In this case this is an irradiation, which is specified by the initial particles released and by the material objects which can interact with the radiation. Some consequences are discussed. Specific energy imparted is defined for a volume, and the definition of absorbed dose as a point function involves the specific energy imparted for a small mass contained in a volume surrounding the point. A possible more precise definition of this volume is suggested and discussed. Conclusions: The importance of absorbed dose motivates a proper definition, and one is given in the present work. No rigorous definition has been presented before.
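
    For reference, the standard ICRU-style relations that the abstract builds on can be summarized compactly (this is background notation only; the paper's contribution is the explicit probability space on which these stochastic quantities are defined):

        % specific energy and absorbed dose (ICRU-style definitions)
        z = \frac{\varepsilon}{m}, \qquad
        D = \frac{\mathrm{d}\bar{\varepsilon}}{\mathrm{d}m} = \lim_{m \to 0} \mathbb{E}[z],

    where ε is the stochastic energy imparted to matter of mass m, z is the specific energy, and the absorbed dose D at a point is the expectation of z for a vanishingly small mass element surrounding that point.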

  7. Limitations of Reliability for Long-Endurance Human Spaceflight

    NASA Technical Reports Server (NTRS)

    Owens, Andrew C.; de Weck, Olivier L.

    2016-01-01

    Long-endurance human spaceflight - such as missions to Mars or its moons - will present a never-before-seen maintenance logistics challenge. Crews will be in space for longer and be farther away from Earth than ever before. Resupply and abort options will be heavily constrained, and will have timescales much longer than current and past experience. Spare parts and/or redundant systems will have to be included to reduce risk. However, the high cost of transportation means that this risk reduction must be achieved while also minimizing mass. The concept of increasing system and component reliability is commonly discussed as a means to reduce risk and mass by reducing the probability that components will fail during a mission. While increased reliability can reduce maintenance logistics mass requirements, the rate of mass reduction decreases over time. In addition, reliability growth requires increased test time and cost. This paper assesses trends in test time requirements, cost, and maintenance logistics mass savings as a function of increase in Mean Time Between Failures (MTBF) for some or all of the components in a system. In general, reliability growth results in superlinear growth in test time requirements, exponential growth in cost, and sublinear benefits (in terms of logistics mass saved). These trends indicate that it is unlikely that reliability growth alone will be a cost-effective approach to maintenance logistics mass reduction and risk mitigation for long-endurance missions. This paper discusses these trends as well as other options to reduce logistics mass such as direct reduction of part mass, commonality, or In-Space Manufacturing (ISM). Overall, it is likely that some combination of all available options - including reliability growth - will be required to reduce mass and mitigate risk for future deep space missions.
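
    The diminishing return on MTBF growth can be illustrated with a simple spares-sufficiency calculation. Assuming exponential component failures (a Poisson failure process), the probability that a fixed number of spares covers a mission of given length rises ever more slowly as MTBF doubles; the mission duration and spare count below are hypothetical, not taken from the paper.

        import math

        def prob_spares_sufficient(mtbf_hours, mission_hours, n_spares):
            # P(failures <= n_spares) for a Poisson failure process with rate 1/MTBF
            lam = mission_hours / mtbf_hours
            return sum(math.exp(-lam) * lam ** k / math.factorial(k)
                       for k in range(n_spares + 1))

        mission = 3 * 8760.0  # roughly a 3-year mission, hypothetical
        for mtbf in (5_000, 10_000, 20_000, 40_000, 80_000):
            print(mtbf, round(prob_spares_sufficient(mtbf, mission, n_spares=2), 4))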

  8. Cluster analysis of the organic peaks in bulk mass spectra obtained during the 2002 New England Air Quality Study with an Aerodyne aerosol mass spectrometer

    NASA Astrophysics Data System (ADS)

    Marcolli, C.; Canagaratna, M. R.; Worsnop, D. R.; Bahreini, R.; de Gouw, J. A.; Warneke, C.; Goldan, P. D.; Kuster, W. C.; Williams, E. J.; Lerner, B. M.; Roberts, J. M.; Meagher, J. F.; Fehsenfeld, F. C.; Marchewka, M. L.; Bertman, S. B.; Middlebrook, A. M.

    2006-06-01

    We applied hierarchical cluster analysis to an Aerodyne aerosol mass spectrometer (AMS) bulk mass spectral dataset collected aboard the NOAA research vessel Ronald H. Brown during the 2002 New England Air Quality Study off the east coast of the United States. Emphasizing the organic peaks, the cluster analysis yielded a series of categories that are distinguishable with respect to their mass spectra and their occurrence as a function of time. The differences between the categories mainly arise from relative intensity changes rather than from the presence or absence of specific peaks. The most frequent category exhibits a strong signal at m/z 44 and represents oxidized organic matter most probably originating from both anthropogenic and biogenic sources. On the basis of spectral and trace gas correlations, the second most common category with strong signals at m/z 29, 43, and 44 contains contributions from isoprene oxidation products. The third through the fifth most common categories have peak patterns characteristic of monoterpene oxidation products and were most frequently observed when air masses from monoterpene-rich regions were sampled. Taken together, the second through the fifth most common categories represent as much as 5 µg/m3 organic aerosol mass - 17% of the total organic mass - that can be attributed to biogenic sources. These numbers have to be viewed as lower limits since the most common category was attributed to anthropogenic sources for this calculation. The cluster analysis was also very effective in identifying a few contaminated mass spectra that were not removed during pre-processing. This study demonstrates that hierarchical clustering is a useful tool to analyze the complex patterns of the organic peaks in bulk aerosol mass spectra from a field study.

  9. The Mass Surface Density Distribution of a High-Mass Protocluster forming from an IRDC and GMC

    NASA Astrophysics Data System (ADS)

    Lim, Wanggi; Tan, Jonathan C.; Kainulainen, Jouni; Ma, Bo; Butler, Michael

    2016-01-01

    We study the probability distribution function (PDF) of mass surface densities of infrared dark cloud (IRDC) G028.36+00.07 and its surrounding giant molecular cloud (GMC). Such PDF analysis has the potential to probe the physical processes that are controlling cloud structure and star formation activity. The chosen IRDC is of particular interest since it has almost 100,000 solar masses within a radius of 8 parsecs, making it one of the most massive, dense molecular structures known and thus a potential site for the formation of a high-mass, "super star cluster". We study mass surface densities in two ways. First, we use a combination of NIR, MIR and FIR extinction maps that are able to probe the bulk of the cloud structure that is not yet forming stars. This analysis also shows evidence for flattening of the IR extinction law as mass surface density increases, consistent with increasing grain size and/or growth of ice mantles. Second, we study the FIR and sub-mm dust continuum emission from the cloud, especially utilizing Herschel PACS and SPIRE images. We first subtract off the contribution of the foreground diffuse emission that contaminates these images. Next we examine the effects of background subtraction and choice of dust opacities on the derived mass surface density PDF. The final derived PDFs from both methods are compared with each other and with other published studies of this cloud. The implications for theoretical models and simulations of cloud structure, including the role of turbulence and magnetic fields, are discussed.

  10. Modeling of Plutonium Ionization Probabilities for Use in Nuclear Forensic Analysis by Resonance Ionization Mass Spectrometry

    DTIC Science & Technology

    2016-12-01

    Only fragmentary snippet text is available for this record. The recoverable statements are that when two subcritical masses collide they form a supercritical mass, and that criticality refers to the neutron population within the system (the definition of a critical system is truncated in the snippet). The remaining fragments are citation excerpts concerning trace analysis of actinides in the environment by means of resonance ionization mass spectrometry and determination of the first ionization potential of actinide elements by resonance ionization mass spectrometry (e.g., a journal reference with no. 242, pp. 161-168, 2005; Spectrochimica Acta Part B: Atomic Spectroscopy, vol. 52).

  11. Collisions of slow ions C3Hn+ and C3Dn+ (n = 2-8) with room temperature carbon surfaces: mass spectra of product ions and the ion survival probability.

    PubMed

    Pysanenko, Andriy; Zabka, Jan; Feketeová, Linda; Märk, Tilmann D; Herman, Zdenek

    2008-01-01

    Collisions of C3Hn+ (n = 2-8) ions and some of their per-deuterated analogs with room temperature carbon (HOPG) surfaces (hydrocarbon-covered) were investigated over the incident energy range 13-45 eV in beam scattering experiments. The mass spectra of product ions were measured and the main fragmentation paths of the incident projectile ions, energized in the surface collision, were determined. The extent of fragmentation increased with increasing incident energy. Mass spectra of even-electron ions C3H7+ and C3H5+ showed only fragmentations, whereas mass spectra of radical cations C3H8*+ and C3H6*+ showed both simple fragmentations of the projectile ion and formation of products of its surface chemical reaction (H-atom transfer between the projectile ion and hydrocarbons on the surface). No carbon-chain build-up reaction (formation of C4 hydrocarbons) was detected. The survival probability of the incident ions, S(a), was usually found to be about 1-2% for the radical cation projectile ions C3H8*+, C3H6*+, C3H4*+ and C3H2*+ and several percent up to about 20% for the even-electron projectile ions C3H7+, C3H5+, C3H3+. A plot of S(a) values of C1, C2, C3, some C7 hydrocarbon ions, Ar+ and CO2+ on hydrocarbon-covered carbon surfaces as a function of the ionization energies (IE) of the projectile species showed a drop from about 10% to about 1% and less at IE of 8.5-9.5 eV and a further decrease with increasing IE. A strong correlation was found between log S(a) and IE, a linear decrease over the entire range of IE investigated (7-16 eV), described by log S(a) = (3.9 +/- 0.5) - (0.39 +/- 0.04) IE.
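
    The quoted fit can be evaluated directly. The sketch below uses the central values of the fit and assumes S(a) is expressed in per cent, which is consistent with the survival probabilities quoted in the abstract (roughly 15% at IE = 7 eV, a few per cent near 9 eV, and well below 1% at the highest IE).

        def survival_probability_percent(ionization_energy_eV):
            # Central fit from the abstract: log10 S(a) = 3.9 - 0.39 * IE, with IE in eV
            # and S(a) assumed to be in per cent.
            return 10.0 ** (3.9 - 0.39 * ionization_energy_eV)

        for ie in (7.0, 9.0, 11.0, 13.0, 16.0):
            print(f"IE = {ie:4.1f} eV  ->  S(a) ~ {survival_probability_percent(ie):8.3f} %")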

  12. Modeling dust growth in protoplanetary disks: The breakthrough case

    NASA Astrophysics Data System (ADS)

    Drążkowska, J.; Windmark, F.; Dullemond, C. P.

    2014-07-01

    Context. Dust coagulation in protoplanetary disks is one of the initial steps toward planet formation. Simple toy models are often not sufficient to cover the complexity of the coagulation process, and a number of numerical approaches are therefore used, among which integration of the Smoluchowski equation and various versions of the Monte Carlo algorithm are the most popular. Aims: Recent progress in understanding the processes involved in dust coagulation have caused a need for benchmarking and comparison of various physical aspects of the coagulation process. In this paper, we directly compare the Smoluchowski and Monte Carlo approaches to show their advantages and disadvantages. Methods: We focus on the mechanism of planetesimal formation via sweep-up growth, which is a new and important aspect of the current planet formation theory. We use realistic test cases that implement a distribution in dust collision velocities. This allows a single collision between two grains to have a wide range of possible outcomes but also requires a very high numerical accuracy. Results: For most coagulation problems, we find a general agreement between the two approaches. However, for the sweep-up growth driven by the "lucky" breakthrough mechanism, the methods exhibit very different resolution dependencies. With too few mass bins, the Smoluchowski algorithm tends to overestimate the growth rate and the probability of breakthrough. The Monte Carlo method is less dependent on the number of particles in the growth timescale aspect but tends to underestimate the breakthrough chance due to its limited dynamic mass range. Conclusions: We find that the Smoluchowski approach, which is generally better for the breakthrough studies, is sensitive to low mass resolutions in the high-mass, low-number tail that is important in this scenario. To study the low number density features, a new modulation function has to be introduced to the interaction probabilities. As the minimum resolution needed for breakthrough studies depends strongly on setup, verification has to be performed on a case by case basis.
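
    For orientation, the Smoluchowski side of the comparison integrates the discrete coagulation equation dn_k/dt = (1/2) Σ_{i+j=k} K_ij n_i n_j − n_k Σ_j K_kj n_j. The Python sketch below performs one explicit Euler step on a mass grid with a constant kernel; it is a toy illustration of the equation being solved, not the paper's velocity-distribution setup or mass-binning scheme.

        import numpy as np

        def smoluchowski_step(n, K, dt):
            # One explicit Euler step of the discrete Smoluchowski equation.
            # n[k] is the number density of particles of mass (k + 1) mass units.
            kmax = len(n)
            dndt = np.zeros_like(n)
            for k in range(kmax):
                gain = 0.5 * sum(K[i, k - 1 - i] * n[i] * n[k - 1 - i] for i in range(k))
                loss = n[k] * sum(K[k, j] * n[j] for j in range(kmax))
                dndt[k] = gain - loss
            return n + dt * dndt

        # Constant-kernel toy problem: start with monomers only.
        kmax = 32
        K = np.ones((kmax, kmax))
        n = np.zeros(kmax)
        n[0] = 1.0
        for _ in range(100):
            n = smoluchowski_step(n, K, dt=0.01)
        # Mass is conserved only up to losses beyond the finite grid.
        print("total mass:", sum((k + 1) * n[k] for k in range(kmax)))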

  13. Cyber-Physical Correlations for Infrastructure Resilience: A Game-Theoretic Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S; He, Fei; Ma, Chris Y. T.

    In several critical infrastructures, the cyber and physical parts are correlated so that disruptions to one affect the other and hence the whole system. These correlations may be exploited to strategically launch attacks on components, and hence must be accounted for in ensuring the infrastructure resilience, specified by its survival probability. We characterize the cyber-physical interactions at two levels: (i) the failure correlation function specifies the conditional survival probability of the cyber sub-infrastructure given the physical sub-infrastructure as a function of their marginal probabilities, and (ii) the individual survival probabilities of both sub-infrastructures are characterized by first-order differential conditions. We formulate a resilience problem for infrastructures composed of discrete components as a game between the provider and attacker, wherein their utility functions consist of an infrastructure survival probability term and a cost term expressed in terms of the number of components attacked and reinforced. We derive Nash Equilibrium conditions and sensitivity functions that highlight the dependence of infrastructure resilience on the cost term, correlation function and sub-infrastructure survival probabilities. These results generalize earlier ones based on linear failure correlation functions and independent component failures. We apply the results to models of cloud computing infrastructures and energy grids.

  14. Hydrogen collisions with transition metal surfaces: Universal electronically nonadiabatic adsorption

    NASA Astrophysics Data System (ADS)

    Dorenkamp, Yvonne; Jiang, Hongyan; Köckert, Hansjochen; Hertl, Nils; Kammler, Marvin; Janke, Svenja M.; Kandratsenka, Alexander; Wodtke, Alec M.; Bünermann, Oliver

    2018-01-01

    Inelastic scattering of H and D atoms from the (111) surfaces of six fcc transition metals (Au, Pt, Ag, Pd, Cu, and Ni) was investigated, and in each case, excitation of electron-hole pairs dominates the inelasticity. The results are very similar for all six metals. Differences in the average kinetic energy losses between metals can mainly be attributed to different efficiencies in the coupling to phonons due to the different masses of the metal atoms. The experimental observations can be reproduced by molecular dynamics simulations based on full-dimensional potential energy surfaces and including electronic excitations by using electronic friction in the local density friction approximation. The determining factors for the energy loss are the electron density at the surface, which is similar for all six metals, and the mass ratio between the impinging atoms and the surface atoms. Details of the electronic structure of the metal do not play a significant role. The experimentally validated simulations are used to explore sticking over a wide range of incidence conditions. We find that the sticking probability increases for H and D collisions near normal incidence, consistent with a previously reported penetration-resurfacing mechanism. The sticking probability for H or D on any of these metals may be represented as a simple function of the incidence energy, Ein, metal atom mass, M, and incidence angle, ϑin: S = (S0 + a·Ein + b·M) × (1 − h(ϑin − c)·(1 − cos(ϑin − c)·d·h(Ein − e)·(Ein − e))), where h is the Heaviside step function and, for H, S0 = 1.081, a = -0.125 eV^-1, b = -8.40 × 10^-4 u^-1, c = 28.88°, d = 1.166 eV^-1, and e = 0.442 eV; whereas for D, S0 = 1.120, a = -0.124 eV^-1, b = -1.20 × 10^-3 u^-1, c = 28.62°, d = 1.196 eV^-1, and e = 0.474 eV.
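
    The fitted form can be evaluated directly from the constants quoted above. The following Python sketch transcribes the expression as printed; the grouping of the correction term is taken literally from the abstract and should be checked against the original article before quantitative use. The example metal mass (Au) and incidence conditions are illustrative only.

        import math

        def sticking_probability(E_in_eV, M_amu, theta_in_deg, isotope="H"):
            # Fitted sticking probability, transcribed from the abstract's formula; treat as a sketch.
            if isotope == "H":
                S0, a, b, c, d, e = 1.081, -0.125, -8.40e-4, 28.88, 1.166, 0.442
            else:  # D
                S0, a, b, c, d, e = 1.120, -0.124, -1.20e-3, 28.62, 1.196, 0.474
            h = lambda x: 1.0 if x > 0 else 0.0            # Heaviside step function
            ang = math.radians(theta_in_deg - c)
            base = S0 + a * E_in_eV + b * M_amu
            correction = 1.0 - h(theta_in_deg - c) * (
                1.0 - math.cos(ang) * d * h(E_in_eV - e) * (E_in_eV - e))
            return base * correction

        print(sticking_probability(1.0, 196.97, 0.0))    # H on a Au-mass surface, normal incidence
        print(sticking_probability(1.0, 196.97, 45.0))   # same, off-normal incidence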

  15. Force Density Function Relationships in 2-D Granular Media

    NASA Technical Reports Server (NTRS)

    Youngquist, Robert C.; Metzger, Philip T.; Kilts, Kelly N.

    2004-01-01

    An integral transform relationship is developed to convert between two important probability density functions (distributions) used in the study of contact forces in granular physics. Developing this transform has now made it possible to compare and relate various theoretical approaches with one another and with the experimental data despite the fact that one may predict the Cartesian probability density and another the force magnitude probability density. Also, the transforms identify which functional forms are relevant to describe the probability density observed in nature, and so the modified Bessel function of the second kind has been identified as the relevant form for the Cartesian probability density corresponding to exponential forms in the force magnitude distribution. Furthermore, it is shown that this transform pair supplies a sufficient mathematical framework to describe the evolution of the force magnitude distribution under shearing. Apart from the choice of several coefficients, whose evolution of values must be explained in the physics, this framework successfully reproduces the features of the distribution that are taken to be an indicator of jamming and unjamming in a granular packing. Key words. Granular Physics, Probability Density Functions, Fourier Transforms

  16. Optimization of armored spherical tanks for storage on the lunar surface

    NASA Technical Reports Server (NTRS)

    Bents, D. J.; Knight, D. A.

    1992-01-01

    A redundancy strategy for reducing micrometeoroid armoring mass is investigated, with application to cryogenic reactant storage for a regenerative fuel cell (RFC) on the lunar surface. In that micrometeoroid environment, the cryogenic fuel must be protected from loss due to tank puncture. The tankage must have a sufficiently high probability of survival over the length of the mission so that the probability of system failure due to tank puncture is low compared to the other mission risk factors. Assuming that a single meteoroid penetration can cause a storage tank to lose its contents, two means are available to raise the probability of surviving micrometeoroid attack to the desired level. One can armor the tanks to a thickness sufficient to reduce the probability of penetration of any tank to the desired level, or add extra capacity in the form of spare tanks so that a given number out of the ensemble survives at the desired level. A combination of these strategies (armoring and redundancy) is investigated. The objective is to find the optimum combination which yields the lowest shielding mass per cubic meter of surviving fuel out of the original ensemble. The investigation found that, for the volumes of fuel associated with multikilowatt class cryo storage RFC's, and the armoring methodology and meteoroid models used, storage should be fragmented into small individual tanks. Larger installations (more fuel) pay less of a shielding penalty than small installations. For the same survival probability over the same time period, larger volumes will require less armoring mass per unit volume protected.
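
    The redundancy side of the trade is a binomial survival calculation: given independent punctures and a per-tank survival probability set by the armor thickness, the probability that at least the required number of tanks survives follows directly. The per-tank probabilities and tank counts below are hypothetical and only illustrate the comparison being optimized.

        from math import comb

        def ensemble_survival(p_tank, n_tanks, n_required):
            # P(at least n_required of n_tanks survive), assuming independent punctures.
            return sum(comb(n_tanks, k) * p_tank ** k * (1 - p_tank) ** (n_tanks - k)
                       for k in range(n_required, n_tanks + 1))

        # Hypothetical comparison: one heavily armored tank vs. several lighter tanks with a spare.
        print(ensemble_survival(0.999, 1, 1))   # single tank, thick armor
        print(ensemble_survival(0.97, 6, 5))    # six lighter tanks, one allowed to fail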

  17. Analysis of capture-recapture models with individual covariates using data augmentation

    USGS Publications Warehouse

    Royle, J. Andrew

    2009-01-01

    I consider the analysis of capture-recapture models with individual covariates that influence detection probability. Bayesian analysis of the joint likelihood is carried out using a flexible data augmentation scheme that facilitates analysis by Markov chain Monte Carlo methods, and a simple and straightforward implementation in freely available software. This approach is applied to a study of meadow voles (Microtus pennsylvanicus) in which auxiliary data on a continuous covariate (body mass) are recorded, and it is thought that detection probability is related to body mass. In a second example, the model is applied to an aerial waterfowl survey in which a double-observer protocol is used. The fundamental unit of observation is the cluster of individual birds, and the size of the cluster (a discrete covariate) is used as a covariate on detection probability.

  18. Modeling the effect of reward amount on probability discounting.

    PubMed

    Myerson, Joel; Green, Leonard; Morris, Joshua

    2011-03-01

    The present study with college students examined the effect of amount on the discounting of probabilistic monetary rewards. A hyperboloid function accurately described the discounting of hypothetical rewards ranging in amount from $20 to $10,000,000. The degree of discounting increased continuously with amount of probabilistic reward. This effect of amount was not due to changes in the rate parameter of the discounting function, but rather was due to increases in the exponent. These results stand in contrast to those observed with the discounting of delayed monetary rewards, in which the degree of discounting decreases with reward amount due to amount-dependent decreases in the rate parameter. Taken together, this pattern of results suggests that delay and probability discounting reflect different underlying mechanisms. That is, the fact that the exponent in the delay discounting function is independent of amount is consistent with a psychophysical scaling interpretation, whereas the finding that the exponent of the probability-discounting function is amount-dependent is inconsistent with such an interpretation. Instead, the present results are consistent with the idea that the probability-discounting function is itself the product of a value function and a weighting function. This idea was first suggested by Kahneman and Tversky (1979), although their prospect theory does not predict amount effects like those observed. The effect of amount on probability discounting was parsimoniously incorporated into our hyperboloid discounting function by assuming that the exponent was proportional to the amount raised to a power. The amount-dependent exponent of the probability-discounting function may be viewed as reflecting the effect of amount on the weighting of the probability with which the reward will be received.
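
    A minimal sketch of the kind of model described, assuming a hyperboloid of the odds-against form V = A / (1 + h·θ)^s with θ = (1 − p)/p and an exponent that grows as a power of the amount, is given below; the parameter values are hypothetical, not the fitted estimates from the study, and serve only to show how larger amounts end up more steeply discounted.

        def discounted_value(amount, p, h=1.0, s0=0.7, b=0.05):
            # Hyperboloid probability discounting with an amount-dependent exponent.
            # theta = (1 - p) / p is the odds against winning; h, s0, b are hypothetical.
            theta = (1.0 - p) / p
            s = s0 * amount ** b          # exponent proportional to amount raised to a power
            return amount / (1.0 + h * theta) ** s

        for amount in (20, 10_000, 10_000_000):
            rel = discounted_value(amount, p=0.5) / amount
            print(f"amount = {amount:>10}: subjective value = {rel:.2f} x nominal")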

  19. Probability of the moiré effect in barrier and lenticular autostereoscopic 3D displays.

    PubMed

    Saveljev, Vladimir; Kim, Sung-Kyu

    2015-10-05

    The probability of the moiré effect in LCD displays is estimated as a function of angle based on the experimental data; a theoretical function (node spacing) is proposed based on the distance between nodes. Both functions are close to each other. A connection between the probability of the moiré effect and Thomae's function is also found. The function proposed in this paper can be used to minimize the moiré effect in visual displays, especially in autostereoscopic 3D displays.

  20. Forecasting neutrino masses from combining KATRIN and the CMB observations: Frequentist and Bayesian analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Host, Ole; Lahav, Ofer; Abdalla, Filipe B.

    We present a showcase for deriving bounds on the neutrino masses from laboratory experiments and cosmological observations. We compare the frequentist and Bayesian bounds on the effective electron neutrino mass m_β which the KATRIN neutrino mass experiment is expected to obtain, using both an analytical likelihood function and Monte Carlo simulations of KATRIN. Assuming a uniform prior in m_β, we find that a null result yields an upper bound of about 0.17 eV at 90% confidence in the Bayesian analysis, to be compared with the frequentist KATRIN reference value of 0.20 eV. This is a significant difference when judged relative to the systematic and statistical uncertainties of the experiment. On the other hand, an input m_β = 0.35 eV, which is the KATRIN 5σ detection threshold, would be detected at virtually the same level. Finally, we combine the simulated KATRIN results with cosmological data in the form of present (post-WMAP) and future (simulated Planck) observations. If an input of m_β = 0.2 eV is assumed in our simulations, KATRIN alone excludes a zero neutrino mass at 2.2σ. Adding Planck data increases the probability of detection to a median 2.7σ. The analysis highlights the importance of combining cosmological and laboratory data on an equal footing.

  1. The increase in physical performance and gain in lean and fat mass occur in prepubertal children independent of mode of school transportation. One year data from the prospective controlled Pediatric Osteoporosis Prevention (POP) Study

    PubMed Central

    2009-01-01

    Background The aim of this 12-month study in pre-pubertal children was to evaluate the effect of school transportation on gain in lean and fat mass, muscle strength and physical performance. Methods Ninety-seven girls and 133 boys aged 7-9 years from the Malmö Pediatric Osteoporosis Prevention Study were included. Regional lean and fat mass were assessed by dual energy X-ray absorptiometry, isokinetic peak torque of knee extensors and flexors by a computerised dynamometer and physical performance by vertical jump height. Level of physical activity was assessed by accelerometers. The 12-month changes in children who walked or cycled to school were compared with changes in those who travelled by bus or car. Results There were no differences in baseline or annual changes in lean or fat mass gain, muscle strength or physical performance between the two groups. All children reached the internationally recommended level of 60 minutes per day of moderate or high physical activity by accelerometers. Conclusion The choice of school transportation in pre-pubertal children seems not to influence the gain in lean and fat mass, muscle strength or functional ability, probably as the everyday physical activity is so high that the mode of school transportation contributes little to the total level of activity.

  2. The extra-atmospheric mass of small meteoroids of the Prairie and Canada bolide camera networks

    NASA Astrophysics Data System (ADS)

    Popelenskaya, N. V.; Stulov, V. P.

    2008-04-01

    The existing methods for determining the extra-atmospheric mass of meteor bodies from observations of their movement in the atmosphere allow a certain arbitrariness. Active attempts to overcome the discrepancy between the results of calculations based on different approaches often lead to physically incorrect conclusions. A way out is to laboriously accumulate the estimates and computation results and to consistently remove ambiguities. To correctly interpret the observed brightness of a meteor, one should use contemporary methods and the results of physical studies of the emitting gas. In the present work, the extra-atmospheric masses of small meteoroids of the Prairie and Canada bolide camera networks were calculated from the observed braking. It turned out that, in many cases, the conditions of movement of meteor bodies in the atmosphere corresponded to a free molecular airflow about a body. The so-called dynamic mass of the bodies was estimated from the real densities of the meteoroid material, which corresponded to monolithic water ice and stone, and for the proper values of the product of the drag coefficient and shape factor. When producing the trial function for the body trajectories in the "velocity-altitude" variables, we did not allow for fragmentation explicitly, since it is less probable for small meteoroids than for large ones. As before, our estimates differ substantially from the photometric masses published in the corresponding tables.

  3. A risk function for behavioral disruption of Blainville's beaked whales (Mesoplodon densirostris) from mid-frequency active sonar.

    PubMed

    Moretti, David; Thomas, Len; Marques, Tiago; Harwood, John; Dilley, Ashley; Neales, Bert; Shaffer, Jessica; McCarthy, Elena; New, Leslie; Jarvis, Susan; Morrissey, Ronald

    2014-01-01

    There is increasing concern about the potential effects of noise pollution on marine life in the world's oceans. For marine mammals, anthropogenic sounds may cause behavioral disruption, and this can be quantified using a risk function that relates sound exposure to a measured behavioral response. Beaked whales are a taxon of deep diving whales that may be particularly susceptible to naval sonar as the species has been associated with sonar-related mass stranding events. Here we derive the first empirical risk function for Blainville's beaked whales (Mesoplodon densirostris) by combining in situ data from passive acoustic monitoring of animal vocalizations and navy sonar operations with precise ship tracks and sound field modeling. The hydrophone array at the Atlantic Undersea Test and Evaluation Center, Bahamas, was used to locate vocalizing groups of Blainville's beaked whales and identify sonar transmissions before, during, and after Mid-Frequency Active (MFA) sonar operations. Sonar transmission times and source levels were combined with ship tracks using a sound propagation model to estimate the received level (RL) at each hydrophone. A generalized additive model was fitted to data to model the presence or absence of the start of foraging dives in 30-minute periods as a function of the corresponding sonar RL at the hydrophone closest to the center of each group. This model was then used to construct a risk function that can be used to estimate the probability of a behavioral change (cessation of foraging) the individual members of a Blainville's beaked whale population might experience as a function of sonar RL. The function predicts a 0.5 probability of disturbance at an RL of 150 dBrms re µPa (CI: 144 to 155). This is 15 dB lower than the level used historically by the US Navy in their risk assessments but 10 dB higher than the current 140 dB step-function.
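
    To illustrate how such a dose-response curve is applied, the sketch below uses a plain logistic risk function anchored at the reported 0.5 probability at 150 dB; the slope parameter is hypothetical, and the study's actual curve is derived from the fitted generalized additive model rather than a logistic form.

        import math

        def p_disturbance(rl_db, rl50=150.0, slope=0.3):
            # Logistic risk function: P(cessation of foraging) as a function of received level.
            # rl50 comes from the abstract; the slope is an assumed illustrative value.
            return 1.0 / (1.0 + math.exp(-slope * (rl_db - rl50)))

        for rl in (130, 140, 150, 160, 170):
            print(f"RL = {rl} dB  ->  P(disturbance) ~ {p_disturbance(rl):.2f}")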

  5. Stellar mass spectrum within massive collapsing clumps. I. Influence of the initial conditions

    NASA Astrophysics Data System (ADS)

    Lee, Yueh-Ning; Hennebelle, Patrick

    2018-04-01

    Context. Stars constitute the building blocks of our Universe, and their formation is an astrophysical problem of great importance. Aims. We aim to understand the fragmentation of massive molecular star-forming clumps and the effect of initial conditions, namely the density and the level of turbulence, on the resulting distribution of stars. For this purpose, we conduct numerical experiments in which we systematically vary the initial density over four orders of magnitude and the turbulent velocity over a factor ten. In a companion paper, we investigate the dependence of this distribution on the gas thermodynamics. Methods: We performed a series of hydrodynamical numerical simulations using adaptive mesh refinement, with special attention to numerical convergence. We also adapted an existing analytical model to the case of collapsing clouds by employing a density probability distribution function (PDF) ∝ ρ^-1.5 instead of a lognormal distribution. Results: Simulations and analytical model both show two support regimes, a thermally dominated regime and a turbulence-dominated regime. For the first regime, we infer that dN/d log M ∝ M^0, while for the second regime, we obtain dN/d log M ∝ M^-3/4. This is valid up to about ten times the mass of the first Larson core, as explained in the companion paper, leading to a peak of the mass spectrum at 0.2 M⊙. From this point, the mass spectrum decreases with decreasing mass except for the most diffuse clouds, where disk fragmentation leads to the formation of objects down to the mass of the first Larson core, that is, to a few 10^-2 M⊙. Conclusions: Although the mass spectra we obtain for the most compact clouds qualitatively resemble the observed initial mass function, the distribution exponent is shallower than the expected Salpeter exponent of -1.35. Nonetheless, we observe a possible transition toward a slightly steeper value that is broadly compatible with the Salpeter exponent for masses above a few solar masses. This change in behavior is associated with the change in density PDF, which switches from a power-law to a lognormal distribution. Our results suggest that while gravitationally induced fragmentation could play an important role for low masses, it is likely the turbulently induced fragmentation that leads to the Salpeter exponent.

  6. Decision making generalized by a cumulative probability weighting function

    NASA Astrophysics Data System (ADS)

    dos Santos, Lindomar Soares; Destefano, Natália; Martinez, Alexandre Souto

    2018-01-01

    Typical examples of intertemporal decision making involve situations in which individuals must choose between a smaller reward, but more immediate, and a larger one, delivered later. Analogously, probabilistic decision making involves choices between options whose consequences differ in relation to their probability of receiving. In Economics, the expected utility theory (EUT) and the discounted utility theory (DUT) are traditionally accepted normative models for describing, respectively, probabilistic and intertemporal decision making. A large number of experiments confirmed that the linearity assumed by the EUT does not explain some observed behaviors, as nonlinear preference, risk-seeking and loss aversion. That observation led to the development of new theoretical models, called non-expected utility theories (NEUT), which include a nonlinear transformation of the probability scale. An essential feature of the so-called preference function of these theories is that the probabilities are transformed by decision weights by means of a (cumulative) probability weighting function, w(p) . We obtain in this article a generalized function for the probabilistic discount process. This function has as particular cases mathematical forms already consecrated in the literature, including discount models that consider effects of psychophysical perception. We also propose a new generalized function for the functional form of w. The limiting cases of this function encompass some parametric forms already proposed in the literature. Far beyond a mere generalization, our function allows the interpretation of probabilistic decision making theories based on the assumption that individuals behave similarly in the face of probabilities and delays and is supported by phenomenological models.
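
    As a concrete instance of a (cumulative) probability weighting function w(p), the sketch below evaluates the widely used one-parameter Tversky-Kahneman form, which produces the inverted-S shape (overweighting of small probabilities, underweighting of large ones); it is shown here only as one of the parametric forms found in the literature, not as the generalized function proposed in this article.

        def tversky_kahneman_w(p, gamma=0.61):
            # w(p) = p^gamma / (p^gamma + (1 - p)^gamma)^(1/gamma)
            # gamma < 1 yields the inverted-S shape; 0.61 is a commonly cited estimate.
            num = p ** gamma
            return num / (num + (1.0 - p) ** gamma) ** (1.0 / gamma)

        for p in (0.01, 0.1, 0.5, 0.9, 0.99):
            print(f"p = {p:4.2f}  ->  w(p) = {tversky_kahneman_w(p):.3f}")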

  7. Failure detection system risk reduction assessment

    NASA Technical Reports Server (NTRS)

    Aguilar, Robert B. (Inventor); Huang, Zhaofeng (Inventor)

    2012-01-01

    A process includes determining a probability of a failure mode of a system being analyzed reaching a failure limit as a function of time to failure limit, determining a probability of a mitigation of the failure mode as a function of a time to failure limit, and quantifying a risk reduction based on the probability of the failure mode reaching the failure limit and the probability of the mitigation.
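
    One simple reading of how the two probabilities could be combined is sketched below; the patent abstract does not specify the exact quantification, so the multiplicative form here is an assumption used purely for illustration.

        def risk_reduction(p_reach_limit, p_mitigation):
            # Assumed combination: residual risk = P(reach limit) * (1 - P(mitigation)),
            # so the credited risk reduction is P(reach limit) * P(mitigation).
            return p_reach_limit * p_mitigation

        # Hypothetical values at a given time-to-failure-limit.
        print(risk_reduction(p_reach_limit=0.2, p_mitigation=0.9))   # 0.18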

  8. OGLE-2008-BLG-355Lb: A massive planet around a late-type star

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koshimoto, N.; Sumi, T.; Fukagawa, M.

    2014-06-20

    We report the discovery of a massive planet, OGLE-2008-BLG-355Lb. The light curve analysis indicates a planet:host mass ratio of q = 0.0118 ± 0.0006 at a separation of 0.877 ± 0.010 Einstein radii. We do not measure a significant microlensing parallax signal and do not have high angular resolution images that could detect the planetary host star. Therefore, we do not have a direct measurement of the host star mass. A Bayesian analysis, assuming that all host stars have equal probability to host a planet with the measured mass ratio, implies a host star mass of M_h = 0.37 (+0.30/-0.17) M_⊙ and a companion of mass M_P = 4.6 (+3.7/-2.2) M_J, at a projected separation of r_⊥ = 1.70 (+0.29/-0.30) AU. The implied distance to the planetary system is D_L = 6.8 ± 1.1 kpc. A planetary system with the properties preferred by the Bayesian analysis may be a challenge to the core accretion model of planet formation, as the core accretion model predicts that massive planets are far more likely to form around more massive host stars. This core accretion model prediction is not consistent with our Bayesian prior of an equal probability of host stars of all masses to host a planet with the measured mass ratio. Thus, if the core accretion model prediction is right, we should expect that follow-up high angular resolution observations will detect a host star with a mass in the upper part of the range allowed by the Bayesian analysis. That is, the host would probably be a K or G dwarf.

  9. Clinical value of natriuretic peptides in chronic kidney disease.

    PubMed

    Santos-Araújo, Carla; Leite-Moreira, Adelino; Pestana, Manuel

    2015-01-01

    According to several lines of evidence, natriuretic peptides (NP) are the main components of a cardiac-renal axis that operates in clinical conditions of decreased cardiac hemodynamic tolerance to regulate sodium homeostasis, blood pressure and vascular function. Even though it is reasonable to assume that NP may exert a relevant role in the adaptive response to renal mass ablation, evidence gathered so far suggests that this contribution is probably complex and dependent on the type and degree of the functional mass loss. In recent years NP have been increasingly used to diagnose, monitor treatment and define the prognosis of several cardiovascular (CV) diseases. However, in many clinical settings, such as chronic kidney disease (CKD), the predictive value of these biomarkers has been questioned. In fact, it is now well established that renal function significantly affects the plasmatic levels of NP and that renal failure is the clinical condition associated with the highest plasmatic levels of these peptides. The complexity of the relation between NP plasmatic levels and CV and renal functions has obvious consequences, as it may limit the predictive value of NP in CV assessment of CKD patients and be a demanding exercise for clinicians involved in the daily management of these patients. This review describes the role of NP in the regulatory response to renal function loss and addresses the main factors involved in interpreting the clinical value of these peptides in the context of significant renal failure. Copyright © 2015 The Authors. Published by Elsevier España, S.L.U. All rights reserved.

  10. Galaxy groups

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brent Tully, R.

    2015-02-01

    Galaxy groups can be characterized by the radius of decoupling from cosmic expansion, the radius of the caustic of second turnaround, and the velocity dispersion of galaxies within this latter radius. These parameters can be a challenge to measure, especially for small groups with few members. In this study, results are gathered pertaining to particularly well-studied groups over four decades in group mass. Scaling relations anticipated from theory are demonstrated and coefficients of the relationships are specified. There is an update of the relationship between light and mass for groups, confirming that groups with mass of a few times 10^12 M_⊙ are the most lit up while groups with more and less mass are darker. It is demonstrated that there is an interesting one-to-one correlation between the number of dwarf satellites in a group and the group mass. There is the suggestion that small variations in the slope of the luminosity function in groups are caused by the degree of depletion of intermediate luminosity systems rather than variations in the number per unit mass of dwarfs. Finally, returning to the characteristic radii of groups, the ratio of first to second turnaround depends on the dark matter and dark energy content of the universe and a crude estimate can be made from the current observations of Ω_matter ∼ 0.15 in a flat topology, with a 68% probability of being less than 0.44.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Hesheng, E-mail: hesheng@umich.edu; Feng, Mary; Jackson, Andrew

    Purpose: To develop a local and global function model in the liver based on regional and organ function measurements to support individualized adaptive radiation therapy (RT). Methods and Materials: A local and global model for liver function was developed to include both functional volume and the effect of functional variation of subunits. Adopting the assumption of parallel architecture in the liver, the global function was composed of a sum of local function probabilities of subunits, varying between 0 and 1. The model was fit to 59 datasets of liver regional and organ function measures from 23 patients obtained before, during, and 1 month after RT. The local function probabilities of subunits were modeled by a sigmoid function in relation to MRI-derived portal venous perfusion values. The global function was fitted to a logarithm of an indocyanine green retention rate at 15 minutes (an overall liver function measure). Cross-validation was performed by leave-m-out tests. The model was further evaluated by fitting to the data divided according to whether the patients had hepatocellular carcinoma (HCC) or not. Results: The liver function model showed that (1) a perfusion value of 68.6 mL/(100 g · min) yielded a local function probability of 0.5; (2) the probability reached 0.9 at a perfusion value of 98 mL/(100 g · min); and (3) at a probability of 0.03 [corresponding perfusion of 38 mL/(100 g · min)] or lower, the contribution to global function was lost. Cross-validations showed that the model parameters were stable. The model fitted to the data from the patients with HCC indicated that the same amount of portal venous perfusion was translated into less local function probability than in the patients with non-HCC tumors. Conclusions: The developed liver function model could provide a means to better assess individual and regional dose-responses of hepatic functions, and provide guidance for individualized treatment planning of RT.
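    As an illustration of the parallel-architecture idea, the sketch below uses a logistic curve calibrated to the two quoted operating points (local probability 0.5 at 68.6 and 0.9 at 98 mL/(100 g · min)) and sums local probabilities into a global score. The paper's sigmoid is not necessarily logistic (it yields about 0.03 at 38 mL/(100 g · min), which a logistic fit through the same two points does not reproduce exactly), and the mapping to the indocyanine green measure is omitted here.

```python
import numpy as np

# Calibrate a logistic local-function probability f(perfusion) so that
# f(68.6) = 0.5 and f(98.0) = 0.9, matching the two operating points quoted above.
P50 = 68.6                          # perfusion giving probability 0.5, mL/(100 g min)
slope = np.log(9.0) / (98.0 - P50)  # logit(0.9) = ln 9 is reached at perfusion 98

def local_function_probability(perfusion):
    """Local function probability of a liver subunit vs. portal venous perfusion."""
    return 1.0 / (1.0 + np.exp(-slope * (np.asarray(perfusion, dtype=float) - P50)))

def global_function(perfusion_map):
    """Parallel-architecture global function: sum of subunit probabilities."""
    return local_function_probability(perfusion_map).sum()

if __name__ == "__main__":
    for v in (38.0, 68.6, 98.0):
        print(f"perfusion {v:5.1f} -> local probability {float(local_function_probability(v)):.2f}")
    voxels = np.random.default_rng(0).normal(70.0, 20.0, size=1000)  # hypothetical perfusion map
    print(f"global function (arbitrary units): {global_function(voxels):.1f}")
```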

  12. The pair and major merger history of galaxies up to z=6 over 3 square degrees

    NASA Astrophysics Data System (ADS)

    Conselice, Christopher; Mundy, Carl; Duncan, Kenneth

    2017-01-01

    A major goal in extragalactic astronomy is understanding how stars and gas are put into galaxies. As such we present the pair fraction and derived major merger and stellar mass assembly histories of galaxies up to z = 6. We do this using new techniques from photometric redshift probability distribution functions, and state of the art deep near-infrared data from the UDS, VIDEO and UltraVISTA COSMOS fields for galaxies at z < 3, and CANDELS data for galaxies at 3 < z < 6. We find that major mergers at high redshift are not the dominant mode of placing stars into galaxies, but that star formation is a more important process by factors of 10 or higher. At z < 3 major mergers will at most double the masses of galaxies, depending on the stellar mass or number density selection method. At z < 1 we find that major mergers deposit more stellar mass into galaxies than star formation, the reverse of the process seen at higher redshifts. However, at z > 1 there must be a very important unknown mode of baryonic acquisition within galaxies that is not associated with major mergers. We further discuss how the merger history stays relatively constant at higher redshifts, and show the comparison of our results to theoretical predictions.

  13. A new application of hierarchical cluster analysis to investigate organic peaks in bulk mass spectra obtained with an Aerodyne Aerosol Mass Spectrometer

    NASA Astrophysics Data System (ADS)

    Middlebrook, A. M.; Marcolli, C.; Canagaratna, M. R.; Worsnop, D. R.; Bahreini, R.; de Gouw, J. A.; Warneke, C.; Goldan, P. D.; Kuster, W. C.; Williams, E. J.; Lerner, B. M.; Roberts, J. M.; Meagher, J. F.; Fehsenfeld, F. C.; Marchewka, M. L.; Bertman, S. B.

    2006-12-01

    We applied hierarchical cluster analysis to an Aerodyne aerosol mass spectrometer (AMS) bulk mass spectral dataset collected aboard the NOAA research vessel Ronald H. Brown during the 2002 New England Air Quality Study off the east coast of the United States. Emphasizing the organic peaks, the cluster analysis yielded a series of categories that are distinguishable with respect to their mass spectra and their occurrence as a function of time. The differences between the categories mainly arise from relative intensity changes rather than from the presence or absence of specific peaks. The most frequent category exhibits a strong signal at m/z 44 and represents oxidized organic matter probably originating from both anthropogenic as well as biogenic sources. On the basis of spectral and trace gas correlations, the second most common category with strong signals at m/z 29, 43, and 44 contains contributions from isoprene oxidation products. The third through the fifth most common categories have peak patterns characteristic of monoterpene oxidation products and were most frequently observed when air masses from monoterpene rich regions were sampled. Taken together, the second through the fifth most common categories represent on average 17% of the total organic mass that likely stems from biogenic sources during the ship's cruise. These numbers have to be viewed as lower limits since the most common category was attributed to anthropogenic sources for this calculation. The cluster analysis was also very effective in identifying a few contaminated mass spectra that were not removed during pre-processing. This study demonstrates that hierarchical clustering is a useful tool to analyze the complex patterns of the organic peaks in bulk aerosol mass spectra from a field study.
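    As a generic illustration of this approach (not the authors' exact preprocessing, distance metric, or category count), the sketch below hierarchically clusters synthetic unit-normalized mass spectra with SciPy and cuts the tree into categories.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(42)

# Synthetic stand-in for AMS organic mass spectra: rows are spectra, columns are m/z channels.
n_spectra, n_mz = 200, 120
base_patterns = rng.random((3, n_mz))                 # three underlying "source" patterns
labels_true = rng.integers(0, 3, size=n_spectra)
spectra = base_patterns[labels_true] + 0.05 * rng.random((n_spectra, n_mz))
spectra /= spectra.sum(axis=1, keepdims=True)         # normalize each spectrum to unit total signal

# Agglomerative (hierarchical) clustering on distances between normalized spectra.
tree = linkage(spectra, method="ward")
categories = fcluster(tree, t=3, criterion="maxclust")  # cut the dendrogram into 3 categories

for c in np.unique(categories):
    print(f"category {c}: {np.sum(categories == c)} spectra")
```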

  14. Cluster Analysis of the Organic Peaks in Bulk Mass Spectra Obtained During the 2002 New England Air Quality Study with an Aerodyne Aerosol Mass Spectrometer

    NASA Astrophysics Data System (ADS)

    Marcolli, C.; Canagaratna, M. R.; Worsnop, D. R.; Bahreini, R.; de Gouw, J. A.; Warneke, C.; Goldan, P. D.; Kuster, W. C.; Williams, E. J.; Lerner, B. M.; Roberts, J. M.; Meagher, J. F.; Fehsenfeld, F. C.; Marchewka, M.; Bertman, S. B.; Middlebrook, A. M.

    2006-12-01

    We applied hierarchical cluster analysis to an Aerodyne aerosol mass spectrometer (AMS) bulk mass spectral dataset collected aboard the NOAA research vessel R. H. Brown during the 2002 New England Air Quality Study off the east coast of the United States. Emphasizing the organic peaks, the cluster analysis yielded a series of categories that are distinguishable with respect to their mass spectra and their occurrence as a function of time. The differences between the categories mainly arise from relative intensity changes rather than from the presence or absence of specific peaks. The most frequent category exhibits a strong signal at m/z 44 and represents oxidized organic matter probably originating from both anthropogenic as well as biogenic sources. On the basis of spectral and trace gas correlations, the second most common category with strong signals at m/z 29, 43, and 44 contains contributions from isoprene oxidation products. The third through the fifth most common categories have peak patterns characteristic of monoterpene oxidation products and were most frequently observed when air masses from monoterpene rich regions were sampled. Taken together, the second through the fifth most common categories represent on average 17% of the total organic mass that likely stems from biogenic sources during the ship's cruise. These numbers have to be viewed as lower limits since the most common category was attributed to anthropogenic sources for this calculation. The cluster analysis was also very effective in identifying a few contaminated mass spectra that were not removed during pre-processing. This study demonstrates that hierarchical clustering is a useful tool to analyze the complex patterns of the organic peaks in bulk aerosol mass spectra from a field study.

  15. Organic Over-the-Horizon Targeting for the 2025 Surface Fleet

    DTIC Science & Technology

    2015-06-01

    …Detection; Phit, Probability of Hit; Pk, Probability of Kill; PLAN, People's Liberation Army Navy; PMEL, Pacific Marine Environmental Laboratory … probability of hit (Phit). 2. Top-Level Functional Flow Block Diagram: With the high-level functions of the project's systems of systems properly…

  16. Incidence Rates of Sexual Harassment in Mass Communications Internship Programs: An Initial Study Comparing Intern, Student, and Professional Rates.

    ERIC Educational Resources Information Center

    Bowen, Michelle; Laurion, Suzanne

    A study documented, using a telephone survey, the incidence rates of sexual harassment of mass communication interns, and compared those rates to student and professional rates. A probability sample of 44 male and 52 female mass communications professionals was generated using several random sampling techniques from among professionals who work in…

  17. VizieR Online Data Catalog: Adiabatic mass loss in binary stars. II. (Ge+, 2015)

    NASA Astrophysics Data System (ADS)

    Ge, H.; Webbink, R. F.; Chen, X.; Han, Z.

    2016-02-01

    In the limit of extremely rapid mass transfer, the response of a donor star in an interacting binary becomes asymptotically one of adiabatic expansion. We survey here adiabatic mass loss from Population I stars (Z=0.02) of mass 0.10M⊙-100M⊙ from the zero-age main sequence to the base of the giant branch, or to central hydrogen exhaustion for lower main sequence stars. The logarithmic derivatives of radius with respect to mass along adiabatic mass-loss sequences translate into critical mass ratios for runaway (dynamical timescale) mass transfer, evaluated here under the assumption of conservative mass transfer. For intermediate- and high-mass stars, dynamical mass transfer is preceded by an extended phase of thermal timescale mass transfer as the star is stripped of most of its envelope mass. The critical mass ratio qad (throughout this paper, we follow the convention of defining the binary mass ratio as q ≡ Mdonor/Maccretor) above which this delayed dynamical instability occurs increases with advancing evolutionary age of the donor star, by ever-increasing factors for more massive donors. Most intermediate- or high-mass binaries with nondegenerate accretors probably evolve into contact before manifesting this instability. As they approach the base of the giant branch, however, and begin developing a convective envelope, qad plummets dramatically among intermediate-mass stars, to values of order unity, and a prompt dynamical instability occurs. Among low-mass stars, the prompt instability prevails throughout main sequence evolution, with qad declining with decreasing mass, and asymptotically approaching qad=2/3, appropriate to a classical isentropic n=3/2 polytrope. Our calculated qad values agree well with the behavior of time-dependent models by Chen & Han (2003MNRAS.341..662C) of intermediate-mass stars initiating mass transfer in the Hertzsprung gap. Application of our results to cataclysmic variables, as systems that must be stable against rapid mass transfer, nicely circumscribes the range in qad as a function of the orbital period in which they are found. These results are intended to advance the verisimilitude of population synthesis models of close binary evolution. (3 data files).
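    As a numerical companion to the stability criterion behind qad (not the adiabatic mass-loss sequences computed in the paper), the sketch below compares the adiabatic radius response ζad = -1/3 of an n = 3/2 polytropic donor with the Roche-lobe response ζL(q) under conservative mass transfer, using Eggleton's (1983) approximation for RL/a; the root lies close to the classical qad ≈ 2/3 quoted above.

```python
import numpy as np
from scipy.optimize import brentq

def roche_lobe_fraction(q):
    """Eggleton (1983) approximation for R_L/a, with q = M_donor/M_accretor."""
    q13 = q ** (1.0 / 3.0)
    return 0.49 * q13**2 / (0.6 * q13**2 + np.log(1.0 + q13))

def zeta_lobe(q, dlnq=1e-6):
    """zeta_L = dln R_L / dln M_donor for conservative mass transfer.
    Uses dln a/dln M_d = 2(q - 1) and dln q/dln M_d = 1 + q."""
    dln_rl_dlnq = (np.log(roche_lobe_fraction(q * (1 + dlnq)))
                   - np.log(roche_lobe_fraction(q * (1 - dlnq)))) / (2 * dlnq)
    return 2.0 * (q - 1.0) + (1.0 + q) * dln_rl_dlnq

zeta_ad = -1.0 / 3.0  # adiabatic response of a fully convective n = 3/2 polytrope donor

# Mass transfer runs away when zeta_ad < zeta_L; the critical mass ratio solves equality.
q_crit = brentq(lambda q: zeta_lobe(q) - zeta_ad, 0.05, 5.0)
print(f"critical mass ratio q_ad ~ {q_crit:.3f} (classical estimate 2/3 ~ 0.667)")
```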

  18. Uncertainty plus prior equals rational bias: an intuitive Bayesian probability weighting function.

    PubMed

    Fennell, John; Baddeley, Roland

    2012-10-01

    Empirical research has shown that when making choices based on probabilistic options, people behave as if they overestimate small probabilities, underestimate large probabilities, and treat positive and negative outcomes differently. These distortions have been modeled using a nonlinear probability weighting function, which is found in several nonexpected utility theories, including rank-dependent models and prospect theory; here, we propose a Bayesian approach to the probability weighting function and, with it, a psychological rationale. In the real world, uncertainty is ubiquitous and, accordingly, the optimal strategy is to combine probability statements with prior information using Bayes' rule. First, we show that any reasonable prior on probabilities leads to two of the observed effects: overweighting of low probabilities and underweighting of high probabilities. We then investigate two plausible kinds of priors: informative priors based on previous experience and uninformative priors of ignorance. Individually, these priors potentially lead to large problems of bias and inefficiency, respectively; however, when combined using Bayesian model comparison methods, both forms of prior can be applied adaptively, gaining the efficiency of empirical priors and the robustness of ignorance priors. We illustrate this for the simple case of generic good and bad options, using Internet blogs to estimate the relevant priors of inference. Given this combined ignorant/informative prior, the Bayesian probability weighting function is not only robust and efficient but also matches all of the major characteristics of the distortions found in empirical research. PsycINFO Database Record (c) 2012 APA, all rights reserved.
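    A toy version of this argument (not the authors' model, which additionally uses Bayesian model comparison between informative and ignorance priors) treats a stated probability as noisy evidence and shrinks it toward a prior mean; the posterior mean already shows the inverse-S pattern of overweighted small probabilities and underweighted large ones. The evidence weight and Beta prior below are assumptions for illustration.

```python
import numpy as np

def bayesian_weight(p, n_evidence=10, prior_a=1.0, prior_b=1.0):
    """Posterior mean of a probability when the stated value p is treated as the frequency
    in n_evidence hypothetical observations, under a Beta(prior_a, prior_b) prior.
    Small p is pulled up toward the prior mean and large p is pulled down (inverse-S shape)."""
    p = np.asarray(p, dtype=float)
    successes = p * n_evidence
    return (successes + prior_a) / (n_evidence + prior_a + prior_b)

if __name__ == "__main__":
    for p in (0.01, 0.1, 0.5, 0.9, 0.99):
        print(f"stated p = {p:4.2f} -> effective weight = {float(bayesian_weight(p)):.3f}")
```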

  19. Practical differences among probabilities, possibilities, and credibilities

    NASA Astrophysics Data System (ADS)

    Grandin, Jean-Francois; Moulin, Caroline

    2002-03-01

    This paper presents some important differences between the theories that allow uncertainty to be managed in data fusion. The main comparative results illustrated in this paper are the following. Incompatibility between decisions obtained from probabilities and from credibilities is highlighted. In the dynamic frame, as remarked in [19] or [17], the belief and plausibility of the Dempster-Shafer model do not bracket the Bayesian probability. This bracketing can, however, be obtained with the Modified Dempster-Shafer approach. It can also be obtained in the Bayesian framework, either by simulation techniques or with a studentization. The uncommitted mass in the Dempster-Shafer approach, i.e. the mass assigned to ignorance, provides a mechanism similar to the reliability in the Bayesian model. Uncommitted mass in Dempster-Shafer theory and reliability in Bayes theory both act like a filter that weakens extracted information and improves robustness to outliers. It is therefore logical to observe, on examples like the one presented by D.M. Buede, faster convergence of a Bayesian method that does not take reliability into account compared with a Dempster-Shafer method that uses uncommitted mass. But if, on Bayesian masses, reliability is taken into account at the same level as the uncommitted mass, e.g. F = 1 − m, we observe an equivalent convergence rate. When the Dempster-Shafer and Bayes operators are informed by uncertainty, faster or slower convergence can be exhibited on non-Bayesian masses. This is due to positive or negative synergy between the information delivered by the sensors, a direct consequence of non-additivity when considering non-Bayesian masses. Ignorance of the prior in Bayesian techniques can be quickly compensated by the information accumulated over time by a set of sensors. All these results are presented on simple examples and developed when necessary.
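    To make the role of the uncommitted mass concrete, the sketch below (a generic illustration, not the paper's examples) combines two basic belief assignments on a two-hypothesis frame with Dempster's rule and contrasts the result with a simple Bayesian product update of the singleton evidence.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for mass functions over the frame {'A', 'B'},
    with 'AB' denoting the uncommitted mass assigned to ignorance."""
    def intersect(x, y):
        common = set(x) & set(y)
        return "".join(sorted(common)) if common else None
    combined = {"A": 0.0, "B": 0.0, "AB": 0.0}
    conflict = 0.0
    for (x, mx), (y, my) in product(m1.items(), m2.items()):
        inter = intersect(x, y)
        if inter is None:
            conflict += mx * my          # mass assigned to the empty set
        else:
            combined[inter] += mx * my
    return {k: v / (1.0 - conflict) for k, v in combined.items()}, conflict

# Two sensors, each leaving 0.2 of uncommitted mass on the full frame 'AB'.
m1 = {"A": 0.7, "B": 0.1, "AB": 0.2}
m2 = {"A": 0.6, "B": 0.2, "AB": 0.2}
fused, conflict = dempster_combine(m1, m2)
print("Dempster-Shafer fusion:", {k: round(v, 3) for k, v in fused.items()}, "conflict:", round(conflict, 3))

# Bayesian product update of the singleton evidence only (no uncommitted mass).
pA, pB = 0.7 * 0.6, 0.1 * 0.2
print("Bayesian fusion: P(A) =", round(pA / (pA + pB), 3), " P(B) =", round(pB / (pA + pB), 3))
```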

  20. Mass Movement and Landform Degradation on the Icy Galilean Satellites: Results of the Galileo Nominal Mission

    NASA Technical Reports Server (NTRS)

    Moore, Jeffrey M.; Asphaug, Erik; Morrison, David; Spencer, John R.; Chapman, Clark R.; Bierhaus, Beau; Sullivan, Robert J.; Chuang, Frank C.; Klemaszewski, James E.; Greeley, Ronald

    1999-01-01

    The Galileo mission has revealed remarkable evidence of mass movement and landform degradation on the icy Galilean satellites of Jupiter. Weakening of surface materials coupled with mass movement reduces the topographic relief of landforms by moving surface materials down-slope. Throughout the Galileo orbiter nominal mission we have studied all known forms of mass movement and landform degradation of the icy Galilean satellites, of which Callisto, by far, displays the most degraded surface. Callisto exhibits discrete mass movements that are larger and apparently more common than seen elsewhere. Most degradation on Ganymede appears consistent with sliding or slumping, impact erosion, and regolith evolution. Sliding or slumping is also observed at very small (100 m) scale on Europa. Sputter ablation, while probably playing some role in the evolution of Ganymede's and Callisto's debris layers, appears to be less important than other processes. Sputter ablation might play a significant role on Europa only if that satellite's surface is significantly older than 10^8 years, far older than crater statistics indicate. Impact erosion and regolith formation on Europa are probably minimal, as implied by the low density of small craters there. Impact erosion and regolith formation may be important on the dark terrains of Ganymede, though some surfaces on this satellite may be modified by sublimation-degradation. While impact erosion and regolith formation are expected to operate with the same vigor on Callisto as on Ganymede, most of the areas examined at high resolution on Callisto have an appearance that implies that some additional process is at work, most likely sublimation-driven landform modification and mass wasting. The extent of surface degradation ascribed to sublimation on the outer two Galilean satellites implies that an ice more volatile than H2O is probably involved.

  1. Dialysate cancer antigen 125 concentration as marker of peritoneal membrane status in patients treated with chronic peritoneal dialysis.

    PubMed

    Krediet, R T

    2001-01-01

    This study reviews publications on the history of cancer antigen 125 (CA125), the background of its use as a marker of mesothelial cell mass, determination in peritoneal effluent, and its practical use in both the follow-up of peritoneal dialysis (PD) patients and as a marker of in vivo biocompatibility of dialysis solutions. Review article. CA125 is a high molecular weight glycoprotein. Previous studies in ascites suggested its release by mesothelial cells. In vitro studies with cultured mesothelial cells showed constitutive production, the majority of which was dependent on mesothelial cell mass. Serum CA125 is normal in PD patients, but its concentration in peritoneal dialysate suggests local release, probably from mesothelial cells. Effluent CA125 can be considered a marker of mesothelial cell mass in stable PD patients, but large amounts are found during peritonitis, due probably to necrosis of mesothelial cells. The majority of studies found no relationship between dialysate CA125 and peritoneal transport parameters. Some cross-sectional studies reported a relationship with duration of PD, but others were unable to confirm this, due probably to the large interindividual variability. Longitudinal follow-up has shown a decrease in dialysate CA125, indicating loss of mesothelial cell mass. Application of theoretically more-biocompatible PD solutions causes an increase in dialysate CA125. Dialysate CA125 is a mesothelial cell mass marker. The concentration of CA125 should be determined after a standardized dwell. A single low value is not informative. A decrease with time on PD suggests loss of mesothelial cell mass. Dialysate CA125 is a marker of in vivo biocompatibility of (new) dialysis solutions. More research is necessary on the best methodology for measuring low concentrations and establishing normal values and a significant change.

  2. Evaluation of the Three Parameter Weibull Distribution Function for Predicting Fracture Probability in Composite Materials

    DTIC Science & Technology

    1978-03-01

    …for the risk of rupture for a unidirectionally laminated composite subjected to pure bending. This equation can be simplified further by use of… Evaluation of the Three Parameter Weibull Distribution Function for Predicting Fracture Probability in Composite Materials. Thesis, AFIT/GAE.
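    For context, the three-parameter Weibull form referred to in the title is commonly written as a fracture probability P_f(sigma) = 1 - exp(-((sigma - sigma_u)/sigma_0)^m) for sigma > sigma_u; the sketch below evaluates it with hypothetical parameter values, not those fitted in the thesis.

```python
import numpy as np

def weibull_fracture_probability(stress, sigma_u=200.0, sigma_0=450.0, m=8.0):
    """Three-parameter Weibull fracture probability.
    sigma_u: threshold stress below which failure probability is zero (MPa),
    sigma_0: scale parameter (MPa), m: Weibull modulus. All values here are hypothetical."""
    stress = np.asarray(stress, dtype=float)
    reduced = np.clip(stress - sigma_u, 0.0, None) / sigma_0
    return 1.0 - np.exp(-reduced**m)

if __name__ == "__main__":
    for s in (150.0, 400.0, 600.0, 700.0, 800.0):
        print(f"stress = {s:5.0f} MPa -> P_f = {float(weibull_fracture_probability(s)):.4f}")
```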

  3. Plate Tectonics on Earth-like Planets: Implications for Habitability

    NASA Astrophysics Data System (ADS)

    Noack, L.; Breuer, D.

    2011-12-01

    Plate tectonics has been suggested to be essential for life (see e.g. [1]) due to the replenishment of nutrients and its role in the stabilization of the atmosphere temperature through the carbon-silicate cycle. Whether plate tectonics can prevail on a planet should depend on several factors, e.g. planetary mass, age of the planet, water content (at the surface and in the interior), surface temperature, mantle rheology, density variations in the mantle due to partial melting, and life itself by promoting erosion processes and perhaps even the production of continental rock [2]. In the present study, we have investigated how planetary mass, internal heating, surface temperature and water content in the mantle would affect the probability of plate tectonics occurring on a planet. We allow the viscosity to be a function of pressure [3], an effect mostly neglected in previous discussions of plate tectonics on exoplanets [4, 5]. With the pressure-dependence of viscosity allowed for, the lower mantle may become too viscous in massive planets for convection to occur. When varying the planetary mass between 0.1 and 10 Earth masses, we find a maximum for the likelihood of plate tectonics to occur for planetary masses around a few Earth masses. For these masses the convective stresses acting at the base of the lithosphere are strongest and may become larger than the lithosphere yield strength. The optimum planetary mass varies slightly depending on the parameter values used (e.g. wet or dry rheology; initial mantle temperature). However, the peak in likelihood of plate tectonics remains roughly in the range of one to five Earth masses for reasonable parameter choices. Internal heating has a similar effect on the occurrence of plate tectonics as the planetary mass, i.e. there is a peak in the probability of plate tectonics depending on the internal heating rate. This result suggests that a planet may evolve as a consequence of radioactive decay into and out of the plate tectonics regime. References [1] Parnell, J. (2004): Plate tectonics, surface mineralogy, and the early evolution of life. Int. J. Astrobio. 3(2): 131-137. [2] Rosing, M.T.; D.K. Bird, N.H. Sleep, W. Glassley, and F. Albar (2006): The rise of continents - An essay on the geologic consequences of photosynthesis. Palaeogeography, Palaeoclimatology, Palaeoecology 232 (2006) 99-11. [3] Stamenkovic, V.; D. Breuer and T. Spohn (2011): Thermal and transport properties of mantle rock at high pressure: Applications to super-Earths. Submitted to Icarus. [4] Valencia, D., R.J. O'Connell and D.D. Sasselov (2007): Inevitability of plate tectonics on super-Earths. Astrophys. J. Let. 670(1): 45-48. [5] O'Neill, C. and A. Lenardic (2007). Geological consequences of super-sized Earths. GRL 34: 1-41.

  4. Decomposition of conditional probability for high-order symbolic Markov chains.

    PubMed

    Melnik, S S; Usatenko, O V

    2017-07-01

    The main goal of this paper is to develop an estimate for the conditional probability function of random stationary ergodic symbolic sequences with elements belonging to a finite alphabet. We elaborate on a decomposition procedure for the conditional probability function of sequences considered to be high-order Markov chains. We represent the conditional probability function as the sum of multilinear memory function monomials of different orders (from zero up to the chain order). This allows us to introduce a family of Markov chain models and to construct artificial sequences via a method of successive iterations, taking into account at each step increasingly high correlations among random elements. At weak correlations, the memory functions are uniquely expressed in terms of the high-order symbolic correlation functions. The proposed method fills the gap between two approaches, namely the likelihood estimation and the additive Markov chains. The obtained results may have applications for sequential approximation of artificial neural network training.
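    As a minimal illustration of the decomposition idea (keeping only the first-order memory terms, not the full multilinear expansion developed in the paper), the sketch below generates a binary high-order Markov sequence whose conditional probability is a sum of memory-function contributions from the previous N symbols.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 5                                     # chain order (memory depth)
p_bar = 0.5                               # unconditional probability of symbol 1
F = 0.4 * 0.5 ** np.arange(1, N + 1)      # hypothetical memory function F(r), decaying with lag r

def conditional_probability(history):
    """Additive first-order form: P(1 | history) = p_bar + sum_r F(r) * (a_{n-r} - p_bar)."""
    contributions = F * (history[::-1] - p_bar)   # history[-1] is the most recent symbol
    return float(np.clip(p_bar + contributions.sum(), 0.0, 1.0))

# Construct the sequence by successive iterations, as described above.
length = 20000
seq = np.zeros(length, dtype=int)
seq[:N] = rng.integers(0, 2, size=N)
for n in range(N, length):
    seq[n] = rng.random() < conditional_probability(seq[n - N:n])

# The empirical lag-1 autocorrelation reflects the strength of F(1).
a = seq - seq.mean()
print("lag-1 autocorrelation:", round(float(np.dot(a[:-1], a[1:]) / np.dot(a, a)), 3))
```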

  5. Decomposition of conditional probability for high-order symbolic Markov chains

    NASA Astrophysics Data System (ADS)

    Melnik, S. S.; Usatenko, O. V.

    2017-07-01

    The main goal of this paper is to develop an estimate for the conditional probability function of random stationary ergodic symbolic sequences with elements belonging to a finite alphabet. We elaborate on a decomposition procedure for the conditional probability function of sequences considered to be high-order Markov chains. We represent the conditional probability function as the sum of multilinear memory function monomials of different orders (from zero up to the chain order). This allows us to introduce a family of Markov chain models and to construct artificial sequences via a method of successive iterations, taking into account at each step increasingly high correlations among random elements. At weak correlations, the memory functions are uniquely expressed in terms of the high-order symbolic correlation functions. The proposed method fills the gap between two approaches, namely the likelihood estimation and the additive Markov chains. The obtained results may have applications for sequential approximation of artificial neural network training.

  6. A simplified model for the assessment of the impact probability of fragments.

    PubMed

    Gubinelli, Gianfilippo; Zanelli, Severino; Cozzani, Valerio

    2004-12-31

    A model was developed for the assessment of fragment impact probability on a target vessel, following the collapse and fragmentation of a primary vessel due to internal pressure. The model provides the probability of impact of a fragment with defined shape, mass and initial velocity on a target of a known shape and at a given position with respect to the source point. The model is based on the ballistic analysis of the fragment trajectory and on the determination of impact probabilities by the analysis of initial direction of fragment flight. The model was validated using available literature data.
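    The published model is analytic; as a rough cross-check of the same quantity, the sketch below estimates an impact probability by Monte Carlo, sampling initial flight directions for a fragment of fixed launch speed (drag neglected, flat ground, circular target footprint; all parameter values hypothetical).

```python
import numpy as np

rng = np.random.default_rng(7)
g = 9.81                                  # m/s^2
v0 = 80.0                                 # fragment launch speed, m/s (hypothetical)
target_center = np.array([120.0, 0.0])    # target vessel position on the ground, m
target_radius = 8.0                       # radius of the target footprint, m

n = 200_000
# Isotropic launch directions over the upper hemisphere: sin(elevation) is uniform.
azimuth = rng.uniform(0.0, 2.0 * np.pi, n)
elevation = np.arcsin(rng.uniform(0.0, 1.0, n))

# Ballistic range for launch and landing at the same height (no drag).
flight_range = v0**2 * np.sin(2.0 * elevation) / g
x = flight_range * np.cos(azimuth)
y = flight_range * np.sin(azimuth)

hits = (x - target_center[0])**2 + (y - target_center[1])**2 <= target_radius**2
print(f"estimated impact probability: {hits.mean():.4f}")
```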

  7. OPTICAL PHOTOMETRIC AND POLARIMETRIC INVESTIGATION OF NGC 1931

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pandey, A. K.; Eswaraiah, C.; Sharma, Saurabh

    We present optical photometric and polarimetric observations of stars toward NGC 1931 with the aim of deriving cluster parameters such as distance, reddening, age, and luminosity/mass function as well as understanding dust properties and star formation in the region. The distance to the cluster is found to be 2.3 ± 0.3 kpc and the reddening E(B - V) in the region is found to be variable. The stellar density contours reveal two clusters in the region. The observations suggest a differing reddening law within the cluster region. Polarization efficiency of the dust grains toward the direction of the cluster is found to be less than that for the general diffuse interstellar medium (ISM). The slope of the mass function (−0.98 ± 0.22) in the southern region in the mass range of 0.8 < M/M_⊙ < 9.8 is found to be shallower in comparison to that in the northern region (−1.26 ± 0.23), which is comparable to the Salpeter value (−1.35). The K-band luminosity function (KLF) of the region is found to be comparable to the average value of the slope (≈0.4) for young clusters obtained by Lada and Lada; however, the slope of the KLF is steeper in the northern region as compared to the southern region. The region is probably ionized by two B2 main-sequence-type stars. The mean age of the young stellar objects (YSOs) is found to be 2 ± 1 Myr, which suggests that the identified YSOs could be younger than the ionizing sources of the region. The morphology of the region, the distribution and ages of the YSOs, and ionizing sources indicate a triggered star formation in the region.

  8. High throughput nonparametric probability density estimation.

    PubMed

    Farmer, Jenny; Jacobs, Donald

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under- and over-fitting the data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.
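    A much-simplified illustration of scoring a trial distribution with order statistics: map the sample through the trial CDF, compare the sorted values with the expected uniform order statistics, and penalize atypical fluctuations. This is only a sketch of the general idea, not the sample-size-invariant scoring function constructed in the paper.

```python
import numpy as np
from scipy import stats

def order_statistic_score(sample, trial_cdf):
    """Score a trial CDF: transform the sample to [0, 1] and measure standardized deviations
    of the sorted values from the means of the uniform order statistics Beta(k, n+1-k)."""
    u = np.sort(trial_cdf(np.asarray(sample)))
    n = u.size
    k = np.arange(1, n + 1)
    mean = k / (n + 1.0)
    var = k * (n + 1.0 - k) / ((n + 1.0) ** 2 * (n + 2.0))
    return float(np.mean((u - mean) ** 2 / var))   # near 1 for a well-matched distribution

rng = np.random.default_rng(3)
data = rng.normal(loc=2.0, scale=1.5, size=500)

good = lambda x: stats.norm.cdf(x, loc=2.0, scale=1.5)   # correct trial distribution
bad = lambda x: stats.norm.cdf(x, loc=0.0, scale=1.0)    # mismatched trial distribution
print(f"score (correct model):    {order_statistic_score(data, good):.2f}")
print(f"score (mismatched model): {order_statistic_score(data, bad):.2f}")
```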

  9. High throughput nonparametric probability density estimation

    PubMed Central

    Farmer, Jenny

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under- and over-fitting the data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference. PMID:29750803

  10. Evolution of axis ratios from phase space dynamics of triaxial collapse

    NASA Astrophysics Data System (ADS)

    Nadkarni-Ghosh, Sharvari; Arya, Bhaskar

    2018-04-01

    We investigate the evolution of axis ratios of triaxial haloes using the phase space description of triaxial collapse. In this formulation, the evolution of the triaxial ellipsoid is described in terms of the dynamics of eigenvalues of three important tensors: the Hessian of the gravitational potential, the tensor of velocity derivatives, and the deformation tensor. The eigenvalues of the deformation tensor are directly related to the parameters that describe triaxiality, namely, the minor-to-major and intermediate-to-major axes ratios (s and q) and the triaxiality parameter T. Using the phase space equations, we evolve the eigenvalues and examine the evolution of the probability distribution function (PDF) of the axes ratios as a function of mass scale and redshift for Gaussian initial conditions. We find that the ellipticity and prolateness increase with decreasing mass scale and decreasing redshift. These trends agree with previous analytic studies but differ from numerical simulations. However, the PDF of the scaled parameter q̃ = (q − s)/(1 − s) follows a universal distribution over two decades in mass range and redshifts which is in qualitative agreement with the universality for conditional PDF reported in simulations. We further show using the phase space dynamics that, in fact, q̃ is a phase space invariant and is conserved individually for each halo. These results demonstrate that the phase space analysis is a useful tool that provides a different perspective on the evolution of perturbations and can be applied to more sophisticated models in the future.

  11. Study of Heavy-ion Induced Fission for Heavy Element Synthesis

    NASA Astrophysics Data System (ADS)

    Nishio, K.; Ikezoe, H.; Hofmann, S.; Ackermann, D.; Aritomo, Y.; Comas, V. F.; Düllmann, Ch. E.; Heinz, S.; Heredia, J. A.; Heßberger, F. P.; Hirose, K.; Khuyagbaatar, J.; Kindler, B.; Kojouharov, I.; Lommel, B.; Makii, M.; Mann, R.; Mitsuoka, S.; Nishinaka, I.; Ohtsuki, T.; Saro, S.; Schädel, M.; Popeko, A. G.; Türler, A.; Wakabayashi, Y.; Watanabe, Y.; Yakushev, A.; Yeremin, A.

    2014-05-01

    Fission fragment mass distributions were measured in heavy-ion induced fission of 238U. The mass distributions changed drastically with incident energy. The results are explained by a change of the ratio between fusion and quasifission with nuclear orientation. A calculation based on a fluctuation dissipation model reproduced the mass distributions and their incident energy dependence. Fusion probability was determined in the analysis. Evaporation residue cross sections were calculated with a statistical model for the reactions of 30Si+238U and 34S+238U using the obtained fusion probability in the entrance channel. The results agree with the measured cross sections of 263,264Sg and 267,268Hs, produced by 30Si+238U and 34S+238U, respectively. It is also suggested that sub-barrier energies can be used for heavy element synthesis.

  12. Quantile Functions, Convergence in Quantile, and Extreme Value Distribution Theory.

    DTIC Science & Technology

    1980-11-01

    Gnanadesikan (1968). Quantile functions are advocated by Parzen (1979) as providing an approach to probability-based data analysis. Quantile functions are… Gnanadesikan, R. (1968). Probability Plotting Methods for the Analysis of Data, Biometrika, 55, 1-17.

  13. Geotechnical Aspects of Rock Erosion in Emergency Spillway Channels. Report 5 Summary of Results, Conclusions and Recommendations

    DTIC Science & Technology

    1990-09-01

    …channel. Erosion susceptibility, similar to spillway evaluation, must emphasize rock-mass rating or classification systems (e.g., rippability) which, when… recommends site-specific "proof of concept" testing of an Erosion Probability Index (EPI) based on rock-mass rippability rating and lithostratigraphic… and rock-mass parameters that provide key input parameters to Weaver's (1975) Rippability Rating (RR) scheme (or Bieniawski's (1974) Rock Mass Rating…

  14. The evolutionary ecology of decorating behaviour

    PubMed Central

    Ruxton, Graeme D.; Stevens, Martin

    2015-01-01

    Many animals decorate themselves through the accumulation of environmental material on their exterior. Decoration has been studied across a range of different taxa, but there are substantial limits to current understanding. Decoration in non-humans appears to function predominantly in defence against predators and parasites, although an adaptive function is often assumed rather than comprehensively demonstrated. It seems predominantly an aquatic phenomenon—presumably because buoyancy helps reduce energetic costs associated with carrying the decorative material. In terrestrial examples, decorating is relatively common in the larval stages of insects. Insects are small and thus able to generate the power to carry a greater mass of material relative to their own body weight. In adult forms, the need to be lightweight for flight probably rules out decoration. We emphasize that both benefits and costs to decoration are rarely quantified, and that costs should include those associated with collecting as well as carrying the material. PMID:26041868

  15. What can the occult do for you?

    NASA Astrophysics Data System (ADS)

    Holwerda, B. W.; Keel, W. C.

    2017-03-01

    Interstellar dust is still a dominant uncertainty in Astronomy, limiting precision in e.g., cosmological distance estimates and models of how light is re-processed within a galaxy. When a foreground galaxy serendipitously overlaps a more distant one, the latter backlights the dusty structures in the nearer foreground galaxy. Such an overlapping or occulting galaxy pair can be used to measure the distribution of dust in the closest galaxy with great accuracy. The STARSMOG program uses Hubble to map the distribution of dust in foreground galaxies in fine (<100 pc) detail. Integral Field Unit (IFU) observations will map the effective extinction curve, disentangling the role of fine-scale geometry and grain composition on the path of light through a galaxy. The overlapping galaxy technique promises to deliver a clear understanding of the dust in galaxies: geometry, a probability function of dimming as a function of galaxy mass and radius, and its dependence on wavelength.

  16. Modeling of turbulent supersonic H2-air combustion with a multivariate beta PDF

    NASA Technical Reports Server (NTRS)

    Baurle, R. A.; Hassan, H. A.

    1993-01-01

    Recent calculations of turbulent supersonic reacting shear flows using an assumed multivariate beta PDF (probability density function) resulted in reduced production rates and a delay in the onset of combustion. This result is not consistent with available measurements. The present research explores two possible reasons for this behavior: use of PDF's that do not yield Favre averaged quantities, and the gradient diffusion assumption. A new multivariate beta PDF involving species densities is introduced which makes it possible to compute Favre averaged mass fractions. However, using this PDF did not improve comparisons with experiment. A countergradient diffusion model is then introduced. Preliminary calculations suggest this to be the cause of the discrepancy.
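    For reference, the multivariate beta distribution over a set of mass fractions constrained to sum to one is the Dirichlet distribution; the sketch below draws composition samples consistent with prescribed mean mass fractions. This is a generic assumed-PDF illustration, not the density-weighted formulation introduced in the paper, and the mixture and parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)

# Mean mass fractions of a hypothetical H2 / O2 / H2O / N2 mixture (must sum to 1).
mean_Y = np.array([0.02, 0.20, 0.08, 0.70])
concentration = 50.0                          # larger value -> smaller composition variance
alpha = concentration * mean_Y                # Dirichlet (multivariate beta) parameters

samples = rng.dirichlet(alpha, size=100_000)  # each row sums to 1: a valid composition
print("sample mean mass fractions:", np.round(samples.mean(axis=0), 3))
print("sample std of mass fractions:", np.round(samples.std(axis=0), 4))
```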

  17. Dynamics of non-stationary processes that follow the maximum of the Rényi entropy principle.

    PubMed

    Shalymov, Dmitry S; Fradkov, Alexander L

    2016-01-01

    We propose dynamics equations which describe the behaviour of non-stationary processes that follow the maximum Rényi entropy principle. The equations are derived on the basis of the speed-gradient principle originating in control theory. The maximum of the Rényi entropy principle is analysed for discrete and continuous cases, and both a discrete random variable and probability density function (PDF) are used. We consider mass conservation and energy conservation constraints and demonstrate the uniqueness of the limit distribution and asymptotic convergence of the PDF for both cases. The coincidence of the limit distribution of the proposed equations with the Rényi distribution is examined.
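    As a simple numerical companion to the discrete case (an assumed setup, not the speed-gradient dynamics derived in the paper), the sketch below finds the discrete distribution that maximizes the Rényi entropy of order q subject to normalization and a fixed mean energy.

```python
import numpy as np
from scipy.optimize import minimize

q = 2.0                                   # Renyi order (q -> 1 recovers the Shannon case)
E = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # energies of five discrete states (hypothetical)
E_mean = 1.5                              # prescribed mean energy

def neg_renyi_entropy(p):
    """Negative of the Renyi entropy H_q = ln(sum p_i^q) / (1 - q)."""
    return -np.log(np.sum(p**q)) / (1.0 - q)

constraints = (
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},        # mass conservation
    {"type": "eq", "fun": lambda p: np.dot(p, E) - E_mean},  # energy conservation
)
p0 = np.full(E.size, 1.0 / E.size)
result = minimize(neg_renyi_entropy, p0, method="SLSQP",
                  bounds=[(1e-9, 1.0)] * E.size, constraints=constraints)

print("maximum-Renyi-entropy distribution:", np.round(result.x, 4))
print("Renyi entropy:", round(-result.fun, 4))
```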

  18. Dynamics of non-stationary processes that follow the maximum of the Rényi entropy principle

    PubMed Central

    2016-01-01

    We propose dynamics equations which describe the behaviour of non-stationary processes that follow the maximum Rényi entropy principle. The equations are derived on the basis of the speed-gradient principle originating in control theory. The maximum of the Rényi entropy principle is analysed for discrete and continuous cases, and both a discrete random variable and probability density function (PDF) are used. We consider mass conservation and energy conservation constraints and demonstrate the uniqueness of the limit distribution and asymptotic convergence of the PDF for both cases. The coincidence of the limit distribution of the proposed equations with the Rényi distribution is examined. PMID:26997886

  19. The use of gravimetric data from GRACE mission in the understanding of polar motion variations

    NASA Astrophysics Data System (ADS)

    Seoane, L.; Nastula, J.; Bizouard, C.; Gambis, D.

    2009-08-01

    Tesseral coefficients C21 and S21 derived from Gravity Recovery and Climate Experiment (GRACE) observations allow computation of the mass term of the polar-motion excitation function. This independent estimation can improve the geophysical models and, in addition, help identify unmodelled phenomena. In this paper, we intend to validate the polar motion excitation derived from the latest GRACE release (GRACE Release 4) computed by different institutes: GeoForschungsZentrum (GFZ), Potsdam, Germany; Center for Space Research (CSR), Austin, USA; Jet Propulsion Laboratory (JPL), Pasadena, USA, and the Groupe de Recherche en Géodésie Spatiale (GRGS), Toulouse, France. For this purpose, we compare these excitation functions first to the mass term obtained from observed Earth's rotation variations free of the motion term and, second, to the mass term estimated from geophysical fluid models. We confirm the large improvement of the CSR solution, and we show that the GRGS estimate is also well correlated with the geodetic observations. Significant discrepancies exist between the solutions of each centre. The source of these differences is probably related to the data processing strategy. We also consider residuals computed after removing the geophysical models or the gravimetric solutions from the geodetic mass term. We show that the residual excitation based on models is smoother than the gravimetric data, which are still noisy. Still, they are comparable for the χ2 component. It appears that χ2 residual signals using GFZ and JPL data have less variability. Finally, for assessing the impact of the choice of geophysical fluid models on our results, we checked two different oceanic excitation series. We show significant differences in the residual correlations, especially for χ1, which is more sensitive to the oceanic signals.

  20. Consistent Simulation Framework for Efficient Mass Discharge and Source Depletion Time Predictions of DNAPL Contaminants in Heterogeneous Aquifers Under Uncertainty

    NASA Astrophysics Data System (ADS)

    Nowak, W.; Koch, J.

    2014-12-01

    Predicting DNAPL fate and transport in heterogeneous aquifers is challenging and subject to an uncertainty that needs to be quantified. Models for this task need to be equipped with an accurate source zone description, i.e., the distribution of mass of all partitioning phases (DNAPL, water, and soil) in all possible states ((im)mobile, dissolved, and sorbed), mass-transfer algorithms, and the simulation of transport processes in the groundwater. Such detailed models tend to be computationally cumbersome when used for uncertainty quantification. Therefore, a careful selection of the relevant model states, processes, and scales is both critical and indispensable. We investigate two questions: what is a meaningful level of model complexity, and how can an efficient model framework be obtained that is still physically and statistically consistent? In our proposed model, aquifer parameters and the contaminant source architecture are conceptualized jointly as random space functions. The governing processes are simulated in a three-dimensional, highly-resolved, stochastic, and coupled model that can predict probability density functions of mass discharge and source depletion times. We apply a stochastic percolation approach as an emulator to simulate the contaminant source formation, a random walk particle tracking method to simulate DNAPL dissolution and solute transport within the aqueous phase, and a quasi-steady-state approach to solve for DNAPL depletion times. Using this novel model framework, we test whether and to which degree the desired model predictions are sensitive to simplifications often found in the literature. In this way we identify aquifer heterogeneity, groundwater flow irregularity, uncertain and physically based contaminant source zones, and their mutual interlinkages as indispensable components of a sound model framework.
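    One building block mentioned above, random walk particle tracking for solute transport, can be sketched as follows (one-dimensional advection plus dispersion with hypothetical parameters; the framework described in the abstract couples this to a three-dimensional heterogeneous flow field and DNAPL dissolution).

```python
import numpy as np

rng = np.random.default_rng(11)

n_particles = 50_000
v = 0.5          # mean groundwater velocity, m/day (hypothetical)
D = 0.05         # longitudinal dispersion coefficient, m^2/day (hypothetical)
dt = 0.1         # time step, days
n_steps = 1000   # total simulated time: 100 days

# 1-D random walk particle tracking: deterministic advective step plus Gaussian dispersive step.
x = np.zeros(n_particles)
for _ in range(n_steps):
    x += v * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(n_particles)

# The particle cloud approximates the solute plume of the advection-dispersion equation.
print(f"mean displacement: {x.mean():.2f} m (expected {v * dt * n_steps:.2f} m)")
print(f"plume standard deviation: {x.std():.2f} m (expected {np.sqrt(2 * D * dt * n_steps):.2f} m)")
```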

  1. Microwaves in chemistry: Another way of heating reaction mixtures

    NASA Astrophysics Data System (ADS)

    Berlan, J.

    1995-04-01

    The question of a possible "microwave activation" of chemical reactions is discussed. In fact two cases should be distinguished: homogeneous or heterogeneous reaction mixtures. In homogeneous mixtures there are no (or very low) rate enhancements compared to conventional heating, but some influence on chemoselectivity has been observed. These effects derive from the fast, bulk (in-mass) heating produced by microwaves and probably, especially under reflux, from different boiling rates and/or overheating. With heterogeneous mixtures, non-conventional effects probably derive from bulk heating and selective overheating. This is illustrated with several reactions: Diels-Alder, naphthalene sulphonation, preparation of cyanuric acid, hydrolysis of nitriles, transposition reaction on solid support.

  2. Seven Golden Rules for heuristic filtering of molecular formulas obtained by accurate mass spectrometry

    PubMed Central

    Kind, Tobias; Fiehn, Oliver

    2007-01-01

    Background Structure elucidation of unknown small molecules by mass spectrometry is a challenge despite advances in instrumentation. The first crucial step is to obtain correct elemental compositions. In order to automatically constrain the thousands of possible candidate structures, rules need to be developed to select the most likely and chemically correct molecular formulas. Results An algorithm for filtering molecular formulas is derived from seven heuristic rules: (1) restrictions for the number of elements, (2) LEWIS and SENIOR chemical rules, (3) isotopic patterns, (4) hydrogen/carbon ratios, (5) element ratio of nitrogen, oxygen, phosphorus, and sulphur versus carbon, (6) element ratio probabilities and (7) presence of trimethylsilylated compounds. Formulas are ranked according to their isotopic patterns and subsequently constrained by presence in public chemical databases. The seven rules were developed on 68,237 existing molecular formulas and were validated in four experiments. First, 432,968 formulas covering five million PubChem database entries were checked for consistency. Only 0.6% of these compounds did not pass all rules. Next, the rules were shown to effectively reduce the complete set of all eight billion theoretically possible C, H, N, S, O, P formulas up to 2000 Da to only 623 million most probable elemental compositions. Third, 6,000 pharmaceutical, toxic and natural compounds were selected from DrugBank, TSCA and DNP databases. The correct formulas were retrieved as top hit at 80–99% probability when assuming data acquisition with complete resolution of unique compounds and 5% absolute isotope ratio deviation and 3 ppm mass accuracy. Last, some exemplary compounds were analyzed by Fourier transform ion cyclotron resonance mass spectrometry and by gas chromatography-time of flight mass spectrometry. In each case, the correct formula was ranked as top hit when combining the seven rules with database queries. Conclusion The seven rules enable an automatic exclusion of molecular formulas which are either wrong or which contain an unlikely high or low number of elements. The correct molecular formula is assigned with a probability of 98% if the formula exists in a compound database. For truly novel compounds that are not present in databases, the correct formula is found in the first three hits with a probability of 65–81%. Corresponding software and supplemental data are available for download from the authors' website. PMID:17389044
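    A minimal sketch of two of the filters (an element-count restriction and the hydrogen/carbon ratio check) applied to candidate formulas is shown below; the thresholds are illustrative placeholders, not the exact ranges tabulated in the paper.

```python
# Candidate elemental compositions as element-count dictionaries.
candidates = [
    {"C": 6, "H": 12, "O": 6},                      # glucose-like
    {"C": 1, "H": 26, "O": 1},                      # chemically implausible H/C ratio
    {"C": 10, "H": 16, "N": 5, "O": 13, "P": 3},    # ATP-like
    {"C": 2, "H": 4, "O": 1, "S": 9},               # unlikely sulphur count
]

# Illustrative element-count caps and H/C window (placeholders, not the published ranges).
MAX_COUNTS = {"C": 80, "H": 130, "N": 20, "O": 30, "P": 8, "S": 8}
HC_MIN, HC_MAX = 0.2, 3.1

def passes_heuristics(formula):
    """Rule-1-style element-count restriction plus rule-4-style hydrogen/carbon ratio check."""
    if any(count > MAX_COUNTS.get(element, 0) for element, count in formula.items()):
        return False
    c, h = formula.get("C", 0), formula.get("H", 0)
    if c == 0:
        return False
    return HC_MIN <= h / c <= HC_MAX

for formula in candidates:
    print(formula, "->", "keep" if passes_heuristics(formula) else "reject")
```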

  3. A -100 kV Power Supply for Ion Acceleration in Space-based Mass Spectrometers

    NASA Astrophysics Data System (ADS)

    Gilbert, J. A.; Zurbuchen, T.; Battel, S.

    2017-12-01

    High voltage power supplies are used in many space-based time-of-flight (TOF) mass spectrometer designs to accelerate incoming ions and increase the probability of their measurement and proper identification. Ions are accelerated in proportion to their charge state, so singly charged ions such as pickup ions are accelerated less than their multiple-charge state solar wind counterparts. This lack of acceleration results in pickup ion measurements with lower resolution and without determinations of absolute energy. Acceleration reduces the effects of angular scattering and energy straggling when ions pass through thin membranes such as carbon foils, and it brings ion energies above the detection threshold of traditional solid state detectors. We have developed a power supply capable of operating at -100 kV for ion acceleration while also delivering up to 10 W of power for the operation of a floating TOF system. We also show results of benchtop calibration and ion beam tests to demonstrate the functionality and success of this approach.

  4. The distribution of stars most likely to harbor intelligent life.

    PubMed

    Whitmire, Daniel P; Matese, John J

    2009-09-01

    Simple heuristic models and recent numerical simulations show that the probability of habitable planet formation increases with stellar mass. We combine those results with the distribution of main-sequence stellar masses to obtain the distribution of stars most likely to possess habitable planets as a function of stellar lifetime. We then impose the self-selection condition that intelligent observers can only find themselves around a star with a lifetime greater than the time required for that observer to have evolved, T(i). This allows us to obtain the stellar timescale number distribution for a given value of T(i). Our results show that for habitable planets with a civilization that evolved at time T(i) = 4.5 Gyr the median stellar lifetime is 13 Gyr, corresponding approximately to a stellar type of G5, with two-thirds of the stars having lifetimes between 7 and 30 Gyr, corresponding approximately to spectral types G0-K5. For other values of T(i) the median stellar lifetime changes by less than 50%.

  5. Scaffolds based on hyaluronan and carbon nanotubes gels.

    PubMed

    Arnal-Pastor, M; Tallà Ferrer, C; Herrero Herrero, M; Martínez-Gómez Aldaraví, A; Monleón Pradas, M; Vallés-Lluch, A

    2016-10-01

    Physico-chemical and mechanical properties of hyaluronic acid/carbon nanotubes nanohybrids have been correlated with the proportion of inorganic nanophase and the preparation procedure. The mass fraction of -COOH functionalized carbon nanotubes was varied from 0 to 0.05. Hyaluronic acid was crosslinked with divinyl sulfone to improve its stability in aqueous media and allow its handling as a hydrogel. A series of samples was dried by lyophilization to obtain porous scaffolds whereas another was room-dried allowing the collapse of the hybrid structures. The porosity of the former, together with the tighter packing of hyaluronic acid chains, results in a lower water absorption and lower mechanical properties in the swollen state, because of the easier water diffusion. The presence of even a small amount of carbon nanotubes (mass fraction of 0.05) limits even more the swelling of the matrix, owing probably to hybrid interactions. These nanohybrids do not seem to degrade significantly during 14 days in water or enzymatic medium. © The Author(s) 2016.

  6. Hypervelocity impact effects on solar cells

    NASA Technical Reports Server (NTRS)

    Rose, M. Frank

    1993-01-01

    One of the space hazards of concern is the problem of natural matter and space debris impacting spacecraft. This phenomenon has been studied since the early sixties and a methodology has been established to determine the relative abundance of meteoroids as a function of mass. As the mass decreases, the probability of suffering collisions increases, resulting in a constant bombardment from particles in the sub-micron range. The composition of this 'cosmic dust' is primarily Fe, Ni, Al, Mg, Na, Ca, Cr, H, O, and Mn. In addition to mechanical damage, impact velocities greater than 5 km/sec can produce shock induced ionization effects with resultant surface charging and complex chemical interactions. The upper limit of the velocity distribution for these particles is on the order of 70 km/sec. The purpose of this work was to subject samples from solar power arrays to debris flux typical of what would be encountered in space, and measure the degradation of the panels after impact.

  7. End point of a first-order phase transition in many-flavor lattice QCD at finite temperature and density.

    PubMed

    Ejiri, Shinji; Yamada, Norikazu

    2013-04-26

    Toward a feasibility study of electroweak baryogenesis in a realistic technicolor scenario, we investigate the phase structure of (2+N(f))-flavor QCD, where the mass of two flavors is fixed to a small value and the others are heavy. For baryogenesis, the appearance of a first-order phase transition at finite temperature is a necessary condition. Using a set of configurations of two-flavor lattice QCD and applying the reweighting method, the effective potential defined by the probability distribution function of the plaquette is calculated in the presence of many additional heavy flavors. Through the shape of the effective potential, we determine the critical mass of the heavy flavors separating the first-order and crossover regions and find it to become larger with N(f). We moreover study the critical line at finite density, and the first-order region is found to become wider as the chemical potential increases. Possible applications to real (2+1)-flavor QCD are discussed.

  8. Stars with relativistic speeds in the Hills scenario

    NASA Astrophysics Data System (ADS)

    Dremova, G. N.; Dremov, V. V.; Tutukov, A. V.

    2017-07-01

    The dynamical capture of a binary system consisting of a supermassive black hole (SMBH) and an ordinary star in the gravitational field of a central (more massive) SMBH is considered in the three-body problem in the framework of a modified Hills scenario. The results of numerical simulations predict the existence of objects whose spatial speeds are comparable to the speed of light. The conditions for and constraints imposed on the ejection speeds realized in a classical scenario and the modified Hills scenario are analyzed. The star is modeled using an N-body approach, making it possible to treat it as a structured object, enabling estimation of the probability that the object survives when it is ejected with relativistic speed as a function of the mass of the star, the masses of both SMBHs, and the pericenter distance. It is possible that the modern kinematic classification for stars with anomalously high spatial velocities will be augmented with a new class—stars with relativistic speeds.

  9. Geophysical searches for three-neutrino oscillations

    NASA Technical Reports Server (NTRS)

    Cudell, J. R.; Gaisser, T. K.

    1985-01-01

    The possibilities of using cosmic-ray-induced neutrinos to detect oscillations in deep underground experiments were considered. The matter effects are non-negligible in the two-neutrino case: they reduce a mixing angle of 45 deg to 7.5 deg for 1 GeV neutrinos with a squared mass difference of 10^-4 eV^2 going through the Earth, making the oscillation totally unobservable. They produce a natural oscillation length of about 6000 km in the case of massless neutrinos. Adding a third neutrino flavor considerably modifies the oscillation pattern and suggests that mass scales down to 5 x 10^-5 eV^2 could be observed even when matter effects and the electron contribution to the incoming flux are taken into account. The effect of matter on the probability curves for different cases is shown by varying the masses and the mixing matrix. The ratio of upward-going to downward-going ν + ν̄ flux as a function of the zenith angle at Cleveland, neglecting angular smearing and energy-threshold effects, is predicted.

  10. THE RELATION BETWEEN GALAXY MORPHOLOGY AND ENVIRONMENT IN THE LOCAL UNIVERSE: AN RC3-SDSS PICTURE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilman, David J.; Erwin, Peter

    2012-02-20

    We present results of an analysis of the local (z ≈ 0) morphology-environment relation for 911 bright (M_B < -19) galaxies, based on matching classical RC3 morphologies with the Sloan Digital Sky Survey based group catalog of Yang et al., which includes halo mass estimates. This allows us to study how the relative fractions of spirals, lenticulars, and ellipticals depend on halo mass over a range of 10^11.7-10^14.8 h^-1 M_Sun, from isolated single-galaxy halos to massive groups and low-mass clusters. We pay particular attention to how morphology relates to central versus satellite status (where 'central' galaxies are the most massive within their halo). The fraction of galaxies which are elliptical is a strong function of stellar mass; it is also a strong function of halo mass, but only for central galaxies. We interpret this as evidence for a scenario where elliptical galaxies are always formed, probably via mergers, as central galaxies within their halos, with satellite ellipticals being previously central galaxies accreted onto a larger halo. The overall fraction of galaxies which are S0 increases strongly with halo mass, from ≈10% to ≈70%. Here, too, we find striking differences between the central and satellite populations. 20% ± 2% of central galaxies with stellar masses M_* > 10^10.5 M_Sun are S0 regardless of halo mass, but satellite S0 galaxies are only found in massive (>10^13 h^-1 M_Sun) halos, where they are 69% ± 4% of the M_* > 10^10.5 M_Sun satellite population. This suggests two channels for forming S0 galaxies: one which operates for central galaxies and another which transforms lower-mass (M_* ≲ 10^11 M_Sun) accreted spirals into satellite S0 galaxies in massive halos. Analysis of finer morphological structure (bars and rings in disk galaxies) shows some trends with stellar mass, but none with halo mass; this is consistent with other recent studies which indicate that bars are not strongly influenced by galaxy environment. Radio sources in high-mass central galaxies are common, similarly so for elliptical and S0 galaxies, with a frequency that increases with the halo mass. Emission-line active galactic nuclei (mostly LINERs) are more common in S0s, but show no strong trends with environment.

  11. The low-mass star and disk populations in NGC 6611

    NASA Astrophysics Data System (ADS)

    Oliveira, Joana

    2005-07-01

    The aim of our observational program is to find empirical answers to two major questions. Do regions of high-mass star formation also produce lots of solar- and low-mass stars, i.e. is the low-mass IMF unaffected by high-mass siblings? Can low-mass stars in hostile environments retain circumstellar disks? We present results of our survey of NGC 6611, a massive cluster with an age of approximately 2 Myr which is currently ionizing the Eagle nebula. This cluster contains a dozen O-stars that emit 10 times more ionizing radiation than the Trapezium, providing a challenging environment for their lower-mass siblings. Our dataset consists of wide-field optical and near-infrared imaging, intermediate-resolution spectroscopy (ESO-VLT) and deep L-band photometry. We have photometrically selected solar- and low-mass stars, placed them on the HR diagram and determined the IMF over an area sufficient to deal with mass segregation. We show that the IMF in NGC 6611 is similar to that of the Orion Nebula Cluster down to 0.5 Msun. Using K-L indices we search for colour excesses that betray the presence of circumstellar material and study what fraction of solar-mass stars still possess disks as a function of age and proximity to the massive stars. By comparing the disk frequency in NGC 6611 with similarly aged but quieter regions, we find no evidence that the harsher environment of NGC 6611 significantly hastens disk dissipation. Apparently the massive stars in NGC 6611 have no global effect on the probability of low-mass star formation or disk retention. We have an approved HST program that will allow us to investigate the very low-mass and brown dwarf populations in NGC 6611, and we complement our IR imaging with Spitzer/IRAC data, extending the area of our ground-based survey.

  12. Evidence for methane and ammonia in the coma of comet P/Halley

    NASA Technical Reports Server (NTRS)

    Allen, M.; Delitsky, M.; Huntress, W.; Yung, Y.; Ip, W.-H.

    1987-01-01

    Methane and ammonia abundances in the coma of Halley are derived from Giotto ion mass spectrometer data using an Eulerian model of chemical and physical processes inside the contact surface to simulate Giotto high-intensity spectrometer ion mass spectral data for mass-to-charge ratios (m/q) from 15 to 19. The ratio m/q = 19/18 as a function of distance from the nucleus is not reproduced by a model for a pure water coma. It is necessary to include the presence of NH3, and uniquely NH3, in coma gases in order to explain the data. A ratio of production rates Q(NH3)/Q(H2O) = 0.01-0.02 results in model values approximating the Giotto data. Methane is identified as the most probable source of the distinct peak at m/q = 15. The observations are fit best with Q(CH4)/Q(H2O) = 0.02. The chemical composition of the comet nucleus implied by these production rate ratios is unlike that of the outer planets. On the other hand, there are also significant differences from observations of gas-phase interstellar material.

  13. Kinetic theory of dark solitons with tunable friction.

    PubMed

    Hurst, Hilary M; Efimkin, Dmitry K; Spielman, I B; Galitski, Victor

    2017-05-01

    We study controllable friction in a system consisting of a dark soliton in a one-dimensional Bose-Einstein condensate coupled to a noninteracting Fermi gas. The fermions act as impurity atoms, not part of the original condensate, that scatter off of the soliton. We study semiclassical dynamics of the dark soliton, a particlelike object with negative mass, and calculate its friction coefficient. Surprisingly, it depends periodically on the ratio of interspecies (impurity-condensate) to intraspecies (condensate-condensate) interaction strengths. By tuning this ratio, one can access a regime where the friction coefficient vanishes. We develop a general theory of stochastic dynamics for negative-mass objects and find that their dynamics are drastically different from their positive-mass counterparts: they do not undergo Brownian motion. From the exact phase-space probability distribution function (i.e., in position and velocity), we find that both the trajectory and lifetime of the soliton are altered by friction, and the soliton can undergo Brownian motion only in the presence of friction and a confining potential. These results agree qualitatively with experimental observations by Aycock et al. [Proc. Natl. Acad. Sci. USA 114 , 2503 (2017)] in a similar system with bosonic impurity scatterers.

  14. Is the Link Between the Observed Velocities of Neutron Stars and their Progenitors a Simple Mass Relationship?

    NASA Astrophysics Data System (ADS)

    Bray, J. C.

    2017-11-01

    While the imparting of velocity `kicks' to compact remnants from supernovae is widely accepted, the relationship of the `kick' to the progenitor is not. We propose the `kick' is predominantly a result of conservation of momentum between the ejected and compact remnant masses. We propose the `kick' velocity is given by v_kick = α (M_ejecta / M_remnant) + β, where α and β are constants we wish to determine. To test this we use the BPASS v2 (Binary Population and Spectral Synthesis) code to create stellar populations from both single-star and binary-star evolutionary pathways. We then use our Remnant Ejecta and Progenitor Explosion Relationship (REAPER) code to apply `kicks' to neutron stars from supernovae in these models using a grid of α and β values (from 0 to 200 km s^-1 in steps of 10 km s^-1), in three different `kick' orientations (isotropic, spin-axis aligned and orthogonal to spin-axis), and weighted by three different Salpeter initial mass functions (IMFs) with slopes of -2.0, -2.35 and -2.70. We compare our synthetic 2D and 3D velocity probability distributions to the distributions provided by Hobbs et al. (1995).
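
    A minimal sketch of the proposed relation; the masses and the single (α, β) pair shown are illustrative placeholders, not values from the study:

        import numpy as np

        # Illustrative ejecta and remnant masses in Msun (not from the paper).
        m_ejecta = np.array([2.0, 5.0, 8.0, 12.0])
        m_remnant = np.array([1.4, 1.4, 1.6, 2.0])

        def kick_velocity(m_ej, m_rem, alpha, beta):
            # Proposed momentum-conservation relation: v_kick = alpha*(M_ejecta/M_remnant) + beta, in km/s.
            return alpha * (m_ej / m_rem) + beta

        # One point of the grid described in the abstract (alpha, beta scanned from 0 to 200 km/s in 10 km/s steps).
        print(kick_velocity(m_ejecta, m_remnant, alpha=100.0, beta=50.0))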

  15. Kinetic theory of dark solitons with tunable friction

    PubMed Central

    Hurst, Hilary M.; Efimkin, Dmitry K.; Spielman, I. B.; Galitski, Victor

    2018-01-01

    We study controllable friction in a system consisting of a dark soliton in a one-dimensional Bose-Einstein condensate coupled to a noninteracting Fermi gas. The fermions act as impurity atoms, not part of the original condensate, that scatter off of the soliton. We study semiclassical dynamics of the dark soliton, a particlelike object with negative mass, and calculate its friction coefficient. Surprisingly, it depends periodically on the ratio of interspecies (impurity-condensate) to intraspecies (condensate-condensate) interaction strengths. By tuning this ratio, one can access a regime where the friction coefficient vanishes. We develop a general theory of stochastic dynamics for negative-mass objects and find that their dynamics are drastically different from their positive-mass counterparts: they do not undergo Brownian motion. From the exact phase-space probability distribution function (i.e., in position and velocity), we find that both the trajectory and lifetime of the soliton are altered by friction, and the soliton can undergo Brownian motion only in the presence of friction and a confining potential. These results agree qualitatively with experimental observations by Aycock et al. [Proc. Natl. Acad. Sci. USA 114, 2503 (2017)] in a similar system with bosonic impurity scatterers. PMID:29744482

  16. Generalised Extreme Value Distributions Provide a Natural Hypothesis for the Shape of Seed Mass Distributions

    PubMed Central

    2015-01-01

    Among co-occurring species, values for functionally important plant traits span orders of magnitude, are uni-modal, and generally positively skewed. Such data are usually log-transformed “for normality” but no convincing mechanistic explanation for a log-normal expectation exists. Here we propose a hypothesis for the distribution of seed masses based on generalised extreme value distributions (GEVs), a class of probability distributions used in climatology to characterise the impact of event magnitudes and frequencies; events that impose strong directional selection on biological traits. In tests involving datasets from 34 locations across the globe, GEVs described log10 seed mass distributions as well or better than conventional normalising statistics in 79% of cases, and revealed a systematic tendency for an overabundance of small seed sizes associated with low latitudes. GEVs characterise disturbance events experienced in a location to which individual species’ life histories could respond, providing a natural, biological explanation for trait expression that is lacking from all previous hypotheses attempting to describe trait distributions in multispecies assemblages. We suggest that GEVs could provide a mechanistic explanation for plant trait distributions and potentially link biology and climatology under a single paradigm. PMID:25830773
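
    A short sketch of the model comparison described above, using synthetic log10 seed-mass data and scipy's generalised extreme value and normal fits; the data, sample size and AIC criterion are stand-ins, not the study's datasets or exact statistics:

        import numpy as np
        from scipy import stats

        # Synthetic, positively skewed log10 seed masses standing in for one site dataset.
        rng = np.random.default_rng(1)
        log_seed_mass = stats.skewnorm.rvs(a=4, loc=-1.0, scale=1.2, size=500, random_state=rng)

        # Fit a generalised extreme value distribution and a normal distribution to the same data.
        gev_params = stats.genextreme.fit(log_seed_mass)
        norm_params = stats.norm.fit(log_seed_mass)

        # Compare the two models by AIC (GEV has 3 parameters, normal has 2).
        aic_gev = 2 * 3 - 2 * stats.genextreme.logpdf(log_seed_mass, *gev_params).sum()
        aic_norm = 2 * 2 - 2 * stats.norm.logpdf(log_seed_mass, *norm_params).sum()
        print(f"AIC (GEV)    = {aic_gev:.1f}")
        print(f"AIC (normal) = {aic_norm:.1f}")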

  17. Environmental quenching and galactic conformity in the galaxy cross-correlation signal

    NASA Astrophysics Data System (ADS)

    Hatfield, P. W.; Jarvis, M. J.

    2017-12-01

    It has long been known that environment has a large effect on star formation in galaxies. There are several known plausible mechanisms to remove the cool gas needed for star formation, such as strangulation, harassment and ram-pressure stripping. It is unclear which process is dominant, and over what range of stellar mass. In this paper, we find evidence for suppression of the cross-correlation function between massive galaxies and less massive star-forming galaxies, giving a measure of how much less likely a galaxy is to be star forming in the vicinity of a more massive galaxy. We develop a formalism for modelling environmental quenching mechanisms within the halo occupation distribution scheme. We find that at z ∼ 2 environment is not a significant factor in determining quenching of star-forming galaxies, and that galaxies are quenched with similar probabilities when they are satellites in sub-group environments as they are globally. However, by z ∼ 0.5 galaxies are much less likely to be star forming when in a high-density (group or low-mass cluster) environment than when not. This increased probability of being quenched does not appear to have significant radial dependence within the halo at lower redshifts, supportive of the quenching being caused by the halting of fresh inflows of pristine gas, as opposed to by tidal stripping. Furthermore, by separating the massive sample into passive and star forming, we see that this effect is further enhanced when the central galaxy is passive, a manifestation of galactic conformity.

  18. A simple model for the critical mass of a nuclear weapon

    NASA Astrophysics Data System (ADS)

    Reed, B. Cameron

    2018-07-01

    A probability-based model for estimating the critical mass of a fissile isotope is developed. The model requires introducing some concepts from nuclear physics and incorporating some approximations, but gives results correct to about a factor of two for uranium-235 and plutonium-239.

  19. Combined statistical analysis of landslide release and propagation

    NASA Astrophysics Data System (ADS)

    Mergili, Martin; Rohmaneo, Mohammad; Chu, Hone-Jay

    2016-04-01

    Statistical methods - often coupled with stochastic concepts - are commonly employed to relate areas affected by landslides with environmental layers, and to estimate spatial landslide probabilities by applying these relationships. However, such methods only concern the release of landslides, disregarding their motion. Conceptual models for mass flow routing are used for estimating landslide travel distances and possible impact areas. Automated approaches combining release and impact probabilities are rare. The present work attempts to fill this gap by a fully automated procedure combining statistical and stochastic elements, building on the open source GRASS GIS software: (1) The landslide inventory is subset into release and deposition zones. (2) We employ a traditional statistical approach to estimate the spatial release probability of landslides. (3) We back-calculate the probability distribution of the angle of reach of the observed landslides, employing the software tool r.randomwalk. One set of random walks is routed downslope from each pixel defined as release area. Each random walk stops when leaving the observed impact area of the landslide. (4) The cumulative probability function (cdf) derived in (3) is used as input to route a set of random walks downslope from each pixel in the study area through the DEM, assigning the probability gained from the cdf to each pixel along the path (impact probability). The impact probability of a pixel is defined as the average impact probability of all sets of random walks impacting a pixel. Further, the average release probabilities of the release pixels of all sets of random walks impacting a given pixel are stored along with the area of the possible release zone. (5) We compute the zonal release probability by increasing the release probability according to the size of the release zone - the larger the zone, the larger the probability that a landslide will originate from at least one pixel within this zone. We quantify this relationship by a set of empirical curves. (6) Finally, we multiply the zonal release probability with the impact probability in order to estimate the combined impact probability for each pixel. We demonstrate the model with a 167 km² study area in Taiwan, using an inventory of landslides triggered by Typhoon Morakot. Analyzing the model results leads us to a set of key conclusions: (i) The average composite impact probability over the entire study area corresponds well to the density of observed landslide pixels. We therefore conclude that the method is valid in general, even though the concept of the zonal release probability bears some conceptual issues that have to be kept in mind. (ii) The parameters used as predictors cannot fully explain the observed distribution of landslides. The size of the release zone influences the composite impact probability to a larger degree than the pixel-based release probability. (iii) The prediction rate increases considerably when excluding the largest, deep-seated landslides from the analysis. We conclude that such landslides are mainly related to geological features hardly reflected in the predictor layers used.
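
    A much simplified, one-dimensional sketch of how a release probability and an angle-of-reach curve can be combined into an impact probability; the elevations, release probabilities and reach curve are placeholders, and the combination rule is a simplification of the zonal procedure described above:

        import numpy as np

        # Simplified 1-D illustration (the study uses 2-D random walks with r.randomwalk in GRASS GIS).
        dem = np.array([1000., 950., 890., 820., 760., 720., 700., 695., 693.])  # elevation (m)
        cell = 25.0                                                              # pixel size (m)
        p_release = np.array([0.4, 0.3, 0.2, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0])      # statistical release probability

        def reach_cdf(angle_deg):
            # Placeholder cdf of the back-calculated angle of reach: probability that a
            # landslide's angle of reach does not exceed angle_deg, i.e. that it travels
            # at least to a pixel seen under that angle. Not calibrated to any inventory.
            return np.clip((angle_deg - 5.0) / 30.0, 0.0, 1.0)

        impact = np.zeros_like(dem)
        for i, p_rel in enumerate(p_release):
            if p_rel == 0.0:
                continue
            for j in range(i + 1, len(dem)):
                drop = dem[i] - dem[j]
                dist = (j - i) * cell
                angle = np.degrees(np.arctan2(drop, dist))      # angle of reach, release pixel i -> pixel j
                # combined probability: release at i AND runout reaching pixel j
                impact[j] = max(impact[j], p_rel * reach_cdf(angle))

        print(np.round(impact, 3))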

  20. Conceptual models governing leaching behavior and their long-term predictive capability

    USGS Publications Warehouse

    Claassen, Hans C.

    1981-01-01

    Six models that may be used to describe the interaction of radioactive waste solids with aqueous solutions are as follows: (1) simple linear mass transfer; (2) simple parabolic mass transfer; (3) parabolic mass transfer with the formation of a diffusion-limiting surface layer at an arbitrary time; (4) initial parabolic mass transfer followed by linear mass transfer at an arbitrary time; (5) parabolic (or linear) mass transfer and concomitant surface sorption; and (6) parabolic (or linear) mass transfer and concomitant chemical precipitation. Some of these models lead to either illogical or unrealistic predictions when published data are extrapolated to long times. These predictions arise because most data come from short-term experimentation; over longer times, processes will probably occur that have not been observed in the shorter experiments. This hypothesis has been verified by mass-transfer data from laboratory experiments using natural volcanic glass to predict the composition of groundwater. That such rate-limiting mechanisms do occur is reassuring, although it is not yet possible to deduce a single mass-transfer-limiting mechanism that could control the solution concentration of all components of all waste forms being investigated. Probably the most reasonable mechanisms are surface sorption and chemical precipitation of the species of interest. Another is the limiting of mass transfer by chemical precipitation, on the waste form surface, of a substance not containing the species of interest, that is, the presence of a diffusion-limiting layer. The presence of sorption and chemical precipitation as factors limiting mass transfer has been verified in natural groundwater systems, whereas the diffusion-limiting mechanism has not yet been verified.
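
    A minimal sketch contrasting the first two conceptual models, cumulative release Q = k·t (linear) versus Q = k·√t (parabolic); the rate constants are illustrative only:

        import numpy as np

        # Cumulative mass release per unit area under linear and parabolic mass transfer.
        # Rate constants are illustrative, not taken from the publication.
        k1, k2 = 0.05, 0.5                              # e.g. g m^-2 d^-1 and g m^-2 d^-0.5
        t = np.array([1., 10., 100., 1000., 10000.])    # days

        q_linear = k1 * t
        q_parabolic = k2 * np.sqrt(t)
        for ti, ql, qp in zip(t, q_linear, q_parabolic):
            print(f"t = {ti:8.0f} d   linear: {ql:8.1f}   parabolic: {qp:8.1f}")
        # Extrapolating a short-term (linear or parabolic) fit to long times ignores the
        # later-onset processes (surface layers, sorption, precipitation) described above.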

  1. The Integrated Medical Model: A Probabilistic Simulation Model for Predicting In-Flight Medical Risks

    NASA Technical Reports Server (NTRS)

    Keenan, Alexandra; Young, Millennia; Saile, Lynn; Boley, Lynn; Walton, Marlei; Kerstman, Eric; Shah, Ronak; Goodenow, Debra A.; Myers, Jerry G.

    2015-01-01

    The Integrated Medical Model (IMM) is a probabilistic model that uses simulation to predict mission medical risk. Given a specific mission and crew scenario, medical events are simulated using Monte Carlo methodology to provide estimates of resource utilization, probability of evacuation, probability of loss of crew, and the amount of mission time lost due to illness. Mission and crew scenarios are defined by mission length, extravehicular activity (EVA) schedule, and crew characteristics including: sex, coronary artery calcium score, contacts, dental crowns, history of abdominal surgery, and EVA eligibility. The Integrated Medical Evidence Database (iMED) houses the model inputs for one hundred medical conditions using in-flight, analog, and terrestrial medical data. Inputs include incidence, event durations, resource utilization, and crew functional impairment. Severity of conditions is addressed by defining statistical distributions on the dichotomized best and worst-case scenarios for each condition. The outcome distributions for conditions are bounded by the treatment extremes of the fully treated scenario in which all required resources are available and the untreated scenario in which no required resources are available. Upon occurrence of a simulated medical event, treatment availability is assessed, and outcomes are generated depending on the status of the affected crewmember at the time of onset, including any pre-existing functional impairments or ongoing treatment of concurrent conditions. The main IMM outcomes, including probability of evacuation and loss of crew life, time lost due to medical events, and resource utilization, are useful in informing mission planning decisions. To date, the IMM has been used to assess mission-specific risks with and without certain crewmember characteristics, to determine the impact of eliminating certain resources from the mission medical kit, and to design medical kits that maximally benefit crew health while meeting mass and volume constraints.

  2. The Integrated Medical Model: A Probabilistic Simulation Model Predicting In-Flight Medical Risks

    NASA Technical Reports Server (NTRS)

    Keenan, Alexandra; Young, Millennia; Saile, Lynn; Boley, Lynn; Walton, Marlei; Kerstman, Eric; Shah, Ronak; Goodenow, Debra A.; Myers, Jerry G., Jr.

    2015-01-01

    The Integrated Medical Model (IMM) is a probabilistic model that uses simulation to predict mission medical risk. Given a specific mission and crew scenario, medical events are simulated using Monte Carlo methodology to provide estimates of resource utilization, probability of evacuation, probability of loss of crew, and the amount of mission time lost due to illness. Mission and crew scenarios are defined by mission length, extravehicular activity (EVA) schedule, and crew characteristics including: sex, coronary artery calcium score, contacts, dental crowns, history of abdominal surgery, and EVA eligibility. The Integrated Medical Evidence Database (iMED) houses the model inputs for one hundred medical conditions using in-flight, analog, and terrestrial medical data. Inputs include incidence, event durations, resource utilization, and crew functional impairment. Severity of conditions is addressed by defining statistical distributions on the dichotomized best and worst-case scenarios for each condition. The outcome distributions for conditions are bounded by the treatment extremes of the fully treated scenario in which all required resources are available and the untreated scenario in which no required resources are available. Upon occurrence of a simulated medical event, treatment availability is assessed, and outcomes are generated depending on the status of the affected crewmember at the time of onset, including any pre-existing functional impairments or ongoing treatment of concurrent conditions. The main IMM outcomes, including probability of evacuation and loss of crew life, time lost due to medical events, and resource utilization, are useful in informing mission planning decisions. To date, the IMM has been used to assess mission-specific risks with and without certain crewmember characteristics, to determine the impact of eliminating certain resources from the mission medical kit, and to design medical kits that maximally benefit crew health while meeting mass and volume constraints.

  3. Storage and retrieval of mass spectral information

    NASA Technical Reports Server (NTRS)

    Hohn, M. E.; Humberston, M. J.; Eglinton, G.

    1977-01-01

    Computer handling of mass spectra serves two main purposes: the interpretation of the occasional, problematic mass spectrum, and the identification of the large number of spectra generated in the gas-chromatographic-mass spectrometric (GC-MS) analysis of complex natural and synthetic mixtures. Methods available fall into the three categories of library search, artificial intelligence, and learning machine. Optional procedures for coding, abbreviating and filtering a library of spectra minimize time and storage requirements. Newer techniques make increasing use of probability and information theory in accessing files of mass spectral information.
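
    A minimal sketch of the library-search category: ranking library spectra by cosine similarity to an unknown spectrum. The toy spectra and the plain dot-product similarity are stand-ins for the weighted match factors used in practice:

        import numpy as np

        # Spectra are toy intensity vectors binned on a common m/z grid.
        def cosine_similarity(a, b):
            return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

        unknown = np.array([0, 10, 45, 0, 100, 5, 0, 20], dtype=float)
        library = {
            "compound A": np.array([0, 12, 40, 0, 100, 0, 0, 25], dtype=float),
            "compound B": np.array([80, 0, 5, 60, 10, 0, 30, 0], dtype=float),
        }
        ranked = sorted(library, key=lambda name: cosine_similarity(unknown, library[name]), reverse=True)
        for name in ranked:
            print(name, round(cosine_similarity(unknown, library[name]), 3))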

  4. A survey for low-mass stellar and substellar members of the Hyades open cluster

    NASA Astrophysics Data System (ADS)

    Melnikov, Stanislav; Eislöffel, Jochen

    2018-03-01

    Context: Unlike young open clusters (with ages < 250 Myr), the Hyades cluster (age 600 Myr) has a clear deficit of very low-mass stars (VLM) and brown dwarfs (BD). Since this open cluster has a low stellar density and covers several tens of square degrees on the sky, extended surveys are required to improve the statistics of the VLM/BD objects in the cluster. Aims: We search for new VLM stars and BD candidates in the Hyades cluster to improve the present-day cluster mass function down to substellar masses. Methods: An imaging survey of the Hyades with a completeness limit of 21.5 mag in the R band and 20.5 mag in the I band was carried out with the 2k × 2k CCD Schmidt camera at the 2 m Alfred Jensch Telescope in Tautenburg. We performed a photometric selection of the cluster member candidates by combining results of our survey with 2MASS JHKs photometry. Results: We present a photometric and proper motion survey covering 23.4 deg² in the Hyades cluster core region. Using optical/IR colour-magnitude diagrams, we identify 66 photometric cluster member candidates in the magnitude range 14.7 < I < 20.5. The proper motion measurements are based on several all-sky surveys with an epoch difference of 60-70 yr for the bright objects. The proper motions allowed us to discriminate the cluster members from field objects and resulted in 14 proper motion members of the Hyades. We rediscover Hy 6 as a proper motion member and classify it as a substellar object candidate (BD) based on the comparison of the observed colour-magnitude diagram with theoretical model isochrones. Conclusions: With our results, the mass function of the Hyades continues to be shallow below 0.15 M⊙, indicating that the Hyades have probably lost their lowest-mass members by means of dynamical evolution. We conclude that the Hyades core represents the "VLM/BD desert" and that most of the substellar objects may have already left the volume of the cluster.

  5. Prospect evaluation as a function of numeracy and probability denominator.

    PubMed

    Millroth, Philip; Juslin, Peter

    2015-05-01

    This study examines how numeracy and probability denominator (a direct-ratio probability, a relative frequency with denominator 100, a relative frequency with denominator 10,000) affect the evaluation of prospects in an expected-value based pricing task. We expected that numeracy would affect the results due to differences in the linearity of number perception and the susceptibility to denominator neglect with different probability formats. An analysis with functional measurement verified that participants integrated value and probability into an expected value. However, a significant interaction between numeracy and probability format and subsequent analyses of the parameters of cumulative prospect theory showed that the manipulation of probability denominator changed participants' psychophysical response to probability and value. Standard methods in decision research may thus confound people's genuine risk attitude with their numerical capacities and the probability format used. Copyright © 2015 Elsevier B.V. All rights reserved.
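
    For orientation, a sketch of a single-outcome cumulative-prospect-theory valuation with a power value function and an inverse-S probability weighting function; the parameter values are the commonly cited Tversky-Kahneman estimates, not the parameters fitted in this study:

        def cpt_value(x, p, alpha=0.88, gamma=0.61):
            # Value of a single gain (x, p): v(x) = x**alpha weighted by
            # w(p) = p**gamma / (p**gamma + (1-p)**gamma)**(1/gamma).
            w = p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)
            return w * x**alpha

        # The same prospect expressed with different probability denominators: 0.3, 30/100
        # and 3000/10000 are numerically identical, so any difference in pricing must come
        # from how people read the format, not from the model itself.
        for p in (0.3, 30 / 100, 3000 / 10000):
            print(p, round(cpt_value(100.0, p), 2))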

  6. Growth of left ventricular mass with military basic training in army recruits.

    PubMed

    Batterham, Alan M; George, Keith P; Birch, Karen M; Pennell, Dudley J; Myerson, Saul G

    2011-07-01

    Exercise-induced left ventricular hypertrophy is well documented, but whether this occurs merely in line with concomitant increases in lean body mass is unclear. Our aim was to model the extent of left ventricular hypertrophy associated with increased lean body mass attributable to an exercise training program. Cardiac and whole-body magnetic resonance imaging was performed before and after a 10-wk intensive British Army basic training program in a sample of 116 healthy Caucasian males (aged 17-28 yr). The within-subjects repeated-measures allometric relationship between lean body mass and left ventricular mass was modeled to allow the proper normalization of changes in left ventricular mass for attendant changes in lean body mass. To linearize the general allometric model (Y = aX^b), data were log-transformed before analysis; the resulting effects were therefore expressed as percent changes. We quantified the probability that the true population increase in normalized left ventricular mass was greater than a predefined minimum important difference of 0.2 SD, assigning a probabilistic descriptive anchor for magnitude-based inference. The absolute increase in left ventricular mass was 4.8% (90% confidence interval = 3.5%-6%), whereas lean body mass increased by 2.6% (2.1%-3.0%). The change in left ventricular mass adjusted for the change in lean body mass was 3.5% (1.9%-5.1%), equivalent to an increase of 0.25 SD (0.14-0.37). The probability that this effect size was greater than or equal to our predefined minimum important change of 0.2 SD was 0.78, i.e., likely to be important. After correction for allometric growth rates, left ventricular hypertrophy and lean body mass changes do not occur at the same magnitude in response to chronic exercise.
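
    A sketch of the allometric adjustment implied by the log-transformed model Y = aX^b; the exponent b is illustrative (a value near 0.5 happens to reproduce the reported 3.5% adjusted change from the 4.8% and 2.6% raw changes, but it is not quoted from the paper):

        import numpy as np

        # With Y = a*X**b, log Y = log a + b*log X, so the lean-mass-adjusted change in
        # left ventricular mass is the change in Y/X**b.
        b = 0.5                      # illustrative allometric exponent, not the fitted value
        lv_change = 0.048            # 4.8 % observed increase in left ventricular mass
        lbm_change = 0.026           # 2.6 % increase in lean body mass

        adjusted = np.exp(np.log(1 + lv_change) - b * np.log(1 + lbm_change)) - 1
        print(f"lean-mass-adjusted LV mass change ~ {adjusted * 100:.1f} %")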

  7. Precise Determination of the Intensity of 226Ra Alpha Decay to the 186 keV Excited State

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    S.P. LaMont; R.J. Gehrke; S.E. Glover

    There is a significant discrepancy in the reported values for the emission probability of the 186 keV gamma-ray resulting from the alpha decay of 226Ra to the 186 keV excited state of 222Rn. Published values fall in the range of 3.28 to 3.59 gamma-rays per 100 alpha-decays. An interesting observation is that the lower value, 3.28, is based on measuring the 186 keV gamma-ray intensity relative to the 226Ra alpha-branch to the 186 keV level. The higher values, which are close to 3.59, are based on measuring the gamma-ray intensity from mass standards of 226Ra that are traceable to the mass standards prepared by Hönigschmid in the early 1930s. This discrepancy was resolved in this work by carefully measuring the 226Ra alpha-branch intensities, then applying the theoretical E2 multipolarity internal conversion coefficient of 0.692±0.007 to calculate the 186 keV gamma-ray emission probability. The measured value for the alpha branch to the 186 keV excited state was (6.16±0.03)%, which gives a 186 keV gamma-ray emission probability of (3.64±0.04)%. This value is in excellent agreement with the most recently reported 186 keV gamma-ray emission probabilities determined using 226Ra mass standards.
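
    The reported numbers can be checked directly, since the gamma-ray emission probability follows from the alpha-branch intensity divided by one plus the total internal conversion coefficient:

        # Check of the reported values: P_gamma = P_alpha(186 keV level) / (1 + alpha_ICC).
        p_alpha_branch = 0.0616      # measured alpha branch to the 186 keV level
        alpha_icc = 0.692            # theoretical E2 internal conversion coefficient
        p_gamma = p_alpha_branch / (1 + alpha_icc)
        print(f"186 keV gamma emission probability = {p_gamma * 100:.2f} %")   # ~3.64 %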

  8. ON THE EVOLUTIONARY AND PULSATION MASS OF CLASSICAL CEPHEIDS. III. THE CASE OF THE ECLIPSING BINARY CEPHEID CEP0227 IN THE LARGE MAGELLANIC CLOUD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prada Moroni, P. G.; Gennaro, M.; Bono, G.

    2012-04-20

    We present a new Bayesian approach to constrain the intrinsic parameters (stellar mass and age) of the eclipsing binary system CEP0227 in the Large Magellanic Cloud (LMC). We computed several sets of evolutionary models covering a broad range in chemical compositions and in stellar mass. Independent sets of models were also constructed either by neglecting or by including a moderate convective core overshooting (β_ov = 0.2) during central hydrogen-burning phases. Sets of models were also constructed either by neglecting or by assuming a canonical (η = 0.4, 0.8) or an enhanced (η = 4) mass-loss rate. The most probable solutions were computed in three different planes: luminosity-temperature, mass-radius, and gravity-temperature. By using the Bayes factor, we found that the most probable solutions were obtained in the gravity-temperature plane with a Gaussian mass prior distribution. The evolutionary models constructed by assuming a moderate convective core overshooting (β_ov = 0.2) and a canonical mass-loss rate (η = 0.4) give stellar masses for the primary (Cepheid), M = 4.14 (+0.04/-0.05) M_Sun, and for the secondary, M = 4.15 (+0.04/-0.05) M_Sun, that agree at the 1% level with dynamical measurements. Moreover, we found ages for the two components and for the combined system, t = 151 (+4/-3) Myr, that agree at the 5% level. The solutions based on evolutionary models that neglect the mass loss attain similar parameters, while those based on models that either account for an enhanced mass loss or neglect convective core overshooting have lower Bayes factors and larger confidence intervals. The dependence on the mass-loss rate might be the consequence of the crude approximation we use to mimic this phenomenon. By using the isochrone of the most probable solution and a Gaussian prior on the LMC distance, we found a true distance modulus of 18.53 (+0.02/-0.02) mag and a reddening value of E(B - V) = 0.142 (+0.005/-0.010) mag that agree quite well with similar estimates in the literature.

  9. Low skeletal muscle mass index is associated with function and nutritional status in residents in a Turkish nursing home.

    PubMed

    Tufan, Asli; Bahat, Gulistan; Ozkaya, Hilal; Taşcıoğlu, Didem; Tufan, Fatih; Saka, Bülent; Akin, Sibel; Karan, Mehmet Akif

    2016-09-01

    To determine the prevalence of low muscle mass (LMM) and the relationship of LMM with functional and nutritional status, as defined using the LMM evaluation method of the European Working Group on Sarcopenia in Older People (EWGSOP) criteria, among male residents in a nursing home. Male residents aged >60 years of a nursing home located in Turkey were included in our study. Their body mass index (BMI, kg/m²), skeletal muscle mass (SMM, kg) and skeletal muscle mass index (SMMI, kg/m²) were calculated. The participants were regarded as having low SMMI if they had SMMI <9.2 kg/m² according to our population-specific cut-off point. Functional status was evaluated with Katz activities of daily living (ADL) and Lawton Instrumental Activities of Daily Living (IADL). Nutritional assessment was performed using the Mini Nutritional Assessment (MNA). The number of drugs taken and chronic diseases were recorded. One hundred fifty-seven male residents were enrolled into the study. Their mean age was 73.1 ± 6.7 years, with a mean ADL score of 8.9 ± 2.0 and IADL score of 8.7 ± 4.6. One hundred twelve (71%) residents were aged >70 years. Thirty-five men (23%) had low SMMI in the group aged >60 years, and twenty-eight subjects (25%) in the group aged >70 years. MNA scores were significantly lower in residents with low SMMI compared with those having normal SMMI (17.1 ± 3.4 versus 19.6 ± 2.5, p = 0.005). BMI was significantly lower in the residents with low SMMI compared with normal SMMI (19.6 ± 2.7 versus 27.1 ± 4.1, p < 0.001). ADL scores were significantly different between residents with low SMMI and normal SMMI in those aged >70 years (8.1 ± 2.6 versus 9.1 ± 1.6, p = 0.014). In regression analyses, the only factor associated with better functional status was lower age (p = 0.04), while the only factor associated with better nutrition was higher SMMI (p = 0.01). Low SMMI detected by the LMM evaluation method of the EWGSOP criteria is prevalent among male nursing home residents, and it is associated with nutritional status, and probably with functional status, in the nursing home setting when the EWGSOP criteria are applied with a Turkish normative reference cut-off value.

  10. Game-Theoretic strategies for systems of components using product-form utilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S; Ma, Cheng-Yu; Hausken, K.

    Many critical infrastructures are composed of multiple systems of components which are correlated so that disruptions to one may propagate to others. We consider such infrastructures with correlations characterized in two ways: (i) an aggregate failure correlation function specifies the conditional failure probability of the infrastructure given the failure of an individual system, and (ii) a pairwise correlation function between two systems specifies the failure probability of one system given the failure of the other. We formulate a game for ensuring the resilience of the infrastructure, wherein the utility functions of the provider and attacker are products of an infrastructure survival probability term and a cost term, both expressed in terms of the numbers of system components attacked and reinforced. The survival probabilities of individual systems satisfy first-order differential conditions that lead to simple Nash Equilibrium conditions. We then derive sensitivity functions that highlight the dependence of infrastructure resilience on the cost terms, correlation functions, and individual system survival probabilities. We apply these results to simplified models of distributed cloud computing and energy grid infrastructures.
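
    A heavily simplified sketch of a best-response search for a game with product-form utilities; the survival-probability and cost expressions below are placeholders, not the forms derived in the report:

        import numpy as np

        N = 20                                   # components per system (illustrative)

        def survival(x, y):                      # x components attacked, y reinforced
            # Placeholder survival probability: falls when attacks exceed reinforcements.
            return np.exp(-0.3 * max(x - y, 0))

        def provider_utility(x, y):
            return survival(x, y) * (1.0 - 0.02 * y)           # survival times a reinforcement-cost factor

        def attacker_utility(x, y):
            return (1.0 - survival(x, y)) * (1.0 - 0.03 * x)   # disruption times an attack-cost factor

        x, y = 0, 0
        for _ in range(50):                      # naive best-response dynamics
            y = max(range(N + 1), key=lambda yy: provider_utility(x, yy))
            x = max(range(N + 1), key=lambda xx: attacker_utility(xx, y))
        print("attacked:", x, "reinforced:", y)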

  11. Ionization compression impact on dense gas distribution and star formation. Probability density functions around H II regions as seen by Herschel

    NASA Astrophysics Data System (ADS)

    Tremblin, P.; Schneider, N.; Minier, V.; Didelon, P.; Hill, T.; Anderson, L. D.; Motte, F.; Zavagno, A.; André, Ph.; Arzoumanian, D.; Audit, E.; Benedettini, M.; Bontemps, S.; Csengeri, T.; Di Francesco, J.; Giannini, T.; Hennemann, M.; Nguyen Luong, Q.; Marston, A. P.; Peretto, N.; Rivera-Ingraham, A.; Russeil, D.; Rygl, K. L. J.; Spinoglio, L.; White, G. J.

    2014-04-01

    Aims: Ionization feedback should impact the probability distribution function (PDF) of the column density of cold dust around the ionized gas. We aim to quantify this effect and discuss its potential link to the core and initial mass function (CMF/IMF). Methods: We used Herschel column density maps of several regions observed within the HOBYS key program in a systematic way: M 16, the Rosette and Vela C molecular clouds, and the RCW 120 H ii region. We computed the PDFs in concentric disks around the main ionizing sources, determined their properties, and discuss the effect of ionization pressure on the distribution of the column density. Results: We fitted the column density PDFs of all clouds with two lognormal distributions, since they present a "double-peak" or an enlarged shape in the PDF. Our interpretation is that the lowest part of the column density distribution describes the turbulent molecular gas, while the second peak corresponds to a compression zone induced by the expansion of the ionized gas into the turbulent molecular cloud. Such a double peak is not visible for all clouds associated with ionization fronts, but it depends on the relative importance of ionization pressure and turbulent ram pressure. A power-law tail is present for higher column densities, which are generally ascribed to the effect of gravity. The condensations at the edge of the ionized gas have a steep compressed radial profile, sometimes recognizable in the flattening of the power-law tail. This could lead to an unambiguous criterion that is able to disentangle triggered star formation from pre-existing star formation. Conclusions: In the context of the gravo-turbulent scenario for the origin of the CMF/IMF, the double-peaked or enlarged shape of the PDF may affect the formation of objects at both the low-mass and the high-mass ends of the CMF/IMF. In particular, a broader PDF is required by the gravo-turbulent scenario to fit the IMF properly with a reasonable initial Mach number for the molecular cloud. Since other physical processes (e.g., the equation of state and the variations among the core properties) have already been said to broaden the PDF, the relative importance of the different effects remains an open question. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.
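
    A sketch of the fitting step, a two-component Gaussian fit in log10 column density (i.e., two lognormals in N) applied to a synthetic histogram; the data, binning and initial guesses are placeholders rather than the HOBYS maps:

        import numpy as np
        from scipy.optimize import curve_fit

        # Synthetic "double-peaked" log10 column densities: a turbulent component plus a
        # narrower, denser component standing in for the ionization-compressed gas.
        rng = np.random.default_rng(0)
        log_n = np.concatenate([rng.normal(21.3, 0.25, 40000),
                                rng.normal(21.9, 0.15, 10000)])
        hist, edges = np.histogram(log_n, bins=80, density=True)
        x = 0.5 * (edges[:-1] + edges[1:])

        def two_lognormal(x, a1, mu1, s1, a2, mu2, s2):
            # Sum of two Gaussians in log10(N), i.e. two lognormal components in N.
            g = lambda a, mu, s: a * np.exp(-0.5 * ((x - mu) / s) ** 2)
            return g(a1, mu1, s1) + g(a2, mu2, s2)

        p0 = [1.0, 21.2, 0.3, 0.3, 22.0, 0.2]    # rough initial guesses
        params, _ = curve_fit(two_lognormal, x, hist, p0=p0)
        print(np.round(params, 3))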

  12. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1977-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are obtained. The approach is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. A general representation for optimum estimates and recursive equations for minimum mean squared error (MMSE) estimates are obtained. MMSE estimates are nonlinear functions of the observations. The problem of estimating the rate of a DTJP is considered for the case where the rate is a random variable with a probability density function of the form cx^k(1-x)^m, and it is shown that the MMSE estimates are linear in this case. This class of density functions explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.

  13. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1978-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are derived. The approach used is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. Thus a general representation is obtained for optimum estimates, and recursive equations are derived for minimum mean-squared error (MMSE) estimates. In general, MMSE estimates are nonlinear functions of the observations. The problem is considered of estimating the rate of a DTJP when the rate is a random variable with a beta probability density function and the jump amplitudes are binomially distributed. It is shown that the MMSE estimates are linear. The class of beta density functions is rather rich and explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.

  14. Spatial correlations and probability density function of the phase difference in a developed speckle-field: numerical and natural experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mysina, N Yu; Maksimova, L A; Ryabukho, V P

    Investigated are statistical properties of the phase difference of oscillations in speckle-fields at two points in the far-field diffraction region, with different shapes of the scatterer aperture. Statistical and spatial nonuniformity of the probability density function of the field phase difference is established. Numerical experiments show that, for the speckle-fields with an oscillating alternating-sign transverse correlation function, a significant nonuniformity of the probability density function of the phase difference in the correlation region of the field complex amplitude, with the most probable values 0 and π, is observed. A natural statistical interference experiment using Young diagrams has confirmed the results of numerical experiments. (laser applications and other topics in quantum electronics)

  15. Performance of concatenated Reed-Solomon/Viterbi channel coding

    NASA Technical Reports Server (NTRS)

    Divsalar, D.; Yuen, J. H.

    1982-01-01

    The concatenated Reed-Solomon (RS)/Viterbi coding system is reviewed. The performance of the system is analyzed and results are derived with a new simple approach. A functional model for the input RS symbol error probability is presented. Based on this new functional model, we compute the performance of a concatenated system in terms of RS word error probability, output RS symbol error probability, bit error probability due to decoding failure, and bit error probability due to decoding error. Finally we analyze the effects of the noisy carrier reference and the slow fading on the system performance.
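
    A sketch of the standard bounded-distance expressions for an (n, k) Reed-Solomon code under the assumption of independent input symbol errors (ideal interleaving); the RS(255, 223) parameters are the usual concatenated-coding choice and the expressions are textbook formulas, not the functional model developed in the paper:

        from math import comb

        def rs_word_error(p_sym, n=255, k=223):
            # Word error probability of an (n, k) RS code correcting t = (n-k)//2 symbol errors,
            # assuming independent input symbol errors with probability p_sym.
            t = (n - k) // 2
            return sum(comb(n, j) * p_sym**j * (1 - p_sym)**(n - j) for j in range(t + 1, n + 1))

        def rs_output_symbol_error(p_sym, n=255, k=223):
            # Approximate output RS symbol error probability after decoding.
            t = (n - k) // 2
            return sum(j / n * comb(n, j) * p_sym**j * (1 - p_sym)**(n - j) for j in range(t + 1, n + 1))

        for p in (0.01, 0.02, 0.05):
            print(p, rs_word_error(p), rs_output_symbol_error(p))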

  16. Odor detection of mixtures of homologous carboxylic acids and coffee aroma compounds by humans.

    PubMed

    Miyazawa, Toshio; Gallagher, Michele; Preti, George; Wise, Paul M

    2009-11-11

    Mixture summation among homologous carboxylic acids, that is, the relationship between detection probabilities for mixtures and detection probabilities for their unmixed components, varies with similarity in carbon-chain length. The current study examined detection of acetic, butyric, hexanoic, and octanoic acids mixed with three other model odorants that differ greatly from the acids in both structure and odor character, namely, 2-hydroxy-3-methylcyclopent-2-en-1-one, furan-2-ylmethanethiol, and (3-methyl-3-sulfanylbutyl) acetate. Psychometric functions were measured for both single compounds and binary mixtures (2 of 5, forced-choice method). An air dilution olfactometer delivered stimuli, with vapor-phase calibration using gas chromatography-mass spectrometry. Across the three odorants that differed from the acids, acetic and butyric acid showed approximately additive (or perhaps even supra-additive) summation at low perithreshold concentrations, but subadditive interactions at high perithreshold concentrations. In contrast, the medium-chain acids showed subadditive interactions across a wide range of concentrations. Thus, carbon-chain length appears to influence not only summation with other carboxylic acids but also summation with at least some unrelated compounds.

  17. A kinetic theory for age-structured stochastic birth-death processes

    NASA Astrophysics Data System (ADS)

    Chou, Tom; Greenman, Chris

    Classical age-structured mass-action models such as the McKendrick-von Foerster equation have been extensively studied but they are structurally unable to describe stochastic fluctuations or population-size-dependent birth and death rates. Conversely, current theories that include size-dependent population dynamics (e.g., carrying capacity) cannot be easily extended to take into account age-dependent birth and death rates. In this paper, we present a systematic derivation of a new fully stochastic kinetic theory for interacting age-structured populations. By defining multiparticle probability density functions, we derive a hierarchy of kinetic equations for the stochastic evolution of an aging population undergoing birth and death. We show that the fully stochastic age-dependent birth-death process precludes factorization of the corresponding probability densities, which then must be solved by using a BBGKY-like hierarchy. Our results generalize both deterministic models and existing master equation approaches by providing an intuitive and efficient way to simultaneously model age- and population-dependent stochastic dynamics applicable to the study of demography, stem cell dynamics, and disease evolution. NSF.

  18. Percolation analysis for cosmic web with discrete points

    NASA Astrophysics Data System (ADS)

    Zhang, Jiajun; Cheng, Dalong; Chu, Ming-Chung

    2016-03-01

    Percolation analysis has long been used to quantify the connectivity of the cosmic web. Unlike most of the previous works using density field on grids, we have studied percolation analysis based on discrete points. Using a Friends-of-Friends (FoF) algorithm, we generate the S-bb relation, between the fractional mass of the largest connected group (S) and the FoF linking length (bb). We propose a new model, the Probability Cloud Cluster Expansion Theory (PCCET) to relate the S-bb relation with correlation functions. We show that the S-bb relation reflects a combination of all orders of correlation functions. We have studied the S-bb relation with simulation and find that the S-bb relation is robust against redshift distortion and incompleteness in observation. From the Bolshoi simulation, with Halo Abundance Matching (HAM), we have generated a mock galaxy catalogue. Good matching of the projected two-point correlation function with observation is confirmed. However, comparing the mock catalogue with the latest galaxy catalogue from SDSS DR12, we have found significant differences in their S-bb relations. This indicates that the mock catalogue cannot accurately recover higher order correlation functions than the two-point correlation function, which reveals the limit of HAM method.
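
    A minimal sketch of the percolation statistic: for each linking length b, Friends-of-Friends groups are built and S, the fraction of points in the largest group, is recorded. The uniform random points and absolute linking lengths are placeholders; the paper works with simulation haloes and galaxies and quotes b relative to the mean separation:

        import numpy as np
        from scipy.spatial import cKDTree
        from scipy.sparse import coo_matrix
        from scipy.sparse.csgraph import connected_components

        rng = np.random.default_rng(0)
        points = rng.uniform(0, 100.0, size=(2000, 3))    # toy point set in a 100^3 box
        tree = cKDTree(points)

        def largest_group_fraction(b):
            # Link all pairs closer than b (FoF) and return the largest group's mass fraction S.
            pairs = np.array(list(tree.query_pairs(r=b)))
            n = len(points)
            if len(pairs) == 0:
                return 1.0 / n
            adj = coo_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])), shape=(n, n))
            n_comp, labels = connected_components(adj, directed=False)
            return np.bincount(labels).max() / n

        for b in (1.0, 2.0, 3.0, 4.0, 5.0):
            print(f"b = {b:.1f}  S = {largest_group_fraction(b):.3f}")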

  19. A RANS simulation toward the effect of turbulence and cavitation on spray propagation and combustion characteristics

    NASA Astrophysics Data System (ADS)

    Taghavifar, Hadi; Khalilarya, Shahram; Jafarmadar, Samad; Taghavifar, Hamid

    2016-08-01

    A multidimensional computational fluid dynamics code was developed and integrated with a probability density function combustion model to give a detailed account of multiphase fluid flow. The vapor phase within the injector domain is treated with a Reynolds-averaged Navier-Stokes technique. A new parameter is proposed which is an index of plane-cut spray propagation and takes into account two parameters, spray penetration length and cone angle, at the same time. It was found that the spray propagation index (SPI) tends to increase at lower r/d ratios, although the spray penetration tends to decrease. The results for SPI obtained from the empirical correlation of Hay and Jones were compared with the simulation computation as a function of the respective r/d ratio. Based on the results of this study, the spray distribution on the plane area has a proportional correlation with heat release amount, NOx emission mass fraction, and soot concentration reduction. Higher cavitation is attributed to the sharp edge of the nozzle entrance, yielding better liquid jet disintegration and smaller spray droplets, which reduce the soot mass fraction of the late combustion process. In order to gain better insight into the cavitation phenomenon, turbulence magnitude in the nozzle and combustion chamber was acquired and depicted along with spray velocity.

  20. Massive shelf dense water flow influences plankton community structure and particle transport over long distance.

    PubMed

    Bernardi Aubry, Fabrizio; Falcieri, Francesco Marcello; Chiggiato, Jacopo; Boldrin, Alfredo; Luna, Gian Marco; Finotto, Stefania; Camatti, Elisa; Acri, Francesco; Sclavo, Mauro; Carniel, Sandro; Bongiorni, Lucia

    2018-03-14

    Dense water (DW) formation in shelf areas and cascading of DW off the shelf break play a major role in ventilating deep waters, thus potentially affecting ecosystem functioning and biogeochemical cycles. However, whether DW flow across shelves may affect the composition and structure of plankton communities down to the seafloor, and the transport of particles over long distances, has not been fully investigated. Following the 2012 north Adriatic Sea cold outbreak, DW masses were intercepted at ca. 460 km south of the area of origin and compared to resident ones in terms of plankton biomass partitioning (pico to micro size) and phytoplankton species composition. Results indicated a relatively higher contribution of heterotrophs in DW than in deep resident water masses, probably as a result of DW-mediated advection of fresh organic matter available to consumers. DWs showed unusually high abundances of Skeletonema sp., a diatom that bloomed in the north Adriatic during DW formation. The Lagrangian numerical model set up for this diatom confirmed that DW flow could be an important mechanism for plankton/particle export to deep waters. We conclude that the predicted climate-induced variability in DW formation events could have the potential to affect the ecosystem functioning of the deeper part of the Mediterranean basin, even at significant distances from generation sites.

  1. Domestic wells have high probability of pumping septic tank leachate

    NASA Astrophysics Data System (ADS)

    Horn, J. E.; Harter, T.

    2011-06-01

    Onsite wastewater treatment systems such as septic systems are common in rural and semi-rural areas around the world; in the US, about 25-30% of households are served by a septic system and a private drinking water well. Site-specific conditions and local groundwater flow are often ignored when installing septic systems and wells. Particularly in areas with small lots, and thus a high septic system density, these typically shallow wells are prone to contamination by septic system leachate. Typically, mass balance approaches are used to determine a maximum septic system density that would prevent contamination of the aquifer. In this study, we estimate the probability of a well pumping partially septic system leachate. A detailed groundwater and transport model is used to calculate the capture zone of a typical drinking water well. A spatial probability analysis is performed to assess the probability that a capture zone overlaps with a septic system drainfield, depending on aquifer properties, lot and drainfield size. We show that a high septic system density results in a high probability that a well pumps septic system leachate. The hydraulic conductivity of the aquifer has a strong influence on the intersection probability. We conclude that mass balance calculations applied on a regional scale underestimate the contamination risk of individual drinking water wells by septic systems. This is particularly relevant for contaminants released at high concentrations, for substances which experience limited attenuation, and for those that are harmful even at low concentrations.
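
    A Monte Carlo sketch of the overlap idea: drop a drainfield at random positions on a lot and count how often it intersects a schematic rectangular capture zone. All dimensions are illustrative; the study derives capture zones from a groundwater flow and transport model rather than a fixed rectangle:

        import numpy as np

        rng = np.random.default_rng(42)
        lot_x, lot_y = 60.0, 60.0            # lot dimensions (m), illustrative
        capture_len, capture_w = 40.0, 10.0  # capture zone length/width upgradient of the well (m)
        field_x, field_y = 10.0, 5.0         # drainfield dimensions (m)

        n_trials = 100_000
        # Random lower-left corner of the drainfield within the lot.
        fx = rng.uniform(0, lot_x - field_x, n_trials)
        fy = rng.uniform(0, lot_y - field_y, n_trials)
        # Capture zone fixed as a rectangle in the centre strip of the lot (schematic).
        cz_x0, cz_x1 = 0.0, capture_len
        cz_y0, cz_y1 = lot_y / 2 - capture_w / 2, lot_y / 2 + capture_w / 2

        overlap = (fx < cz_x1) & (fx + field_x > cz_x0) & (fy < cz_y1) & (fy + field_y > cz_y0)
        print(f"probability of drainfield intersecting the capture zone ~ {overlap.mean():.2f}")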

  2. On defense strategies for system of systems using aggregated correlations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S.; Imam, Neena; Ma, Chris Y. T.

    2017-04-01

    We consider a System of Systems (SoS) wherein each system Si, i = 1, 2, ..., N, is composed of discrete cyber and physical components that can be attacked and reinforced. We characterize the disruptions using aggregate failure correlation functions given by the conditional failure probability of the SoS given the failure of an individual system. We formulate the problem of ensuring the survival of the SoS as a game between an attacker and a provider, each with a utility function composed of a survival probability term and a cost term, both expressed in terms of the number of components attacked and reinforced. The survival probabilities of the systems satisfy simple product-form, first-order differential conditions, which simplify the Nash Equilibrium (NE) conditions. We derive sensitivity functions that highlight the dependence of the SoS survival probability at NE on the cost terms, correlation functions, and individual system survival probabilities. We apply these results to a simplified model of a distributed cloud computing infrastructure.

  3. Detecting background changes in environments with dynamic foreground by separating probability distribution function mixtures using Pearson's method of moments

    NASA Astrophysics Data System (ADS)

    Jenkins, Colleen; Jordan, Jay; Carlson, Jeff

    2007-02-01

    This paper presents parameter estimation techniques useful for detecting background changes in a video sequence with extreme foreground activity. A specific application of interest is automated detection of the covert placement of threats (e.g., a briefcase bomb) inside crowded public facilities. We propose that a histogram of pixel intensity acquired from a fixed-mounted camera over time for a series of images will be a mixture of two Gaussian functions: the foreground probability distribution function and the background probability distribution function. We use Pearson's method of moments to separate the two probability distribution functions. The background function can then be "remembered", and subsequent comparisons of background estimates are used to detect changes in the background. Changes are flagged to alert security forces to the presence and location of potential threats. Results are presented that indicate the significant potential for robust parameter estimation techniques as applied to video surveillance.
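
    The central computational step above is splitting a pixel-intensity histogram into a background and a foreground Gaussian. The paper uses Pearson's method of moments; the sketch below swaps in expectation-maximization (scikit-learn's GaussianMixture), named plainly as a substitute technique, and runs it on synthetic intensities to recover the two components. All numbers are illustrative.

```python
# Sketch: separate a two-Gaussian pixel-intensity mixture.
# The paper uses Pearson's method of moments; here EM (a swapped-in
# technique) is used for brevity. Intensities are synthetic.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
background = rng.normal(60, 8, size=8000)    # assumed background intensities
foreground = rng.normal(140, 20, size=2000)  # assumed foreground intensities
pixels = np.concatenate([background, foreground]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(pixels)
for w, mu, var in zip(gmm.weights_, gmm.means_.ravel(), gmm.covariances_.ravel()):
    print(f"weight={w:.2f}  mean={mu:.1f}  sigma={np.sqrt(var):.1f}")
# The dominant low-intensity component can be "remembered" as the background
# model and compared across frames to flag changes.
```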

  4. Stationary swarming motion of active Brownian particles in parabolic external potential

    NASA Astrophysics Data System (ADS)

    Zhu, Wei Qiu; Deng, Mao Lin

    2005-08-01

    We investigate the stationary swarming motion of active Brownian particles in a parabolic external potential, coupled to the swarm's mass center. Using Monte Carlo simulation we first show that the mass center approaches rest after a sufficiently long period of time. Thus, all the particles of a swarm have identical stationary motion relative to the mass center. The stationary probability density obtained in our previous paper, using the stochastic averaging method for quasi-integrable Hamiltonian systems applied to the motion in four-dimensional phase space of a single active Brownian particle with a Rayleigh friction model in a parabolic potential, is then used to describe the relative stationary motion of each particle of the swarm and to obtain further probability densities, including that of the total energy of the swarm. The analytical results are confirmed by comparison with simulation results and are also shown to be consistent with the existing deterministic exact steady-state solution.
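
    As a rough illustration of the swarm dynamics described above, the sketch below integrates active Brownian particles with a Rayleigh-type friction (alpha - beta*|v|^2)*v in a parabolic potential plus a harmonic coupling to the swarm's mass center, using an Euler-Maruyama scheme. The exact model and parameters of the paper are not reproduced; the friction form, coupling, and constants are assumptions chosen only to show the qualitative behavior reported above (mass center nearly at rest, particle speeds near sqrt(alpha/beta)).

```python
# Sketch: a swarm of active Brownian particles with Rayleigh-type friction
# (alpha - beta*|v|^2)*v in a parabolic external potential, plus a harmonic
# coupling to the swarm's mass center, integrated with Euler-Maruyama.
# Friction form, coupling, and all constants are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
N, dt, steps = 200, 2e-3, 50_000
alpha, beta = 1.0, 1.0          # active (Rayleigh-type) friction constants
omega2, k, D = 1.0, 1.0, 0.05   # external potential, CM coupling, noise strength

r = rng.normal(0.0, 1.0, size=(N, 2))
v = rng.normal(0.0, 1.0, size=(N, 2))

for _ in range(steps):
    r_cm = r.mean(axis=0)
    speed2 = np.sum(v * v, axis=1, keepdims=True)
    accel = (alpha - beta * speed2) * v - omega2 * r - k * (r - r_cm)
    v += accel * dt + np.sqrt(2.0 * D * dt) * rng.normal(size=v.shape)
    r += v * dt

print("mass-center speed   :", np.linalg.norm(v.mean(axis=0)))    # much smaller than 1
print("mean particle speed :", np.linalg.norm(v, axis=1).mean())  # near sqrt(alpha/beta) = 1
```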

  5. Fusion-fission Study at JAEA for Heavy-element Synthesis

    NASA Astrophysics Data System (ADS)

    Nishio, K.

    Fission fragment mass distributions were measured in heavy-ion induced fission using a 238U target nucleus. The mass distributions changed drastically with incident energy. The results are explained by a change in the ratio between fusion and quasifission with nuclear orientation. A calculation based on a fluctuation-dissipation model reproduced the mass distributions and their incident-energy dependence. The fusion probability was determined in the analysis. Evaporation residue cross sections were calculated with a statistical model for the reactions 30Si+238U and 34S+238U using the fusion probability obtained for the entrance channel. The results agree with the measured cross sections of 263,264Sg and 267,268Hs, produced by 30Si+238U and 34S+238U, respectively. It is also suggested that sub-barrier energies can be used for heavy-element synthesis.

  6. In-beam fission study at JAEA for heavy element synthesis

    NASA Astrophysics Data System (ADS)

    Nishio, K.; Ikezoe, H.; Hofmann, S.; Ackermann, D.; Aritomo, Y.; Comas, V. F.; Düllmann, Ch. E.; Heinz, S.; Heredia, J. A.; Heßberger, F. P.; Hirose, K.; Khuyagbaatar, J.; Kindler, B.; Kojouharov, I.; Lommel, B.; Makii, M.; Mann, R.; Mitsuoka, S.; Nishinaka, I.; Ohtsuki, T.; Saro, S.; Schädel, M.; Popeko, A. G.; Türler, A.; Wakabayashi, Y.; Watanabe, Y.; Yakushev, A.; Yeremin, A.

    2013-04-01

    Fission fragment mass distributions were measured in heavy-ion induced fission using a 238U target nucleus. The mass distributions changed drastically with incident energy. The results are explained by a change in the ratio between fusion and quasifission with nuclear orientation. A calculation based on a fluctuation-dissipation model reproduced the mass distributions and their incident-energy dependence. The fusion probability was determined in the analysis. Evaporation residue cross sections were calculated with a statistical model for the reactions 30Si+238U and 34S+238U using the fusion probability obtained for the entrance channel. The results agree with the measured cross sections of 263,264Sg and 267,268Hs, produced by 30Si+238U and 34S+238U, respectively. It is also suggested that sub-barrier energies can be used for heavy-element synthesis.

  7. Mass loss from interacting close binary systems

    NASA Technical Reports Server (NTRS)

    Plavec, M. J.

    1981-01-01

    The three well-defined classes of evolved binary systems that show evidence of present and/or past mass loss are the cataclysmic variables, the Algols, and the Wolf-Rayet stars. It is thought that the transformation of supergiant binary systems into the very short-period cataclysmic variables must have been a complex process. The new evidence, recently obtained from far-ultraviolet spectra, that a certain subclass of the Algols (the Serpentids) is undergoing fairly rapid evolution is discussed. It is thought probable that the remarkable mass outflow observed in them is connected with a strong wind powered by accretion. The origin of the circumbinary clouds or flat disks that probably surround many strongly interacting binaries is not clear. Attention is also given to binary systems with hot white dwarf or subdwarf components, such as the symbiotic objects and the BQ stars; it is noted that in these systems both components may be prone to an enhanced stellar wind.

  8. Thick Disks in the Hubble Space Telescope Frontier Fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elmegreen, Bruce G.; Elmegreen, Debra Meloy; Tompkins, Brittany

    Thick disk evolution is studied using edge-on galaxies in two Hubble Space Telescope Frontier Field Parallels. The galaxies were separated into 72 clumpy types and 35 spiral types with bulges. Perpendicular light profiles in the F435W, F606W, and F814W (B, V, and I) passbands were measured at 1 pixel intervals along the major axes and fitted to sech^2 functions convolved with the instrument line spread function (LSF). The LSF was determined from the average point spread function of ~20 stars in each passband and field, convolved with a line of uniform brightness to simulate disk blurring. A spread function for a clumpy disk was also used for comparison. The resulting scale heights were found to be proportional to galactic mass, with the average height for a 10^(10±0.5) M_⊙ galaxy at z = 2 ± 0.5 equal to 0.63 ± 0.24 kpc. This value is probably the result of a blend between thin and thick disk components that cannot be resolved. Evidence for such two-component structure is present in an inverse correlation between height and midplane surface brightness. Models suggest that the thick disk is observed best between the clumps, and there the average scale height is 1.06 ± 0.43 kpc for the same mass and redshift. A 0.63 ± 0.68 mag V − I color differential with height is also evidence for a mixture of thin and thick components.
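
    The profile-fitting step above can be sketched with a simple least-squares fit of a sech^2 vertical light profile. The convolution with the instrument line spread function used in the paper is omitted here for brevity, and the profile data are synthetic; only the functional form is taken from the abstract.

```python
# Sketch: fit a sech^2(z / z0) vertical light profile to a synthetic
# perpendicular profile with scipy. The LSF convolution used in the
# paper is omitted; the data and noise level are assumptions.
import numpy as np
from scipy.optimize import curve_fit

def sech2_profile(z, i0, z0):
    """Isothermal-disk surface brightness: I(z) = I0 * sech^2(z / z0)."""
    return i0 / np.cosh(z / z0) ** 2

rng = np.random.default_rng(3)
z = np.linspace(-4.0, 4.0, 81)                  # height in kpc (assumed grid)
truth = sech2_profile(z, 100.0, 1.0)
data = truth + rng.normal(0, 2.0, size=z.size)  # synthetic noisy profile

popt, pcov = curve_fit(sech2_profile, z, data, p0=[80.0, 0.5])
i0_fit, z0_fit = popt
print(f"fitted scale height z0 = {z0_fit:.2f} kpc (input 1.00 kpc)")
```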

  9. STAR FORMATION IN TURBULENT MOLECULAR CLOUDS WITH COLLIDING FLOW

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsumoto, Tomoaki; Dobashi, Kazuhito; Shimoikura, Tomomi, E-mail: matsu@hosei.ac.jp

    2015-03-10

    Using self-gravitational hydrodynamical numerical simulations, we investigated the evolution of high-density turbulent molecular clouds swept by a colliding flow. The interaction of shock waves due to turbulence produces networks of thin filamentary clouds with a sub-parsec width. The colliding flow accumulates the filamentary clouds into a sheet cloud and promotes active star formation for initially high-density clouds. Clouds with a colliding flow exhibit a finer filamentary network than clouds without a colliding flow. The probability distribution functions (PDFs) for the density and column density can be fitted by lognormal functions for clouds without colliding flow. When the initial turbulence is weak, the column density PDF has a power-law wing at high column densities. The colliding flow considerably deforms the PDF, such that the PDF exhibits a double peak. The stellar mass distributions reproduced here are consistent with the classical initial mass function with a power-law index of –1.35 when the initial clouds have a high density. The distribution of stellar velocities agrees with the gas velocity distribution, which can be fitted by Gaussian functions for clouds without colliding flow. For clouds with colliding flow, the velocity dispersion of gas tends to be larger than the stellar velocity dispersion. The signatures of colliding flows and turbulence appear in channel maps reconstructed from the simulation data. Clouds without colliding flow exhibit a cloud-scale velocity shear due to the turbulence. In contrast, clouds with colliding flow show a prominent anti-correlated distribution of thin filaments between the different velocity channels, suggesting collisions between the filamentary clouds.
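
    The lognormal-PDF characterization mentioned above can be reproduced in miniature by fitting a lognormal distribution to density samples with scipy; the samples below are synthetic stand-ins for the simulation output.

```python
# Sketch: fit a lognormal probability distribution function to density
# samples, the form found for clouds without a colliding flow.
# The samples below are synthetic, not the simulation data.
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(7)
density = rng.lognormal(mean=2.0, sigma=0.8, size=50_000)  # synthetic densities

shape, loc, scale = lognorm.fit(density, floc=0.0)
print(f"fitted sigma = {shape:.2f}, median density = {scale:.1f}")  # expect ~0.8 and ~e^2
```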

  10. Journal: A Review of Some Tracer-Test Design Equations for ...

    EPA Pesticide Factsheets

    Determination of the necessary tracer mass, the initial sample-collection time, and the subsequent sample-collection frequency are the three most difficult aspects of a proposed tracer test to estimate prior to conducting the test. To facilitate tracer-mass estimation, 33 mass-estimation equations are reviewed here, 32 of which were evaluated using previously published tracer-test design examination parameters. Comparison of the results produced a wide range of estimated tracer mass, but no means is available by which one equation may be reasonably selected over the others. Each equation produces a simple approximation for tracer mass. Most of the equations are based primarily on estimates or measurements of discharge, transport distance, and suspected transport times. Although the basic field parameters commonly employed are appropriate for estimating tracer mass, the 33 equations are problematic in that they were all probably based on the original developers' experience in a particular field area and not necessarily on measured hydraulic parameters or solute-transport theory. Suggested sampling frequencies are typically based primarily on probable transport distance, but with little regard to expected travel times. This too is problematic in that it tends to result in false negatives or data aliasing. Simulations from the recently developed efficient hydrologic tracer-test design methodology (EHTD) were compared with those obtained from 32 of the 33 published tracer-

  11. Exposure to mass media health information, skin cancer beliefs, and sun protection behaviors in a United States probability sample.

    PubMed

    Hay, Jennifer; Coups, Elliot J; Ford, Jennifer; DiBonaventura, Marco

    2009-11-01

    The mass media is increasingly important in shaping a range of health beliefs and behaviors. We examined the association among mass media health information exposure (general health, cancer, sun protection information), skin cancer beliefs, and sun protection behaviors. We used a general population national probability sample comprised of 1633 individuals with no skin cancer history (Health Information National Trends Survey, 2005, National Cancer Institute) and examined univariate and multivariate associations among family history of skin cancer, mass media exposure, skin cancer beliefs, and sun protection (use of sunscreen, shade seeking, and use of sun-protective clothing). Mass media exposure was higher in younger individuals, and among those who were white and more highly educated. More accurate skin cancer beliefs and more adherent sun protection practices were reported by older individuals, and among those who were white and more highly educated. Recent Internet searches for health or sun protection information were associated with sunscreen use. Study limitations include the self-report nature of sun protection behaviors and cross-sectional study design. We identify demographic differences in mass media health exposure, skin cancer beliefs, and sun protection behaviors that will contribute to planning skin cancer awareness and prevention messaging across diverse population subgroups.

  12. A Tomographic Method for the Reconstruction of Local Probability Density Functions

    NASA Technical Reports Server (NTRS)

    Sivathanu, Y. R.; Gore, J. P.

    1993-01-01

    A method of obtaining the probability density function (PDF) of local properties from path-integrated measurements is described. The approach uses a discrete probability function (DPF) method to infer the PDF of the local extinction coefficient from measurements of the PDFs of the path-integrated transmittance. The local PDFs obtained using the method are compared with those obtained from direct intrusive measurements in propylene/air and ethylene/air diffusion flames, and the agreement is good.

  13. Nonstationary envelope process and first excursion probability.

    NASA Technical Reports Server (NTRS)

    Yang, J.-N.

    1972-01-01

    The definition of a stationary random envelope proposed by Cramer and Leadbetter is extended to the envelope of nonstationary random processes possessing evolutionary power spectral densities. The density function, the joint density function, the moment function, and the level-crossing rate of the nonstationary envelope process are derived. Based on the envelope statistics, approximate solutions to the first excursion probability of nonstationary random processes are obtained. In particular, applications of the first excursion probability to earthquake engineering problems are demonstrated in detail.

  14. Properties of the probability density function of the non-central chi-squared distribution

    NASA Astrophysics Data System (ADS)

    András, Szilárd; Baricz, Árpád

    2008-10-01

    In this paper we consider the probability density function (pdf) of a non-central chi-squared distribution with an arbitrary number of degrees of freedom. For this function we prove that it can be represented as a finite sum, and we deduce a partial derivative formula. Moreover, we show that the pdf is log-concave when the number of degrees of freedom is greater than or equal to 2. At the end of this paper we present some Turán-type inequalities for this function, and an elegant application of the monotone form of l'Hospital's rule in probability theory is given.
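
    The log-concavity claim can be checked numerically on a grid using scipy's non-central chi-squared distribution: for a log-concave density the second differences of the log pdf are non-positive. This is only an illustration of the abstract's statement, not its proof; the grid and non-centrality values are arbitrary test choices.

```python
# Sketch: numerically check log-concavity of the non-central chi-squared
# pdf for degrees of freedom >= 2, as claimed in the abstract.
import numpy as np
from scipy.stats import ncx2

x = np.linspace(0.05, 40, 2000)
for df in (2, 3, 5, 10):
    for nc in (0.5, 2.0, 10.0):   # non-centrality parameters (assumed test values)
        logpdf = ncx2.logpdf(x, df, nc)
        second_diff = np.diff(logpdf, n=2)
        assert np.all(second_diff <= 1e-6), (df, nc)
print("log pdf is concave on the test grid for all (df, nc) checked")
```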

  15. Exact calculation of loop formation probability identifies folding motifs in RNA secondary structures

    PubMed Central

    Sloma, Michael F.; Mathews, David H.

    2016-01-01

    RNA secondary structure prediction is widely used to analyze RNA sequences. In an RNA partition function calculation, free energy nearest neighbor parameters are used in a dynamic programming algorithm to estimate statistical properties of the secondary structure ensemble. Previously, partition functions have largely been used to estimate the probability that a given pair of nucleotides form a base pair, the conditional stacking probability, the accessibility to binding of a continuous stretch of nucleotides, or a representative sample of RNA structures. Here it is demonstrated that an RNA partition function can also be used to calculate the exact probability of formation of hairpin loops, internal loops, bulge loops, or multibranch loops at a given position. This calculation can also be used to estimate the probability of formation of specific helices. Benchmarking on a set of RNA sequences with known secondary structures indicated that loops that were calculated to be more probable were more likely to be present in the known structure than less probable loops. Furthermore, highly probable loops are more likely to be in the known structure than the set of loops predicted in the lowest free energy structures. PMID:27852924

  16. THE GALACTIC CENTER CLOUD G2-A YOUNG LOW-MASS STAR WITH A STELLAR WIND

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scoville, N.; Burkert, A.

    2013-05-10

    We explore the possibility that the G2 gas cloud falling in toward SgrA* is the mass-loss envelope of a young T Tauri star. As the star plunges to smaller radius at 1000-6000 km s^-1, a strong bow shock forms where the stellar wind is impacted by the hot X-ray emitting gas in the vicinity of SgrA*. For a stellar mass-loss rate of 4 × 10^-8 M_⊙ yr^-1 and wind velocity 100 km s^-1, the bow shock will have an emission measure (EM = n^2 vol) at a distance ~10^16 cm similar to that inferred from the IR emission lines. The ionization of the dense bow shock gas is potentially provided by collisional ionization at the shock front and cooling radiation (X-ray and UV) from the post-shock gas. The former would predict a constant line flux as a function of distance from SgrA*, while the latter will have increasing emission at lesser distances. In this model, the star and its mass-loss wind should survive pericenter passage since the wind is likely launched at 0.2 AU and this is much less than the Roche radius at pericenter (~3 AU for a stellar mass of 2 M_⊙). In this model, the emission cloud will probably survive pericenter passage, discriminating this scenario from others.

  17. Characteristics of vertical air motion in isolated convective clouds

    DOE PAGES

    Yang, Jing; Wang, Zhien; Heymsfield, Andrew J.; ...

    2016-08-11

    The vertical velocity and air mass flux in isolated convective clouds are statistically analyzed using aircraft in situ data collected from three field campaigns: High-Plains Cumulus (HiCu) conducted over the midlatitude High Plains, the COnvective Precipitation Experiment (COPE) conducted in a midlatitude coastal area, and Ice in Clouds Experiment-Tropical (ICE-T) conducted over a tropical ocean. The results show that small-scale updrafts and downdrafts (< 500 m in diameter) are frequently observed in the three field campaigns, and they make important contributions to the total air mass flux. The probability density functions (PDFs) and profiles of the observed vertical velocity are provided. The PDFs are exponentially distributed. The updrafts generally strengthen with height. Relatively strong updrafts (> 20 m s^-1) were sampled in COPE and ICE-T. The observed downdrafts are stronger in HiCu and COPE than in ICE-T. The PDFs of the air mass flux are exponentially distributed as well. The observed maximum air mass flux in updrafts is of the order of 10^4 kg m^-1 s^-1. The observed air mass flux in the downdrafts is typically a few times smaller in magnitude than that in the updrafts. Since this study only deals with isolated convective clouds, and there are many limitations and sampling issues in aircraft in situ measurements, more observations are needed to better explore the vertical air motion in convective clouds.
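
    The exponential shape of the vertical-velocity PDFs reported above can be illustrated by fitting an exponential distribution to updraft speeds with scipy; the speeds below are synthetic, not the aircraft data.

```python
# Sketch: fit an exponential distribution to updraft speeds, the PDF shape
# reported for the vertical-velocity statistics above. Data are synthetic.
import numpy as np
from scipy.stats import expon

rng = np.random.default_rng(8)
updraft = rng.exponential(scale=3.0, size=20_000)   # synthetic updraft speeds (m/s)

loc, scale = expon.fit(updraft, floc=0.0)
print(f"fitted mean updraft = {scale:.2f} m/s")      # expect ~3.0
```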

  18. Effects of Lewis number on the statistics of the invariants of the velocity gradient tensor and local flow topologies in turbulent premixed flames

    NASA Astrophysics Data System (ADS)

    Wacks, Daniel; Konstantinou, Ilias; Chakraborty, Nilanjan

    2018-04-01

    The behaviours of the three invariants of the velocity gradient tensor and the resultant local flow topologies in turbulent premixed flames have been analysed using three-dimensional direct numerical simulation data for different values of the characteristic Lewis number ranging from 0.34 to 1.2. The results have been analysed to reveal the statistical behaviours of the invariants and the flow topologies conditional upon the reaction progress variable. The behaviours of the invariants have been explained in terms of the relative strengths of the thermal and mass diffusions, embodied by the influence of the Lewis number on turbulent premixed combustion. Similarly, the behaviours of the flow topologies have been explained in terms not only of the Lewis number but also of the likelihood of the occurrence of individual flow topologies in the different flame regions. Furthermore, the sensitivity of the joint probability density function of the second and third invariants and the joint probability density functions of the mean and Gaussian curvatures to the variation in Lewis number have similarly been examined. Finally, the dependences of the scalar-turbulence interaction term on augmented heat release and of the vortex-stretching term on flame-induced turbulence have been explained in terms of the Lewis number, flow topology and reaction progress variable.
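
    For readers unfamiliar with the quantities involved, the three invariants of a velocity gradient tensor A follow from its characteristic polynomial lambda^3 + P*lambda^2 + Q*lambda + R = 0, with P = -tr(A), Q = (P^2 - tr(A·A))/2, and R = -det(A). The sketch below evaluates them for a random 3×3 tensor (a stand-in for DNS data, not the paper's flames) and checks them against the eigenvalues.

```python
# Sketch: first, second, and third invariants (P, Q, R) of a velocity
# gradient tensor A = du_i/dx_j, from its characteristic polynomial
# lambda^3 + P*lambda^2 + Q*lambda + R = 0. A is random here, not DNS data.
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(3, 3))          # stand-in for a local velocity gradient

P = -np.trace(A)
Q = 0.5 * (P**2 - np.trace(A @ A))
R = -np.linalg.det(A)
print(f"P = {P:.3f}, Q = {Q:.3f}, R = {R:.3f}")

# Consistency check: the eigenvalues of A must satisfy the polynomial.
for lam in np.linalg.eigvals(A):
    assert abs(lam**3 + P * lam**2 + Q * lam + R) < 1e-10
```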

  19. Effects of Lewis number on the statistics of the invariants of the velocity gradient tensor and local flow topologies in turbulent premixed flames

    PubMed Central

    Konstantinou, Ilias; Chakraborty, Nilanjan

    2018-01-01

    The behaviours of the three invariants of the velocity gradient tensor and the resultant local flow topologies in turbulent premixed flames have been analysed using three-dimensional direct numerical simulation data for different values of the characteristic Lewis number ranging from 0.34 to 1.2. The results have been analysed to reveal the statistical behaviours of the invariants and the flow topologies conditional upon the reaction progress variable. The behaviours of the invariants have been explained in terms of the relative strengths of the thermal and mass diffusions, embodied by the influence of the Lewis number on turbulent premixed combustion. Similarly, the behaviours of the flow topologies have been explained in terms not only of the Lewis number but also of the likelihood of the occurrence of individual flow topologies in the different flame regions. Furthermore, the sensitivity of the joint probability density function of the second and third invariants and the joint probability density functions of the mean and Gaussian curvatures to the variation in Lewis number have similarly been examined. Finally, the dependences of the scalar-turbulence interaction term on augmented heat release and of the vortex-stretching term on flame-induced turbulence have been explained in terms of the Lewis number, flow topology and reaction progress variable. PMID:29740257

  20. Chinese lacto-vegetarian diet exerts favorable effects on metabolic parameters, intima-media thickness, and cardiovascular risks in healthy men.

    PubMed

    Yang, Shu-Yu; Li, Xue-Jun; Zhang, Wei; Liu, Chang-Qin; Zhang, Hui-Jie; Lin, Jin-Rong; Yan, Bing; Yu, Ya-Xin; Shi, Xiu-Lin; Li, Can-Dong; Li, Wei-Hua

    2012-06-01

    To investigate whether the Chinese lacto-vegetarian diet has protective effects on metabolic and cardiovascular disease (CVD). One hundred sixty-nine healthy Chinese lacto-vegetarians and 126 healthy omnivore men aged 21-76 years were enrolled. Anthropometric indexes, lipid profile, insulin sensitivity, pancreatic β cell function, and intima-media thickness (IMT) of carotid arteries were assessed and compared. Cardiovascular risk points and probability of developing CVD in 5-10 years in participants aged 24-55 years were calculated. Compared with omnivores, lacto-vegetarians had remarkably lower body mass index, systolic and diastolic blood pressure, and serum levels of triglyceride, total cholesterol, low-density lipoprotein cholesterol, apolipoprotein B, γ-glutamyl transferase, serum creatinine, uric acid, fasting blood glucose, as well as lower total cholesterol/high-density lipoprotein cholesterol ratio. Vegetarians also had higher homeostasis model assessment β cell function and insulin secretion index and thinner carotid IMT than the omnivores did. These results corresponded with lower cardiovascular risk points and probability of developing CVD in 5-10 years in vegetarians 24-55 years old. In healthy Chinese men, the lacto-vegetarian diet seems to exert protective effects on blood pressure, lipid profiles, and metabolic parameters and results in significantly lower carotid IMT. Lower CVD risks found in vegetarians also reflect the beneficial effect of the Chinese lacto-vegetarian diet.

  1. The Impact of Nuclear Reaction Rate Uncertainties on the Evolution of Core-collapse Supernova Progenitors

    NASA Astrophysics Data System (ADS)

    Fields, C. E.; Timmes, F. X.; Farmer, R.; Petermann, I.; Wolf, William M.; Couch, S. M.

    2018-02-01

    We explore properties of core-collapse supernova progenitors with respect to the composite uncertainties in the thermonuclear reaction rates by coupling the probability density functions of the reaction rates provided by the STARLIB reaction rate library with MESA stellar models. We evolve 1000 models of 15 M_⊙ from the pre-main sequence to core O-depletion at solar and subsolar metallicities for a total of 2000 Monte Carlo stellar models. For each stellar model, we independently and simultaneously sample 665 thermonuclear reaction rates and use them in a MESA in situ reaction network that follows 127 isotopes from 1H to 64Zn. With this framework we survey the core mass, burning lifetime, composition, and structural properties at five different evolutionary epochs. At each epoch we measure the probability distribution function of the variations of each property and calculate Spearman rank-order correlation coefficients for each sampled reaction rate to identify which reaction rate has the largest impact on the variations of each property. We find that uncertainties in the reaction rates of 14N(p,γ)15O, triple-α, 12C(α,γ)16O, 12C(12C,p)23Na, 12C(16O,p)27Al, 16O(16O,n)31S, 16O(16O,p)31P, and 16O(16O,α)28Si dominate the variations of the properties surveyed. We find that variations induced by uncertainties in nuclear reaction rates grow with each passing phase of evolution, and at core H- and He-depletion they are of comparable magnitude to the variations induced by choices of mass resolution and network resolution. However, at core C-, Ne-, and O-depletion, the reaction rate uncertainties can dominate the variation, causing uncertainty in various properties of the stellar model in the evolution toward iron core-collapse.
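
    The Spearman rank-order step described above can be sketched with scipy: sample a reaction-rate multiplier per model, map it through a toy monotonic response standing in for a stellar-model property, and rank-correlate. The response function and noise level are assumptions; only the statistical procedure mirrors the abstract.

```python
# Sketch: Spearman rank-order correlation between a Monte Carlo-sampled
# reaction-rate factor and a resulting stellar-model property.
# The "property" here is a toy monotonic response plus noise, not MESA output.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(5)
n_models = 1000
rate_factor = rng.lognormal(mean=0.0, sigma=0.3, size=n_models)   # sampled rate multipliers
core_mass = 1.5 + 0.1 * np.log(rate_factor) + rng.normal(0, 0.02, size=n_models)

rho, pvalue = spearmanr(rate_factor, core_mass)
print(f"Spearman rho = {rho:.2f} (p = {pvalue:.1e})")
```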

  2. A faint type of supernova from a white dwarf with a helium-rich companion.

    PubMed

    Perets, H B; Gal-Yam, A; Mazzali, P A; Arnett, D; Kagan, D; Filippenko, A V; Li, W; Arcavi, I; Cenko, S B; Fox, D B; Leonard, D C; Moon, D-S; Sand, D J; Soderberg, A M; Anderson, J P; James, P A; Foley, R J; Ganeshalingam, M; Ofek, E O; Bildsten, L; Nelemans, G; Shen, K J; Weinberg, N N; Metzger, B D; Piro, A L; Quataert, E; Kiewe, M; Poznanski, D

    2010-05-20

    Supernovae are thought to arise from two different physical processes. The cores of massive, short-lived stars undergo gravitational core collapse and typically eject a few solar masses during their explosion. These are thought to appear as type Ib/c and type II supernovae, and are associated with young stellar populations. In contrast, the thermonuclear detonation of a carbon-oxygen white dwarf, whose mass approaches the Chandrasekhar limit, is thought to produce type Ia supernovae. Such supernovae are observed in both young and old stellar environments. Here we report a faint type Ib supernova, SN 2005E, in the halo of the nearby isolated galaxy NGC 1032. The 'old' environment near the supernova location, and the very low derived ejected mass (approximately 0.3 solar masses), argue strongly against a core-collapse origin. Spectroscopic observations and analysis reveal high ejecta velocities, dominated by helium-burning products, probably excluding this as a subluminous or a regular type Ia supernova. We conclude that it arises from a low-mass, old progenitor, likely to have been a helium-accreting white dwarf in a binary. The ejecta contain more calcium than observed in other types of supernovae and probably large amounts of radioactive 44Ti.

  3. zCOSMOS - 10k-bright spectroscopic sample. The bimodality in the galaxy stellar mass function: exploring its evolution with redshift

    NASA Astrophysics Data System (ADS)

    Pozzetti, L.; Bolzonella, M.; Zucca, E.; Zamorani, G.; Lilly, S.; Renzini, A.; Moresco, M.; Mignoli, M.; Cassata, P.; Tasca, L.; Lamareille, F.; Maier, C.; Meneux, B.; Halliday, C.; Oesch, P.; Vergani, D.; Caputi, K.; Kovač, K.; Cimatti, A.; Cucciati, O.; Iovino, A.; Peng, Y.; Carollo, M.; Contini, T.; Kneib, J.-P.; Le Févre, O.; Mainieri, V.; Scodeggio, M.; Bardelli, S.; Bongiorno, A.; Coppa, G.; de la Torre, S.; de Ravel, L.; Franzetti, P.; Garilli, B.; Kampczyk, P.; Knobel, C.; Le Borgne, J.-F.; Le Brun, V.; Pellò, R.; Perez Montero, E.; Ricciardelli, E.; Silverman, J. D.; Tanaka, M.; Tresse, L.; Abbas, U.; Bottini, D.; Cappi, A.; Guzzo, L.; Koekemoer, A. M.; Leauthaud, A.; Maccagni, D.; Marinoni, C.; McCracken, H. J.; Memeo, P.; Porciani, C.; Scaramella, R.; Scarlata, C.; Scoville, N.

    2010-11-01

    We present the galaxy stellar mass function (GSMF) to redshift z ≃ 1, based on the analysis of about 8500 galaxies with I < 22.5 (AB mag) over 1.4 deg2, which are part of the zCOSMOS-bright 10k spectroscopic sample. We investigate the total GSMF, as well as the contributions of early- and late-type galaxies (ETGs and LTGs, respectively), defined by different criteria (broad-band spectral energy distribution, morphology, spectral properties, or star formation activities). We unveil a galaxy bimodality in the global GSMF, whose shape is more accurately represented by 2 Schechter functions, one linked to the ETG and the other to the LTG populations. For the global population, we confirm a mass-dependent evolution (“mass-assembly downsizing”), i.e., galaxy number density increases with cosmic time by a factor of two between z = 1 and z = 0 for intermediate-to-low mass (log (ℳ/ℳ⊙) ~ 10.5) galaxies but less than 15% for log(ℳ/ℳ⊙) > 11. We find that the GSMF evolution at intermediate-to-low values of ℳ (log (ℳ/ℳ⊙) < 10.6) is mostly explained by the growth in stellar mass driven by smoothly decreasing star formation activities, despite the redder colours predicted in particular at low redshift. The low residual evolution is consistent, on average, with ~0.16 merger per galaxy per Gyr (of which fewer than 0.1 are major), with a hint of a decrease with cosmic time but not a clear dependence on the mass. From the analysis of different galaxy types, we find that ETGs, regardless of the classification method, increase in number density with cosmic time more rapidly with decreasing M, i.e., follow a top-down building history, with a median “building redshift” increasing with mass (z > 1 for log(ℳ/ℳ⊙) > 11), in contrast to hierarchical model predictions. For LTGs, we find that the number density of blue or spiral galaxies with log(ℳ/ℳ⊙) > 10 remains almost constant with cosmic time from z ~ 1. Instead, the most extreme population of star-forming galaxies (with high specific star formation), at intermediate/high-mass, rapidly decreases in number density with cosmic time. Our data can be interpreted as a combination of different effects. Firstly, we suggest a transformation, driven mainly by SFH, from blue, active, spiral galaxies of intermediate mass to blue quiescent and subsequently (1-2 Gyr after) red, passive types of low specific star formation. We find an indication that the complete morphological transformation, probably driven by dynamical processes, into red spheroidal galaxies, occurred on longer timescales or followed after 1-2 Gyr. A continuous replacement of blue galaxies is expected to be accomplished by low-mass active spirals increasing their stellar mass. We estimate the growth rate in number and mass density of the red galaxies at different redshifts and masses. The corresponding fraction of blue galaxies that, at any given time, is transforming into red galaxies per Gyr, due to the quenching of their SFR, is on average ~25% for log(ℳ/ℳ⊙) < 11. We conclude that the build-up of galaxies and in particular of ETGs follows the same downsizing trend with mass (i.e. occurs earlier for high-mass galaxies) as the formation of their stars and follows the converse of the trend predicted by current SAMs. In this scenario, we expect there to be a negligible evolution of the galaxy baryonic mass function (GBMF) for the global population at all masses and a decrease with cosmic time in the GBMF for the blue galaxy population at intermediate-high masses. 
Based on data obtained with the European Southern Observatory Very Large Telescope, Paranal, Chile, program 175.A-0839.
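
    The double Schechter form used above to describe the bimodal GSMF can be written down compactly in log-mass units. The sketch below evaluates that functional form; the parameter values are placeholders, not the zCOSMOS-10k fit.

```python
# Sketch: a double Schechter galaxy stellar mass function in log-mass form,
# the shape the abstract uses to represent the bimodal GSMF.
# Parameter values below are placeholders, not the zCOSMOS-10k fit.
import numpy as np

def double_schechter(log_m, log_mstar, phi1, alpha1, phi2, alpha2):
    """Number density per dex: ln(10)*exp(-x)*[phi1*x^(a1+1) + phi2*x^(a2+1)], x = M/M*."""
    x = 10.0 ** (log_m - log_mstar)
    return np.log(10) * np.exp(-x) * (phi1 * x ** (alpha1 + 1) + phi2 * x ** (alpha2 + 1))

log_m = np.linspace(9.0, 12.0, 7)
phi = double_schechter(log_m, log_mstar=10.8, phi1=1e-3, alpha1=-0.5,
                       phi2=5e-4, alpha2=-1.5)
for lm, p in zip(log_m, phi):
    print(f"log(M/Msun) = {lm:4.1f}   Phi = {p:.2e}  Mpc^-3 dex^-1")
```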

  4. A wave function for stock market returns

    NASA Astrophysics Data System (ADS)

    Ataullah, Ali; Davidson, Ian; Tippett, Mark

    2009-02-01

    The instantaneous return on the Financial Times-Stock Exchange (FTSE) All Share Index is viewed as a frictionless particle moving in a one-dimensional square well, but where there is a non-trivial probability of the particle tunneling into the well's retaining walls. Our analysis demonstrates how the complementarity principle from quantum mechanics applies to stock market prices and how the wave function it provides leads to a probability density that exhibits strong compatibility with returns earned on the FTSE All Share Index. In particular, our analysis shows that the probability density for stock market returns is highly leptokurtic with slight (though not significant) negative skewness. Moreover, the moments of the probability density determined under the complementarity principle employed here are all convergent, in contrast to many of the probability density functions on which the received theory of finance is based.

  5. A Pivotal Study of Optoacoustic Imaging to Diagnose Benign and Malignant Breast Masses: A New Evaluation Tool for Radiologists.

    PubMed

    Neuschler, Erin I; Butler, Reni; Young, Catherine A; Barke, Lora D; Bertrand, Margaret L; Böhm-Vélez, Marcela; Destounis, Stamatia; Donlan, Pamela; Grobmyer, Stephen R; Katzen, Janine; Kist, Kenneth A; Lavin, Philip T; Makariou, Erini V; Parris, Tchaiko M; Schilling, Kathy J; Tucker, F Lee; Dogan, Basak E

    2018-05-01

    Purpose To compare the diagnostic utility of an investigational optoacoustic imaging device that fuses laser optical imaging (OA) with grayscale ultrasonography (US) to grayscale US alone in differentiating benign and malignant breast masses. Materials and Methods This prospective, 16-site study of 2105 women (study period: 12/21/2012 to 9/9/2015) compared Breast Imaging Reporting and Data System (BI-RADS) categories assigned by seven blinded independent readers to benign and malignant breast masses using OA/US versus US alone. BI-RADS 3, 4, or 5 masses assessed at diagnostic US with biopsy-proven histologic findings and BI-RADS 3 masses stable at 12 months were eligible. Independent readers reviewed US images obtained with the OA/US device, assigned a probability of malignancy (POM) and BI-RADS category, and locked results. The same independent readers then reviewed OA/US images, scored OA features, and assigned OA/US POM and a BI-RADS category. Specificity and sensitivity were calculated for US and OA/US. Benign and malignant mass upgrade and downgrade rates, positive and negative predictive values, and positive and negative likelihood ratios were compared. Results Of 2105 consented subjects with 2191 masses, 100 subjects (103 masses) were analyzed separately as a training population and excluded. An additional 202 subjects (210 masses) were excluded due to technical failures or incomplete imaging, 72 subjects (78 masses) due to protocol deviations, and 41 subjects (43 masses) due to high-risk histologic results. Of 1690 subjects with 1757 masses (1079 [61.4%] benign and 678 [38.6%] malignant masses), OA/US downgraded 40.8% (3078/7535) of benign mass reads, with a specificity of 43.0% (3242/7538, 99% confidence interval [CI]: 40.4%, 45.7%) for OA/US versus 28.1% (2120/7543, 99% CI: 25.8%, 30.5%) for the internal US of the OA/US device. OA/US exceeded US in specificity by 14.9% (P < .0001; 99% CI: 12.9, 16.9%). Sensitivity for biopsied malignant masses was 96.0% (4553/4745, 99% CI: 94.5%, 97.0%) for OA/US and 98.6% (4680/4746, 99% CI: 97.8%, 99.1%) for US (P < .0001). The negative likelihood ratio of 0.094 for OA/US indicates a negative examination can reduce a maximum US-assigned pretest probability of 17.8% (low BI-RADS 4B) to a posttest probability of 2% (BI-RADS 3). Conclusion OA/US increases the specificity of breast mass assessment compared with the device internal grayscale US alone. Online supplemental material is available for this article. © RSNA, 2017.
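
    The likelihood-ratio arithmetic quoted at the end of the abstract follows from the odds form of Bayes' theorem and can be reproduced directly; the sketch below uses the figures given above (pretest probability 17.8%, negative likelihood ratio 0.094).

```python
# Sketch: post-test probability from a pretest probability and a likelihood
# ratio (odds form of Bayes' theorem), using the figures quoted above.
def post_test_probability(pretest_prob, likelihood_ratio):
    pre_odds = pretest_prob / (1.0 - pretest_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

p = post_test_probability(pretest_prob=0.178, likelihood_ratio=0.094)
print(f"post-test probability = {p:.1%}")   # ~2%, i.e. consistent with BI-RADS 3
```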

  6. A mass census of the nearby universe with the RESOLVE survey

    NASA Astrophysics Data System (ADS)

    Eckert, Kathleen

    The galaxy mass function, i.e., the distribution of galaxies as a function of mass, is a useful way to characterize the galaxy population. In this work, we examine the stellar and baryonic mass function, and the velocity function of galaxies and galaxy groups for two volume-limited surveys of the nearby universe. Stellar masses are estimated from multi-band photometry, and we add cold atomic gas from measurements and a newly calibrated estimator to obtain baryonic mass. Velocities are measured from the internal motions of galaxies and groups and account for all matter within the system. We compare our observed mass and velocity functions with the halo mass function from theoretical simulations of dark matter, which predict a much more steeply rising low-mass slope than is normally observed for the galaxy mass function. We show that taking into account the cold gas mass, which dominates the directly detectable mass of low-mass galaxies, steepens the low-mass slope of the galaxy mass function. The low-mass slope of the baryonic mass function, however, is still much shallower than that of the halo mass function. The discrepancy in low-mass slope persists when examining the velocity function, which accounts for all matter in galaxies (detectable or not), suggesting that some mechanism must reduce the mass in halos or destroy them completely. We investigate the role of environment by performing group finding and examining the mass and velocity functions as a function of group halo mass. Broken down by halo mass regime, we find dips and varying low-mass slopes in the mass and velocity functions, suggesting that group formation processes such as merging and stripping, which destroy and lower the mass of low-mass satellites respectively, potentially contribute to the discrepancy in low-mass slope. In particular, we focus on the nascent group regime, groups of mass 10^(11.4-12) solar masses with few members, which has a depressed and flat low-mass slope in the galaxy mass and velocity function. We find that nascent groups are at the peak baryonic collapse efficiency (group-integrated cold baryonic mass divided by the group halo mass), while isolated dwarfs in lower mass halos are rapidly growing in their collapsed baryonic mass and larger groups are increasingly dominated by their hot halo gas. Scatter in this collapsed baryon efficiency could indicate varying hot gas fractions in nascent groups, suggestive of a wide variety of group formation processes occurring at these scales. We point to this nascent group regime as a period of transition in group evolution, where merging and stripping remove galaxies from the population, contributing to the discrepancy in low-mass slope between observations and dark matter simulations.

  7. Late-life factors associated with healthy aging in older men.

    PubMed

    Bell, Christina L; Chen, Randi; Masaki, Kamal; Yee, Priscilla; He, Qimei; Grove, John; Donlon, Timothy; Curb, J David; Willcox, D Craig; Poon, Leonard W; Willcox, Bradley J

    2014-05-01

    To identify potentially modifiable late-life biological, lifestyle, and sociodemographic factors associated with overall and healthy survival to age 85. Prospective longitudinal cohort study with 21 years of follow-up (1991-2012). Hawaii Lifespan Study. American men of Japanese ancestry (mean age 75.7, range 71-82) without baseline major clinical morbidity and functional impairments (N = 1,292). Overall survival and healthy survival (free from six major chronic diseases and without physical or cognitive impairment) to age 85. Factors were measured at late-life baseline examinations (1991-1993). Of 1,292 participants, 1,000 (77%) survived to 85 (34% healthy) and 309 (24%) to 95 (<1% healthy). Late-life factors associated with survival and healthy survival included biological (body mass index, ankle-brachial index, cognitive score, blood pressure, inflammatory markers), lifestyle (smoking, alcohol use, physical activity), and sociodemographic factors (education, marital status). Cumulative late-life baseline risk factor models demonstrated that age-standardized (at 70) probability of survival to 95 ranged from 27% (no factors) to 7% (≥ 5 factors); probability of survival to 100 ranged from 4% (no factors) to 0.1% (≥ 5 factors). Age-standardized (at 70) probability of healthy survival to 90 ranged from 4% (no factors) to 0.01% (≥ 5 factors). There were nine healthy survivors at 95 and one healthy survivor at 100. Several potentially modifiable risk factors in men in late life (mean age 75.7) were associated with markedly greater probability of subsequent healthy survival and longevity. © 2014, Copyright the Authors Journal compilation © 2014, The American Geriatrics Society.

  8. Unusual positional effects on flower sex in an andromonoecious tree: Resource competition, architectural constraints, or inhibition by the apical flower?

    PubMed

    Granado-Yela, Carlos; Balaguer, Luis; Cayuela, Luis; Méndez, Marcos

    2017-04-01

    Two non-mutually exclusive mechanisms, competition for resources and architectural constraints, have been proposed to explain the proximal-to-distal decline in flower size, mass, and/or femaleness in indeterminate, elongate inflorescences. Whether these mechanisms also explain unusual positional effects, such as distal-to-proximal declines of floral performance in determinate inflorescences, is understudied. We tested the relative influence of these mechanisms in the andromonoecious wild olive tree, where hermaphroditic flowers occur mainly at apical and the most proximal positions in determinate inflorescences. We experimentally increased the availability of resources for the inflorescences by removing half of the inflorescences per twig, or reduced resource availability by removing leaves. We also removed the apical flower to test its inhibitory effect on subapical flowers. The apical flower had the highest probability of being hermaphroditic. Further down, however, the probability of finding a hermaphroditic flower decreased from the base to the tip of the inflorescences. An experimental increase of resources increased the probability of finding hermaphroditic flowers at each position, and vice versa. Removal of the apical flower increased the probability of producing hermaphroditic flowers in proximal positions but not in subapical positions. These results indicate an interaction between resource competition and architectural constraints in influencing the arrangement of the hermaphroditic and male flowers within the inflorescences of the wild olive tree. Subapical flowers did not seem to be hormonally suppressed by apical flowers. The study of these unusual positional effects is needed for a general understanding of the functional implications of inflorescence architecture. © 2017 Botanical Society of America.

  9. Delay, Probability, and Social Discounting in a Public Goods Game

    ERIC Educational Resources Information Center

    Jones, Bryan A.; Rachlin, Howard

    2009-01-01

    A human social discount function measures the value to a person of a reward to another person at a given social distance. Just as delay discounting is a hyperbolic function of delay, and probability discounting is a hyperbolic function of odds-against, social discounting is a hyperbolic function of social distance. Experiment 1 obtained individual…
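
    The hyperbolic form referred to above is commonly written v = V / (1 + k*N), where N is the social distance and k the discount rate. The sketch below evaluates this form with an illustrative k, not a value fitted to the experiments.

```python
# Sketch: hyperbolic discounting of a reward's value with social distance N,
# v = V / (1 + k * N). The discount rate k below is illustrative only.
def social_discounted_value(V, N, k=0.05):
    return V / (1.0 + k * N)

for N in (1, 10, 50, 100):
    print(f"social distance {N:3d}: value = {social_discounted_value(100.0, N):.1f}")
```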

  10. Electron number probability distributions for correlated wave functions.

    PubMed

    Francisco, E; Martín Pendás, A; Blanco, M A

    2007-03-07

    Efficient formulas for computing the probability of finding exactly an integer number of electrons in an arbitrarily chosen volume are only known for single-determinant wave functions [E. Cances et al., Theor. Chem. Acc. 111, 373 (2004)]. In this article, an algebraic method is presented that extends these formulas to the case of multideterminant wave functions and any number of disjoint volumes. The derived expressions are applied to compute the probabilities within the atomic domains derived from the space partitioning based on the quantum theory of atoms in molecules. Results for a series of test molecules are presented, paying particular attention to the effects of electron correlation and of some numerical approximations on the computed probabilities.

  11. A mechanism producing power law etc. distributions

    NASA Astrophysics Data System (ADS)

    Li, Heling; Shen, Hongjun; Yang, Bin

    2017-07-01

    Power-law distributions play an increasingly important role in the study of complex systems. Motivated by the intractability of complex systems, the idea of incomplete statistics is utilized and expanded: three different exponential factors are introduced into the equations for the normalization condition, the statistical average, and the Shannon entropy, and probability distribution functions of exponential form, power-law form, and the product form between a power function and an exponential function are derived from the Shannon entropy and the maximal entropy principle. It is thus shown that the maximum entropy principle can completely replace the equal-probability hypothesis. Because the power-law distribution and the distribution in the product form between a power function and an exponential function, which cannot be derived from the equal-probability hypothesis, can be derived with the aid of the maximal entropy principle, it can also be concluded that the maximal entropy principle is a more basic principle, one that embodies concepts more extensively and reveals the basic laws governing the motion of objects more fundamentally. At the same time, this principle also reveals the intrinsic link between Nature and different objects in human society and the principles they all comply with.

  12. Star Classification for the Kepler Input Catalog: From Images to Stellar Parameters

    NASA Astrophysics Data System (ADS)

    Brown, T. M.; Everett, M.; Latham, D. W.; Monet, D. G.

    2005-12-01

    The Stellar Classification Project is a ground-based effort to screen stars within the Kepler field of view, to allow removal of stars with large radii (and small potential transit signals) from the target list. Important components of this process are: (1) An automated photometry pipeline estimates observed magnitudes both for target stars and for stars in several calibration fields. (2) Data from calibration fields yield extinction-corrected AB magnitudes (with g, r, i, z magnitudes transformed to the SDSS system). We merge these with 2MASS J, H, K magnitudes. (3) The Basel grid of stellar atmosphere models yields synthetic colors, which are transformed to our photometric system by calibration against observations of stars in M67. (4) We combine the r magnitude and stellar galactic latitude with a simple model of interstellar extinction to derive a relation connecting {Teff, luminosity} to distance and reddening. For models satisfying this relation, we compute a chi-squared statistic describing the match between each model and the observed colors. (5) We create a merit function based on the chi-squared statistic, and on a Bayesian prior probability distribution which gives probability as a function of Teff, luminosity, log(Z), and height above the galactic plane. The stellar parameters ascribed to a star are those of the model that maximizes this merit function. (6) Parameter estimates are merged with positional and other information from extant catalogs to yield the Kepler Input Catalog, from which targets will be chosen. Testing and validation of this procedure are underway, with encouraging initial results.
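
    Steps (4)-(5) above amount to combining a chi-squared color match with a Bayesian prior and picking the model that maximizes the resulting merit function. The sketch below shows that selection step on a toy model grid; the synthetic colors, photometric errors, and prior are stand-ins for the Basel grid and the catalog's actual prior.

```python
# Sketch: choose the stellar model maximizing a merit function built from a
# chi-squared color match and a Bayesian prior, as in the classification
# pipeline described above. The model grid and prior are toy stand-ins.
import numpy as np

rng = np.random.default_rng(6)
n_models = 500
model_colors = rng.normal(0.0, 0.5, size=(n_models, 4))   # synthetic g-r, r-i, i-z, J-K
log_prior = rng.normal(-1.0, 0.3, size=n_models)           # stand-in for prior over (Teff, L, Z, height)

observed = np.array([0.3, 0.1, 0.05, 0.4])                 # assumed observed colors
sigma = 0.05                                                # assumed photometric error

chi2 = np.sum(((model_colors - observed) / sigma) ** 2, axis=1)
merit = -0.5 * chi2 + log_prior                             # log-posterior-like merit function
best = int(np.argmax(merit))
print(f"best model index: {best}, chi2 = {chi2[best]:.1f}, merit = {merit[best]:.1f}")
```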

  13. A Submillimeter Continuum Survey of Local Dust-obscured Galaxies

    NASA Astrophysics Data System (ADS)

    Lee, Jong Chul; Hwang, Ho Seong; Lee, Gwang-Ho

    2016-12-01

    We conduct a 350 μm dust continuum emission survey of 17 dust-obscured galaxies (DOGs) at z = 0.05-0.08 with the Caltech Submillimeter Observatory (CSO). We detect 14 DOGs with S_350μm = 114-650 mJy and signal-to-noise > 3. By including two additional DOGs with submillimeter data in the literature, we are able to study the dust content of a sample of 16 local DOGs, which consists of 12 bump and four power-law types. We determine their physical parameters with a two-component modified blackbody function model. The derived dust temperatures are in the range 57-122 K and 22-35 K for the warm and cold dust components, respectively. The total dust mass and the mass fraction of the warm dust component are 3-34 × 10^7 M_⊙ and 0.03%-2.52%, respectively. We compare these results with those of other submillimeter-detected infrared luminous galaxies. The bump DOGs, the majority of the DOG sample, show similar distributions of dust temperatures and total dust mass to the comparison sample. The power-law DOGs show a hint of smaller dust masses than other samples, but this needs to be tested with a larger sample. These findings support the interpretation that the heavy dust obscuration of DOGs is due not to the overall amount of dust, but probably to the spatial distribution of dust therein.
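
    The fitting function named above, a two-component modified blackbody S_nu ∝ nu^beta * B_nu(T), can be written out directly. The sketch below evaluates it at a few wavelengths; the amplitudes, temperatures, and emissivity index beta are illustrative, not the values derived for the DOG sample.

```python
# Sketch: a two-component modified blackbody, S_nu ∝ nu^beta * B_nu(T),
# the functional form fitted in the abstract. Amplitudes, temperatures,
# and beta below are illustrative, not the derived values.
import numpy as np

H = 6.626e-34   # Planck constant [J s]
C = 2.998e8     # speed of light [m/s]
KB = 1.381e-23  # Boltzmann constant [J/K]

def planck_nu(nu, T):
    """Planck function B_nu(T) in SI units."""
    return 2 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * T))

def two_component_mbb(nu, a_warm, t_warm, a_cold, t_cold, beta=2.0):
    """Relative flux density of warm + cold modified blackbodies."""
    return nu**beta * (a_warm * planck_nu(nu, t_warm) + a_cold * planck_nu(nu, t_cold))

wavelengths_um = np.array([60, 100, 160, 350, 850])
nu = C / (wavelengths_um * 1e-6)
flux = two_component_mbb(nu, a_warm=0.01, t_warm=60.0, a_cold=1.0, t_cold=25.0)
for lam, f in zip(wavelengths_um, flux / flux.max()):
    print(f"{lam:4.0f} um : relative S_nu = {f:.3f}")
```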

  14. Mass and Momentum Turbulent Transport Experiments with Confined Coaxial Jets

    NASA Technical Reports Server (NTRS)

    Johnson, B. V.; Bennett, J. C.

    1981-01-01

    Downstream mixing of coaxial jets discharging into an expanded duct was studied to obtain data for the evaluation and improvement of turbulent transport models currently used in a variety of computational procedures throughout the propulsion community for combustor flow modeling. Flow visualization studies showed four major shear regions: a wake region immediately downstream of the inner jet inlet duct; a shear region further downstream between the inner and annular jets; a recirculation zone; and a reattachment zone. A combination of turbulent momentum transport rate and two-velocity-component data were obtained from simultaneous measurements with a two-color laser velocimeter (LV) system. Axial, radial, and azimuthal velocities and turbulent momentum transport rate measurements in the r-z and r-theta planes were used to determine the mean value, second central moment (or rms fluctuation from the mean), skewness, and kurtosis for each data set probability density function (p.d.f.). A combination of turbulent mass transport rate, concentration, and velocity data were obtained with a combined LV and laser-induced fluorescence (LIF) system. Velocity and mass transport in all three directions as well as concentration distributions were used to obtain the mean, second central moments, skewness, and kurtosis for each p.d.f. These LV/LIF measurements also revealed the existence of a large region of countergradient turbulent axial mass transport in the region where the annular jet fluid was accelerating the inner jet fluid.
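
    The four summary statistics quoted for each p.d.f. above (mean, second central moment, skewness, and kurtosis) can be computed from samples as sketched below; the velocity samples are synthetic stand-ins for the LV measurements.

```python
# Sketch: mean, second central moment (variance), skewness, and kurtosis of a
# sampled probability density function, the statistics reported for each LV
# data set above. The velocity samples are synthetic stand-ins.
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(9)
axial_velocity = rng.normal(12.0, 2.5, size=10_000)   # synthetic axial velocities (m/s)

print("mean               :", np.mean(axial_velocity))
print("second central mom.:", np.var(axial_velocity))
print("skewness           :", skew(axial_velocity))
print("kurtosis (excess)  :", kurtosis(axial_velocity))
```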

  15. Search for the Standard Model Higgs Boson associated with a W Boson using Matrix Element Technique in the CDF detector at the Tevatron

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonzalez, Barbara Alvarez

    In this thesis a direct search for Standard Model Higgs boson production in association with a W boson at the CDF detector at the Tevatron is presented. This search contributes predominantly in the low-mass Higgs region, where the mass of the Higgs boson is less than about 135 GeV. The search is performed in a final state where the Higgs boson decays into two b quarks, and the W boson decays leptonically, to a charged lepton (an electron or a muon) and a neutrino. This work is organized as follows. Chapter 2 gives an overview of the Standard Model theory of particle physics and presents the SM Higgs boson search results at LEP and the Tevatron colliders, as well as the prospects for SM Higgs boson searches at the LHC. The dataset used in this analysis corresponds to 4.8 fb^-1 of integrated luminosity of p-pbar collisions at a center-of-mass energy of 1.96 TeV; that is the luminosity acquired between the beginning of the CDF Run II experiment, in February 2002, and May 2009. The relevant aspects, for this analysis, of the Tevatron accelerator and the CDF detector are described in Chapter 3. In Chapter 4 the particles and observables that make up the WH final state (electrons, muons, missing transverse energy, and jets) are presented. The CDF standard b-tagging algorithms used to identify b jets, and the neural network flavor separator used to distinguish them from other-flavor jets, are also described in Chapter 4. The main background contributions are those coming from heavy flavor production processes, such as Wbb, Wcc or Wc, and tt. The signal and background signatures are discussed in Chapter 5, together with the Monte Carlo generators that have been used to simulate almost all the events used in this thesis. WH candidate events have a high-pT lepton (electron or muon), high missing transverse energy, and two or more jets in the final state. Chapter 6 describes the event selection applied in this analysis and the method used to estimate the background contribution. The Matrix Element method, which was successfully used in the single top discovery analysis and many other analyses within the CDF collaboration, is the multivariate technique used in this thesis to discriminate signal from background events. With this technique it is possible to calculate a probability for an event to be classified as signal or background. These probabilities are then combined into a discriminant function called the Event Probability Discriminant (EPD), which increases the sensitivity to the WH process. This method is described in detail in Chapter 7. As no evidence for the signal has been found, the results obtained in this work are presented in Chapter 8 in terms of exclusion regions as a function of the mass of the Higgs boson, taking into account the full systematics. The conclusions of this work are presented in Chapter 9.
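
    The thesis states only that the per-event signal and background probabilities are combined into the Event Probability Discriminant. A common form of such a combination (not necessarily the exact definition used in the thesis, which may include b-tagging weights) is EPD = P_signal / (P_signal + P_background), sketched below with illustrative numbers.

```python
# Sketch: combining per-event matrix-element probabilities into a single
# discriminant. EPD = P_signal / (P_signal + P_background) is a common form;
# the thesis's exact definition may differ (e.g. b-tag weighting).
def event_probability_discriminant(p_signal, p_background):
    return p_signal / (p_signal + p_background)

# Toy event probabilities (illustrative numbers only):
print(event_probability_discriminant(p_signal=3e-9, p_background=1e-9))   # signal-like, ~0.75
print(event_probability_discriminant(p_signal=1e-10, p_background=5e-9))  # background-like, ~0.02
```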

  16. A new, high-resolution global mass coral bleaching database

    PubMed Central

    Rickbeil, Gregory J. M.; Heron, Scott F.

    2017-01-01

    Episodes of mass coral bleaching have been reported in recent decades and have raised concerns about the future of coral reefs on a warming planet. Despite the efforts to enhance and coordinate coral reef monitoring within and across countries, our knowledge of the geographic extent of mass coral bleaching over the past few decades is incomplete. Existing databases, like ReefBase, are limited by the voluntary nature of contributions, geographical biases in data collection, and the variations in the spatial scale of bleaching reports. In this study, we have developed the first-ever gridded, global-scale historical coral bleaching database. First, we conducted a targeted search for bleaching reports not included in ReefBase by personally contacting scientists and divers conducting monitoring in under-reported locations and by extracting data from the literature. This search increased the number of observed bleaching reports by 79%, from 4146 to 7429. Second, we employed spatial interpolation techniques to develop annual 0.04° × 0.04° latitude-longitude global maps of the probability that bleaching occurred for 1985 through 2010. Initial results indicate that the area of coral reefs with a more likely than not (>50%) or likely (>66%) probability of bleaching was eight times higher in the second half of the assessed time period, after the 1997/1998 El Niño. The results also indicate that annual maximum Degree Heating Weeks, a measure of thermal stress, for coral reefs with a high probability of bleaching increased over time. The database will help the scientific community more accurately assess the change in the frequency of mass coral bleaching events, validate methods of predicting mass coral bleaching, and test whether coral reefs are adjusting to rising ocean temperatures. PMID:28445534

  17. A temperate rocky super-Earth transiting a nearby cool star

    NASA Astrophysics Data System (ADS)

    Dittmann, Jason A.; Irwin, Jonathan M.; Charbonneau, David; Bonfils, Xavier; Astudillo-Defru, Nicola; Haywood, Raphaëlle D.; Berta-Thompson, Zachory K.; Newton, Elisabeth R.; Rodriguez, Joseph E.; Winters, Jennifer G.; Tan, Thiam-Guan; Almenara, Jose-Manuel; Bouchy, François; Delfosse, Xavier; Forveille, Thierry; Lovis, Christophe; Murgas, Felipe; Pepe, Francesco; Santos, Nuno C.; Udry, Stephane; Wünsche, Anaël; Esquerdo, Gilbert A.; Latham, David W.; Dressing, Courtney D.

    2017-04-01

    M dwarf stars, which have masses less than 60 per cent that of the Sun, make up 75 per cent of the population of the stars in the Galaxy. The atmospheres of orbiting Earth-sized planets are observationally accessible via transmission spectroscopy when the planets pass in front of these stars. Statistical results suggest that the nearest transiting Earth-sized planet in the liquid-water, habitable zone of an M dwarf star is probably around 10.5 parsecs away. A temperate planet has been discovered orbiting Proxima Centauri, the closest M dwarf, but it probably does not transit and its true mass is unknown. Seven Earth-sized planets transit the very low-mass star TRAPPIST-1, which is 12 parsecs away, but their masses and, particularly, their densities are poorly constrained. Here we report observations of LHS 1140b, a planet with a radius of 1.4 Earth radii transiting a small, cool star (LHS 1140) 12 parsecs away. We measure the mass of the planet to be 6.6 times that of Earth, consistent with a rocky bulk composition. LHS 1140b receives an insolation of 0.46 times that of Earth, placing it within the liquid-water, habitable zone. With 90 per cent confidence, we place an upper limit on the orbital eccentricity of 0.29. The circular orbit is unlikely to be the result of tides and therefore was probably present at formation. Given its large surface gravity and cool insolation, the planet may have retained its atmosphere despite the greater luminosity (compared to the present-day) of its host star in its youth. Because LHS 1140 is nearby, telescopes currently under construction might be able to search for specific atmospheric gases in the future.
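
    The consistency of the quoted mass and radius with a rocky composition can be checked with a quick bulk-density estimate using the rounded values in the abstract (6.6 Earth masses, 1.4 Earth radii); the Earth mean density used below (5.51 g cm⁻³) is standard, and the rounding makes the result approximate.

    ```python
    # Quick check: bulk density of LHS 1140b from the rounded values quoted above,
    # expressed via Earth's mean density (5.51 g/cm^3).
    mass_earths = 6.6
    radius_earths = 1.4
    rho_earth = 5.51  # g/cm^3

    rho = mass_earths / radius_earths**3 * rho_earth
    print(f"bulk density ~ {rho:.1f} g/cm^3")  # ~13 g/cm^3, denser than Earth,
                                               # consistent with a rocky composition
    ```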

  18. A new, high-resolution global mass coral bleaching database.

    PubMed

    Donner, Simon D; Rickbeil, Gregory J M; Heron, Scott F

    2017-01-01

    Episodes of mass coral bleaching have been reported in recent decades and have raised concerns about the future of coral reefs on a warming planet. Despite the efforts to enhance and coordinate coral reef monitoring within and across countries, our knowledge of the geographic extent of mass coral bleaching over the past few decades is incomplete. Existing databases, like ReefBase, are limited by the voluntary nature of contributions, geographical biases in data collection, and the variations in the spatial scale of bleaching reports. In this study, we have developed the first-ever gridded, global-scale historical coral bleaching database. First, we conducted a targeted search for bleaching reports not included in ReefBase by personally contacting scientists and divers conducting monitoring in under-reported locations and by extracting data from the literature. This search increased the number of observed bleaching reports by 79%, from 4146 to 7429. Second, we employed spatial interpolation techniques to develop annual 0.04° × 0.04° latitude-longitude global maps of the probability that bleaching occurred for 1985 through 2010. Initial results indicate that the area of coral reefs with a more likely than not (>50%) or likely (>66%) probability of bleaching was eight times higher in the second half of the assessed time period, after the 1997/1998 El Niño. The results also indicate that annual maximum Degree Heating Weeks, a measure of thermal stress, for coral reefs with a high probability of bleaching increased over time. The database will help the scientific community more accurately assess the change in the frequency of mass coral bleaching events, validate methods of predicting mass coral bleaching, and test whether coral reefs are adjusting to rising ocean temperatures.

  19. MIGRATION AND GROWTH OF PROTOPLANETARY EMBRYOS. II. EMERGENCE OF PROTO-GAS-GIANT CORES VERSUS SUPER EARTH PROGENITORS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Beibei; Zhang, Xiaojia; Lin, Douglas N. C.

    2015-01-01

    Nearly 15%-20% of solar type stars contain one or more gas giant planets. According to the core-accretion scenario, the acquisition of their gaseous envelope must be preceded by the formation of super-critical cores with masses 10 times or larger than that of the Earth. It is natural to link the formation probability of gas giant planets with the supply of gases and solids in their natal disks. However, a much richer population of super Earths suggests that (1) there is no shortage of planetary building block material, (2) a gas giant's growth barrier is probably associated with whether it can merge into super-critical cores, and (3) super Earths are probably failed cores that did not attain sufficient mass to initiate efficient accretion of gas before it is severely depleted. Here we construct a model based on the hypothesis that protoplanetary embryos migrated extensively before they were assembled into bona fide planets. We construct a Hermite-Embryo code based on a unified viscous-irradiation disk model and a prescription for the embryo-disk tidal interaction. This code is used to simulate the convergent migration of embryos, and their close encounters and coagulation. Around the progenitors of solar-type stars, the progenitor super-critical-mass cores of gas giant planets primarily form in protostellar disks with relatively high (≳ 10⁻⁷ M⊙ yr⁻¹) mass accretion rates, whereas systems of super Earths (failed cores) are more likely to emerge out of natal disks with modest mass accretion rates, due to the mean motion resonance barrier and retention efficiency.

  20. A temperate rocky super-Earth transiting a nearby cool star.

    PubMed

    Dittmann, Jason A; Irwin, Jonathan M; Charbonneau, David; Bonfils, Xavier; Astudillo-Defru, Nicola; Haywood, Raphaëlle D; Berta-Thompson, Zachory K; Newton, Elisabeth R; Rodriguez, Joseph E; Winters, Jennifer G; Tan, Thiam-Guan; Almenara, Jose-Manuel; Bouchy, François; Delfosse, Xavier; Forveille, Thierry; Lovis, Christophe; Murgas, Felipe; Pepe, Francesco; Santos, Nuno C; Udry, Stephane; Wünsche, Anaël; Esquerdo, Gilbert A; Latham, David W; Dressing, Courtney D

    2017-04-19

    M dwarf stars, which have masses less than 60 per cent that of the Sun, make up 75 per cent of the population of the stars in the Galaxy. The atmospheres of orbiting Earth-sized planets are observationally accessible via transmission spectroscopy when the planets pass in front of these stars. Statistical results suggest that the nearest transiting Earth-sized planet in the liquid-water, habitable zone of an M dwarf star is probably around 10.5 parsecs away. A temperate planet has been discovered orbiting Proxima Centauri, the closest M dwarf, but it probably does not transit and its true mass is unknown. Seven Earth-sized planets transit the very low-mass star TRAPPIST-1, which is 12 parsecs away, but their masses and, particularly, their densities are poorly constrained. Here we report observations of LHS 1140b, a planet with a radius of 1.4 Earth radii transiting a small, cool star (LHS 1140) 12 parsecs away. We measure the mass of the planet to be 6.6 times that of Earth, consistent with a rocky bulk composition. LHS 1140b receives an insolation of 0.46 times that of Earth, placing it within the liquid-water, habitable zone. With 90 per cent confidence, we place an upper limit on the orbital eccentricity of 0.29. The circular orbit is unlikely to be the result of tides and therefore was probably present at formation. Given its large surface gravity and cool insolation, the planet may have retained its atmosphere despite the greater luminosity (compared to the present-day) of its host star in its youth. Because LHS 1140 is nearby, telescopes currently under construction might be able to search for specific atmospheric gases in the future.

  1. Potential postwildfire debris-flow hazards - A prewildfire evaluation for the Jemez Mountains, north-central New Mexico

    Treesearch

    Anne C. Tillery; Jessica Haas

    2016-01-01

    Wildfire can substantially increase the probability of debris flows, a potentially hazardous and destructive form of mass wasting, in landscapes that have otherwise been stable throughout recent history. Although the exact location, extent, and severity of wildfire or subsequent rainfall intensity and duration cannot be known, probabilities of fire and debris‑flow...

  2. Estimating the population size and colony boundary of subterranean termites by using the density functions of directionally averaged capture probability.

    PubMed

    Su, Nan-Yao; Lee, Sang-Hee

    2008-04-01

    Marked termites were released in a linear-connected foraging arena, and the spatial heterogeneity of their capture probabilities was averaged for both directions at distance r from release point to obtain a symmetrical distribution, from which the density function of directionally averaged capture probability P(x) was derived. We hypothesized that as marked termites move into the population and given sufficient time, the directionally averaged capture probability may reach an equilibrium P(e) over the distance r and thus satisfy the equal mixing assumption of the mark-recapture protocol. The equilibrium capture probability P(e) was used to estimate the population size N. The hypothesis was tested in a 50-m extended foraging arena to simulate the distance factor of field colonies of subterranean termites. Over the 42-d test period, the density functions of directionally averaged capture probability P(x) exhibited four phases: exponential decline phase, linear decline phase, equilibrium phase, and postequilibrium phase. The equilibrium capture probability P(e), derived as the intercept of the linear regression during the equilibrium phase, correctly projected N estimates that were not significantly different from the known number of workers in the arena. Because the area beneath the probability density function is a constant (50% in this study), preequilibrium regression parameters and P(e) were used to estimate the population boundary distance l, which is the distance between the release point and the boundary beyond which the population is absent.
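
    The step of reading off the equilibrium capture probability P(e) as the intercept of a linear regression over the equilibrium-phase data can be sketched as follows. The distances and capture probabilities are hypothetical, and the subsequent conversion of P(e) into a population estimate N follows the mark-recapture calculation of the study, which is not reproduced here.

    ```python
    # Sketch: estimate the equilibrium capture probability P(e) as the intercept
    # of a linear regression of P(x) on distance over the equilibrium phase.
    # Distances (m) and capture probabilities are hypothetical illustration values.
    import numpy as np

    x_equilibrium = np.array([20.0, 25.0, 30.0, 35.0, 40.0])       # m
    p_equilibrium = np.array([0.021, 0.020, 0.019, 0.020, 0.018])  # P(x)

    slope, intercept = np.polyfit(x_equilibrium, p_equilibrium, 1)
    p_e = intercept
    print(f"P(e) ~ {p_e:.4f} (slope {slope:.2e}, expected to be near zero)")
    ```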

  3. Seasonal variations and source apportionment of atmospheric PM2.5-bound polycyclic aromatic hydrocarbons in a mixed multi-function area of Hangzhou, China.

    PubMed

    Lu, Hao; Wang, Shengsheng; Li, Yun; Gong, Hui; Han, Jingyi; Wu, Zuliang; Yao, Shuiliang; Zhang, Xuming; Tang, Xiujuan; Jiang, Boqiong

    2017-07-01

    To reveal the seasonal variations and sources of PM2.5-bound polycyclic aromatic hydrocarbons (PAHs) during haze and non-haze episodes, daily PM2.5 samples were collected from March 2015 to February 2016 in a mixed multi-function area in Hangzhou, China. Ambient concentrations of 16 priority-controlled PAHs were determined. The sums of PM2.5-bound PAH concentrations during the haze episodes were 4.52 ± 3.32 and 13.6 ± 6.29 ng m⁻³ in warm and cold seasons, respectively, which were 1.99 and 1.49 times those during the non-haze episodes. Four PAH sources were identified using the positive matrix factorization model and conditional probability function, which were vehicular emissions (45%), heavy oil combustion (23%), coal and natural gas combustion (22%), and biomass combustion (10%). The four source concentrations of PAHs consistently showed higher levels in the cold season, compared with those in the warm season. Vehicular emissions were the most considerable sources that result in the increase of PM2.5-bound PAH levels during the haze episodes, and heavy oil combustion played an important role in the aggravation of haze pollution. The analysis of air mass back trajectories indicated that air mass transport had an influence on the PM2.5-bound PAH pollution, especially on the increased contributions from coal combustion and vehicular emissions in the cold season.
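
    The conditional probability function (CPF) used in such source-apportionment studies is commonly defined as the fraction of samples in a given wind-direction sector whose source contribution exceeds a threshold (often an upper percentile). The sketch below uses that common definition; the sector width, percentile, and data are assumptions, not values from this study.

    ```python
    # Sketch of a conditional probability function (CPF): for each wind sector,
    # CPF = m / n, where n counts samples with wind from that sector and m counts
    # those whose source contribution exceeds a threshold (here the 75th
    # percentile). Data are hypothetical.
    import numpy as np

    def cpf(wind_dir_deg, contribution, sector_width=45.0, percentile=75.0):
        threshold = np.percentile(contribution, percentile)
        result = {}
        for s in np.arange(0.0, 360.0, sector_width):
            in_sector = (wind_dir_deg >= s) & (wind_dir_deg < s + sector_width)
            n = in_sector.sum()
            m = (in_sector & (contribution > threshold)).sum()
            result[s] = m / n if n > 0 else np.nan
        return result

    rng = np.random.default_rng(0)
    wind = rng.uniform(0.0, 360.0, 200)        # wind direction of each sample
    contrib = rng.lognormal(0.0, 1.0, 200)     # PMF-resolved source contribution
    print(cpf(wind, contrib))
    ```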

  4. The Young L Dwarf 2MASS J11193254-1137466 Is a Planetary-mass Binary

    NASA Astrophysics Data System (ADS)

    Best, William M. J.; Liu, Michael C.; Dupuy, Trent J.; Magnier, Eugene A.

    2017-07-01

    We have discovered that the extremely red, low-gravity L7 dwarf 2MASS J11193254-1137466 is a 0.″14 (3.6 au) binary using Keck laser guide star adaptive optics imaging. 2MASS J11193254-1137466 has previously been identified as a likely member of the TW Hydrae Association (TWA). Using our updated photometric distance and proper motion, a kinematic analysis based on the BANYAN II model gives an 82% probability of TWA membership. At TWA’s 10 ± 3 Myr age and using hot-start evolutionary models, 2MASS J11193254-1137466AB is a pair of 3.7^{+1.2}_{-0.9} M_Jup brown dwarfs, making it the lowest-mass binary discovered to date. We estimate an orbital period of 90^{+80}_{-50} years. One component is marginally brighter in K band but fainter in J band, making this a probable flux-reversal binary, the first discovered with such a young age. We also imaged the spectrally similar TWA L7 dwarf WISEA J114724.10-204021.3 with Keck and found no sign of binarity. Our evolutionary model-derived T_eff estimate for WISEA J114724.10-204021.3 is ≈230 K higher than for 2MASS J11193254-1137466AB, at odds with the spectral similarity of the two objects. This discrepancy suggests that WISEA J114724.10-204021.3 may actually be a tight binary with masses and temperatures very similar to 2MASS J11193254-1137466AB, or it may further support the idea that near-infrared spectra of young ultracool dwarfs are shaped by factors other than temperature and gravity. 2MASS J11193254-1137466AB will be an essential benchmark for testing evolutionary and atmospheric models in the young planetary-mass regime.

  5. Know the risk, take the win: how executive functions and probability processing influence advantageous decision making under risk conditions.

    PubMed

    Brand, Matthias; Schiebener, Johannes; Pertl, Marie-Theres; Delazer, Margarete

    2014-01-01

    Recent models on decision making under risk conditions have suggested that numerical abilities are important ingredients of advantageous decision-making performance, but empirical evidence is still limited. The results of our first study show that logical reasoning and basic mental calculation capacities predict ratio processing and that ratio processing predicts decision making under risk. In the second study, logical reasoning together with executive functions predicted probability processing (numeracy and probability knowledge), and probability processing predicted decision making under risk. These findings suggest that increasing an individual's understanding of ratios and probabilities should lead to more advantageous decisions under risk conditions.

  6. Low Mass Members in Nearby Young Moving Groups Revealed

    NASA Astrophysics Data System (ADS)

    Schlieder, Joshua; Simon, Michal; Rice, Emily; Lepine, Sebastien

    2010-08-01

    We are now ready to expand our program that identifies highly probable low-mass members of the nearby young moving groups (NYMGs) to stars of mass ~ 0.1 Msun. This is important 1) To provide high priority targets for exoplanet searches by direct imaging, 2) To complete the census of the membership in the NYMGs, and 3) To provide a well-characterized sample of nearby young stars for detailed study of their physical properties and multiplicity (the median distances of the β Pic and AB Dor groups are ~ 35 pc with ages ~ 12 and 50 Myr respectively). Our proven technique starts with a proper motion selection algorithm, proceeds to vet the sample for indicators of youth, and requires as its last step the measurement of candidate member radial velocities (RVs). So far, we have obtained all RV measurements with the high resolution IR spectrometer at the NASA-IRTF and have reached the limits of its applicability. To identify probable new members in the south, and also those of the lowest mass, we need the sensitivity of PHOENIX at Gemini-S and NIRSPEC at Keck-II.

  7. Israeli adolescents with ongoing exposure to terrorism: suicidal ideation, posttraumatic stress disorder, and functional impairment.

    PubMed

    Chemtob, Claude M; Pat-Horenczyk, Ruth; Madan, Anita; Pitman, Seth R; Wang, Yanping; Doppelt, Osnat; Burns, Kelly Dugan; Abramovitz, Robert; Brom, Daniel

    2011-12-01

    In this study, we examined the relationships among terrorism exposure, functional impairment, suicidal ideation, and probable partial or full posttraumatic stress disorder (PTSD) from exposure to terrorism in adolescents continuously exposed to this threat in Israel. A convenience sample of 2,094 students, aged 12 to 18, was drawn from 10 Israeli secondary schools. In terms of demographic factors, older age was associated with increased risk for suicidal ideation, OR = 1.33, 95% CI [1.09, 1.62], p < .01, but was protective against probable partial or full PTSD, OR = 0.72, 95% CI [0.54, 0.95], p < .05; female gender was associated with greater likelihood of probable partial or full PTSD, OR = 1.57, 95% CI [1.02, 2.40], p < .05. Exposure to trauma due to terrorism was associated with increased risk for each of the measured outcomes including probable partial or full PTSD, functional impairment, and suicidal ideation. When age, gender, level of exposure to terrorism, probable partial or full PTSD, and functional impairment were examined together, only terrorism exposure and functional impairment were associated with suicidal ideation. This study underscores the importance and feasibility of examining exposure to terrorism and functional impairment as risk factors for suicidal ideation. Copyright © 2011 International Society for Traumatic Stress Studies.

  8. Tracking the Sensory Environment: An ERP Study of Probability and Context Updating in ASD

    PubMed Central

    Westerfield, Marissa A.; Zinni, Marla; Vo, Khang; Townsend, Jeanne

    2014-01-01

    We recorded visual event-related brain potentials (ERPs) from 32 adult male participants (16 high-functioning participants diagnosed with Autism Spectrum Disorder (ASD) and 16 control participants, ranging in age from 18–53 yrs) during a three-stimulus oddball paradigm. Target and non-target stimulus probability was varied across three probability conditions, whereas the probability of a third non-target stimulus was held constant in all conditions. P3 amplitude to target stimuli was more sensitive to probability in ASD than in TD participants, whereas P3 amplitude to non-target stimuli was less responsive to probability in ASD participants. This suggests that neural responses to changes in event probability are attention-dependent in high-functioning ASD. The implications of these findings for higher-level behaviors such as prediction and planning are discussed. PMID:24488156

  9. ADIABATIC MASS LOSS IN BINARY STARS. II. FROM ZERO-AGE MAIN SEQUENCE TO THE BASE OF THE GIANT BRANCH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ge, Hongwei; Chen, Xuefei; Han, Zhanwen

    2015-10-10

    In the limit of extremely rapid mass transfer, the response of a donor star in an interacting binary becomes asymptotically one of adiabatic expansion. We survey here adiabatic mass loss from Population I stars (Z = 0.02) of mass 0.10 M⊙–100 M⊙ from the zero-age main sequence to the base of the giant branch, or to central hydrogen exhaustion for lower main sequence stars. The logarithmic derivatives of radius with respect to mass along adiabatic mass-loss sequences translate into critical mass ratios for runaway (dynamical timescale) mass transfer, evaluated here under the assumption of conservative mass transfer. For intermediate- and high-mass stars, dynamical mass transfer is preceded by an extended phase of thermal timescale mass transfer as the star is stripped of most of its envelope mass. The critical mass ratio q_ad (throughout this paper, we follow the convention of defining the binary mass ratio as q ≡ M_donor/M_accretor) above which this delayed dynamical instability occurs increases with advancing evolutionary age of the donor star, by ever-increasing factors for more massive donors. Most intermediate- or high-mass binaries with nondegenerate accretors probably evolve into contact before manifesting this instability. As they approach the base of the giant branch, however, and begin developing a convective envelope, q_ad plummets dramatically among intermediate-mass stars, to values of order unity, and a prompt dynamical instability occurs. Among low-mass stars, the prompt instability prevails throughout main sequence evolution, with q_ad declining with decreasing mass, and asymptotically approaching q_ad = 2/3, appropriate to a classical isentropic n = 3/2 polytrope. Our calculated q_ad values agree well with the behavior of time-dependent models by Chen and Han of intermediate-mass stars initiating mass transfer in the Hertzsprung gap. Application of our results to cataclysmic variables, as systems that must be stable against rapid mass transfer, nicely circumscribes the range in q_ad as a function of the orbital period in which they are found. These results are intended to advance the verisimilitude of population synthesis models of close binary evolution.

  10. Tandem mass spectrometry of human tryptic blood peptides calculated by a statistical algorithm and captured by a relational database with exploration by a general statistical analysis system.

    PubMed

    Bowden, Peter; Beavis, Ron; Marshall, John

    2009-11-02

    A goodness of fit test may be used to assign tandem mass spectra of peptides to amino acid sequences and to directly calculate the expected probability of mis-identification. The product of the peptide expectation values directly yields the probability that the parent protein has been mis-identified. A relational database could capture the mass spectral data, the best fit results, and permit subsequent calculations by a general statistical analysis system. The many files of the Hupo blood protein data correlated by X!TANDEM against the proteins of ENSEMBL were collected into a relational database. A redundant set of 247,077 proteins and peptides were correlated by X!TANDEM, and that was collapsed to a set of 34,956 peptides from 13,379 distinct proteins. About 6875 distinct proteins were only represented by a single distinct peptide, 2866 proteins showed 2 distinct peptides, and 3454 proteins showed at least three distinct peptides by X!TANDEM. More than 99% of the peptides were associated with proteins that had cumulative expectation values, i.e. probability of false positive identification, of one in one hundred or less. The distribution of peptides per protein from X!TANDEM was significantly different than those expected from random assignment of peptides.
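
    The statement that the product of the peptide expectation values yields the probability that the parent protein has been mis-identified translates into a very small calculation; in practice it is safer to sum logarithms, as in the sketch below. The peptide expectation values are made-up illustration numbers.

    ```python
    # Sketch: combine per-peptide expectation values (probabilities of false
    # positive identification) into a protein-level mis-identification
    # probability by taking their product, computed in log space for stability.
    import math

    peptide_expectations = [1e-3, 5e-4, 2e-2]   # hypothetical X!TANDEM e-values

    log_protein_e = sum(math.log10(e) for e in peptide_expectations)
    protein_e = 10 ** log_protein_e
    print(f"protein expectation ~ 1e{log_protein_e:.1f} ({protein_e:.2e})")
    ```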

  11. Nematode Damage Functions: The Problems of Experimental and Sampling Error

    PubMed Central

    Ferris, H.

    1984-01-01

    The development and use of pest damage functions involves measurement and experimental errors associated with cultural, environmental, and distributional factors. Damage predictions are more valuable if considered with associated probability. Collapsing population densities into a geometric series of population classes allows a pseudo-replication removal of experimental and sampling error in damage function development. Recognition of the nature of sampling error for aggregated populations allows assessment of probability associated with the population estimate. The product of the probabilities incorporated in the damage function and in the population estimate provides a basis for risk analysis of the yield loss prediction and the ensuing management decision. PMID:19295865

  12. The Orbit of X Persei and Its Neutron Star Companion

    NASA Astrophysics Data System (ADS)

    Delgado-Martí, Hugo; Levine, Alan M.; Pfahl, Eric; Rappaport, Saul A.

    2001-01-01

    We have observed the Be/X-ray pulsar binary system X Per/4U 0352+30 on 61 occasions spanning an interval of 600 days with the PCA instrument on board the Rossi X-Ray Timing Explorer (RXTE). Pulse timing analyses of the 837 s pulsations yield strong evidence for the presence of orbital Doppler delays. We confirm the Doppler delays by using measurements made with the All Sky Monitor (ASM) on RXTE. We infer that the orbit is characterized by a period P_orb = 250 days, a projected semimajor axis of the neutron star a_x sin i = 454 lt-s, a mass function f(M) = 1.61 Msolar, and a modest eccentricity e = 0.11. The measured orbital parameters, together with the known properties of the classical Be star X Per, imply a semimajor axis a = 1.8-2.2 AU and an orbital inclination i ~ 26°-33°. We discuss the formation of the system in the context of the standard evolutionary scenario for Be/X-ray binaries. We find that the system most likely formed from a pair of massive progenitor stars and probably involved a quasi-stable and nearly conservative transfer of mass from the primary to the secondary. We find that the He star remnant of the primary most likely had a mass ≲ 6 Msolar after mass transfer. If the supernova explosion was completely symmetric, then the present orbital eccentricity indicates that ≲ 4 Msolar was ejected from the binary. If, on the other hand, the neutron star received at birth a "kick" of the type often inferred from the velocity distribution of isolated radio pulsars, then the resultant orbital eccentricity would likely have been substantially larger than 0.11. We have carried out a Monte Carlo study of the effects of such natal kicks and find that there is less than a 1% probability of a system like that of X Per forming with an orbital eccentricity e ≲ 0.11. We speculate that there may be a substantial population of neutron stars formed with little or no kick. Finally, we discuss the connected topics of the wide orbit and accretion by the neutron star from a stellar wind.
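
    The quoted mass function follows directly from the orbital period and the projected semimajor axis via f(M) = 4π²(a_x sin i)³ / (G P²); the quick numerical check below reproduces the 1.61 Msolar value from the numbers given in the abstract.

    ```python
    # Check of the binary mass function quoted above:
    #   f(M) = 4*pi^2 * (a_x sin i)^3 / (G * P_orb^2)
    import math

    G = 6.674e-11            # m^3 kg^-1 s^-2
    c = 2.998e8              # m/s
    M_sun = 1.989e30         # kg

    a_sin_i = 454.0 * c      # projected semimajor axis: 454 light-seconds -> m
    P_orb = 250.0 * 86400.0  # orbital period: 250 days -> s

    f_M = 4.0 * math.pi**2 * a_sin_i**3 / (G * P_orb**2)
    print(f"f(M) = {f_M / M_sun:.2f} Msolar")   # ~1.61 Msolar, as quoted
    ```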

  13. MO-FG-CAMPUS-TeP2-04: Optimizing for a Specified Target Coverage Probability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fredriksson, A

    2016-06-15

    Purpose: The purpose of this work is to develop a method for inverse planning of radiation therapy margins. When using this method the user specifies a desired target coverage probability and the system optimizes to meet the demand without any explicit specification of margins to handle setup uncertainty. Methods: The method determines which voxels to include in an optimization function promoting target coverage in order to achieve a specified target coverage probability. Voxels are selected in a way that retains the correlation between them: The target is displaced according to the setup errors and the voxels to include are selected as the union of the displaced target regions under the x% best scenarios according to some quality measure. The quality measure could depend on the dose to the considered structure alone or could depend on the dose to multiple structures in order to take into account correlation between structures. Results: A target coverage function was applied to the CTV of a prostate case with prescription 78 Gy and compared to conventional planning using a DVH function on the PTV. Planning was performed to achieve 90% probability of CTV coverage. The plan optimized using the coverage probability function had P(D98 > 77.95 Gy) = 0.97 for the CTV. The PTV plan using a constraint on minimum DVH 78 Gy at 90% had P(D98 > 77.95) = 0.44 for the CTV. To match the coverage probability optimization, the DVH volume parameter had to be increased to 97% which resulted in 0.5 Gy higher average dose to the rectum. Conclusion: Optimizing a target coverage probability is an easily used method to find a margin that achieves the desired coverage probability. It can lead to reduced OAR doses at the same coverage probability compared to planning with margins and DVH functions.
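
    The voxel-selection step described above can be sketched with boolean masks: displace the target mask according to each setup-error scenario, rank the scenarios by some quality measure, and take the union of the displaced targets over the best fraction of scenarios. The array shapes, shift values, and quality measure below are assumptions for illustration, not the published implementation.

    ```python
    # Sketch of correlated voxel selection for coverage-probability optimization:
    # union of displaced target masks over the best-scoring setup scenarios.
    # Target mask, integer-voxel shifts, dose grid, and quality measure are all
    # hypothetical.
    import numpy as np

    target = np.zeros((20, 20, 20), dtype=bool)
    target[8:12, 8:12, 8:12] = True                      # toy CTV mask

    shifts = [(0, 0, 0), (1, 0, 0), (-1, 0, 0), (0, 2, 0), (0, 0, -2)]
    dose = np.random.default_rng(1).uniform(60.0, 80.0, target.shape)  # toy dose

    def shifted(mask, s):
        return np.roll(mask, s, axis=(0, 1, 2))

    # Quality measure per scenario: here, mean dose to the displaced target.
    quality = [dose[shifted(target, s)].mean() for s in shifts]

    coverage_prob = 0.8                                   # keep the 80% best scenarios
    n_keep = max(1, int(round(coverage_prob * len(shifts))))
    best = np.argsort(quality)[::-1][:n_keep]

    selection = np.zeros_like(target)
    for i in best:
        selection |= shifted(target, shifts[i])
    print(selection.sum(), "voxels included in the coverage objective")
    ```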

  14. Surveillance system and method having an adaptive sequential probability fault detection test

    NASA Technical Reports Server (NTRS)

    Herzog, James P. (Inventor); Bickford, Randall L. (Inventor)

    2005-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.
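
    One concrete, simplified realization of the surveillance approach described in these records is to fit a Gaussian to the residual-error signal observed during normal operation and then run a Wald sequential probability ratio test (SPRT) against a hypothesized mean shift. The Gaussian model, the mean-shift alternative, and all numeric values below are assumptions for illustration, not details taken from the patent.

    ```python
    # Simplified sketch: fit a normal PDF to training residuals, then run a
    # sequential probability ratio test (SPRT) for a mean shift on new residuals.
    import numpy as np

    rng = np.random.default_rng(2)
    training = rng.normal(0.0, 1.0, 5000)            # residuals under normal operation
    mu0, sigma = training.mean(), training.std()     # fitted PDF parameters
    mu1 = mu0 + 1.5 * sigma                          # hypothesized faulted mean

    alpha, beta = 0.01, 0.01                         # false/missed alarm probabilities
    upper = np.log((1 - beta) / alpha)               # Wald decision thresholds
    lower = np.log(beta / (1 - alpha))

    def sprt(residuals):
        llr = 0.0
        for k, x in enumerate(residuals, 1):
            llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
            if llr >= upper:
                return "fault", k
            if llr <= lower:
                return "normal", k
        return "undecided", len(residuals)

    print(sprt(rng.normal(1.5, 1.0, 200)))           # drifted signal -> "fault"
    print(sprt(rng.normal(0.0, 1.0, 200)))           # healthy signal -> "normal"
    ```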

  15. Surveillance system and method having an adaptive sequential probability fault detection test

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)

    2006-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.

  16. Surveillance System and Method having an Adaptive Sequential Probability Fault Detection Test

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)

    2008-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.

  17. Simple gain probability functions for large reflector antennas of JPL/NASA

    NASA Technical Reports Server (NTRS)

    Jamnejad, V.

    2003-01-01

    Simple models for the patterns as well as their cumulative gain probability and probability density functions of the Deep Space Network antennas are developed. These are needed for the study and evaluation of interference from unwanted sources such as the emerging terrestrial system, High Density Fixed Service, with the Ka-band receiving antenna systems in Goldstone Station of the Deep Space Network.

  18. Measuring the stellar luminosity function and spatial density profile of the inner 0.5 pc of the Milky Way nuclear star cluster

    NASA Astrophysics Data System (ADS)

    Do, Tuan; Ghez, Andrea; Lu, Jessica R.; Morris, Mark R.; Yelda, Sylvana; Martinez, Gregory D.; Peter, Annika H. G.; Wright, Shelley; Bullock, James; Kaplinghat, Manoj; Matthews, K.

    2012-07-01

    We report on measurements of the luminosity function of early (young) and late-type (old) stars in the central 0.5 pc of the Milky Way nuclear star cluster as well as the density profiles of both components. The young (~ 6 Myr) and old stars (> 1 Gyr) in this region provide different physical probes of the environment around a supermassive black hole; the luminosity function of the young stars offers us a way to measure the initial mass function from star formation in an extreme environment, while the density profile of the old stars offers us a probe of the dynamical interaction of a star cluster with a massive black hole. The two stellar populations are separated through a near-infrared spectroscopic survey using the integral-field spectrograph OSIRIS on Keck II behind the laser guide star adaptive optics system. This spectroscopic survey is able to separate early-type (young) and late-type (old) stars with a completeness of 50% at K' = 15.5. We describe our method of completeness correction using a combination of star planting simulations and Bayesian inference. The completeness corrected luminosity function of the early-type stars contains significantly more young stars at faint magnitudes compared to previous surveys with similar depth. In addition, by using proper motion and radial velocity measurements along with anisotropic spherical Jeans modeling of the cluster, it is possible to measure the spatial density profile of the old stars, which has been difficult to constrain with number counts alone. The most probable model shows the spatial density profile, n(r) ∝ r^-γ, to be shallow with γ = 0.4 ± 0.2, which is much flatter than the dynamically relaxed case of γ = 3/2 to 7/4, but does rule out a 'hole' in the distribution of old stars. We show, for the first time, that the spatial density profile, the black hole mass, and velocity anisotropy can be fit simultaneously to obtain a black hole mass that is consistent with that derived from individual orbits of stars at distances < 1000 AU from the Galactic center.

  19. Probabilistic Component Mode Synthesis of Nondeterministic Substructures

    NASA Technical Reports Server (NTRS)

    Brown, Andrew M.; Ferri, Aldo A.

    1996-01-01

    Standard methods of structural dynamic analysis assume that the structural characteristics are deterministic. Recognizing that these characteristics are actually statistical in nature, researchers have recently developed a variety of methods that use this information to determine probabilities of a desired response characteristic, such as natural frequency, without using expensive Monte Carlo simulations. One of the problems in these methods is correctly identifying the statistical properties of primitive variables such as geometry, stiffness, and mass. We present a method where the measured dynamic properties of substructures are used instead as the random variables. The residual flexibility method of component mode synthesis is combined with the probabilistic methods to determine the cumulative distribution function of the system eigenvalues. A simple cantilever beam test problem is presented that illustrates the theory.

  20. Experiments and Demonstrations in Physics: Bar-Ilan Physics Laboratory (2nd Edition)

    NASA Astrophysics Data System (ADS)

    Kraftmakher, Yaakov

    2014-08-01

    The following sections are included: * Data-acquisition systems from PASCO * ScienceWorkshop 750 Interface and DataStudio software * 850 Universal Interface and Capstone software * Mass on spring * Torsional pendulum * Hooke's law * Characteristics of DC source * Digital storage oscilloscope * Charging and discharging a capacitor * Charge and energy stored in a capacitor * Speed of sound in air * Lissajous patterns * I-V characteristics * Light bulb * Short time intervals * Temperature measurements * Oersted's great discovery * Magnetic field measurements * Magnetic force * Magnetic braking * Curie's point I * Electric power in AC circuits * Faraday's law of induction I * Self-inductance and mutual inductance * Electromagnetic screening * LCR circuit I * Coupled LCR circuits * Probability functions * Photometric laws * Kirchhoff's rule for thermal radiation * Malus' law * Infrared radiation * Irradiance and illuminance

  1. Exact calculation of loop formation probability identifies folding motifs in RNA secondary structures.

    PubMed

    Sloma, Michael F; Mathews, David H

    2016-12-01

    RNA secondary structure prediction is widely used to analyze RNA sequences. In an RNA partition function calculation, free energy nearest neighbor parameters are used in a dynamic programming algorithm to estimate statistical properties of the secondary structure ensemble. Previously, partition functions have largely been used to estimate the probability that a given pair of nucleotides form a base pair, the conditional stacking probability, the accessibility to binding of a continuous stretch of nucleotides, or a representative sample of RNA structures. Here it is demonstrated that an RNA partition function can also be used to calculate the exact probability of formation of hairpin loops, internal loops, bulge loops, or multibranch loops at a given position. This calculation can also be used to estimate the probability of formation of specific helices. Benchmarking on a set of RNA sequences with known secondary structures indicated that loops that were calculated to be more probable were more likely to be present in the known structure than less probable loops. Furthermore, highly probable loops are more likely to be in the known structure than the set of loops predicted in the lowest free energy structures. © 2016 Sloma and Mathews; Published by Cold Spring Harbor Laboratory Press for the RNA Society.
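
    As a point of contrast with the exact calculation described here, the simplest approximation of such loop probabilities is to count how often a feature appears in a Boltzmann-weighted sample of structures, one of the earlier uses of partition functions mentioned above. The toy sketch below does exactly that with made-up sampled structures; it is not the dynamic-programming method of this paper.

    ```python
    # Toy contrast to the exact method above: estimate the probability of a
    # structural feature (e.g. a hairpin loop closed by pair (i, j)) as its
    # frequency in a sample of structures drawn from the Boltzmann ensemble.
    # The sampled structures are hypothetical placeholders.
    sampled_structures = [
        {("hairpin", 12, 20), ("helix", 3, 30)},
        {("hairpin", 12, 20)},
        {("internal", 8, 25), ("helix", 3, 30)},
        {("hairpin", 12, 20), ("helix", 3, 30)},
    ]

    feature = ("hairpin", 12, 20)
    p_est = sum(feature in s for s in sampled_structures) / len(sampled_structures)
    print(f"estimated P(hairpin 12-20) = {p_est:.2f}")   # 0.75 in this toy sample
    ```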

  2. Video Shot Boundary Detection Using QR-Decomposition and Gaussian Transition Detection

    NASA Astrophysics Data System (ADS)

    Amiri, Ali; Fathy, Mahmood

    2010-12-01

    This article explores the problem of video shot boundary detection and examines a novel shot boundary detection algorithm by using QR-decomposition and modeling of gradual transitions by Gaussian functions. Specifically, the authors attend to the challenges of detecting gradual shots and extracting appropriate spatiotemporal features that affect the ability of algorithms to efficiently detect shot boundaries. The algorithm utilizes the properties of QR-decomposition and extracts a block-wise probability function that illustrates the probability of video frames to be in shot transitions. The probability function has abrupt changes in hard cut transitions, and semi-Gaussian behavior in gradual transitions. The algorithm detects these transitions by analyzing the probability function. Finally, we will report the results of the experiments using large-scale test sets provided by the TRECVID 2006, which has assessments for hard cut and gradual shot boundary detection. These results confirm the high performance of the proposed algorithm.

  3. Discriminating Among Probability Weighting Functions Using Adaptive Design Optimization

    PubMed Central

    Cavagnaro, Daniel R.; Pitt, Mark A.; Gonzalez, Richard; Myung, Jay I.

    2014-01-01

    Probability weighting functions relate objective probabilities and their subjective weights, and play a central role in modeling choices under risk within cumulative prospect theory. While several different parametric forms have been proposed, their qualitative similarities make it challenging to discriminate among them empirically. In this paper, we use both simulation and choice experiments to investigate the extent to which different parametric forms of the probability weighting function can be discriminated using adaptive design optimization, a computer-based methodology that identifies and exploits model differences for the purpose of model discrimination. The simulation experiments show that the correct (data-generating) form can be conclusively discriminated from its competitors. The results of an empirical experiment reveal heterogeneity between participants in terms of the functional form, with two models (Prelec-2, Linear in Log Odds) emerging as the most common best-fitting models. The findings shed light on assumptions underlying these models. PMID:24453406
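
    The two best-fitting forms named above have standard parameterizations: the two-parameter Prelec form w(p) = exp(−δ(−ln p)^γ) and the linear-in-log-odds form w(p) = δp^γ / (δp^γ + (1−p)^γ). The sketch below implements these commonly cited forms; the parameter values are illustrative, not estimates from the study.

    ```python
    # Common parameterizations of the two weighting functions named above.
    # Parameter values are illustrative only.
    import math

    def prelec2(p, gamma, delta):
        """Two-parameter Prelec weighting function: exp(-delta * (-ln p)^gamma)."""
        return math.exp(-delta * (-math.log(p)) ** gamma)

    def linear_in_log_odds(p, gamma, delta):
        """Linear-in-log-odds weighting function: delta*p^g / (delta*p^g + (1-p)^g)."""
        num = delta * p ** gamma
        return num / (num + (1.0 - p) ** gamma)

    for p in (0.01, 0.1, 0.5, 0.9, 0.99):
        print(p, round(prelec2(p, 0.65, 1.0), 3), round(linear_in_log_odds(p, 0.6, 0.8), 3))
    ```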

  4. Descriptive and Experimental Analyses of Potential Precursors to Problem Behavior

    PubMed Central

    Borrero, Carrie S.W; Borrero, John C

    2008-01-01

    We conducted descriptive observations of severe problem behavior for 2 individuals with autism to identify precursors to problem behavior. Several comparative probability analyses were conducted in addition to lag-sequential analyses using the descriptive data. Results of the descriptive analyses showed that the probability of the potential precursor was greater given problem behavior compared to the unconditional probability of the potential precursor. Results of the lag-sequential analyses showed a marked increase in the probability of a potential precursor in the 1-s intervals immediately preceding an instance of problem behavior, and that the probability of problem behavior was highest in the 1-s intervals immediately following an instance of the precursor. We then conducted separate functional analyses of problem behavior and the precursor to identify respective operant functions. Results of the functional analyses showed that both problem behavior and the precursor served the same operant functions. These results replicate prior experimental analyses on the relation between problem behavior and precursors and extend prior research by illustrating a quantitative method to identify precursors to more severe problem behavior. PMID:18468281
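
    The comparative probability and lag-sequential analyses described above reduce to counting conditional frequencies in 1-s bins; a minimal sketch with hypothetical binary event streams is shown below.

    ```python
    # Sketch of the probability comparisons above: from 1-s binary event streams,
    # compare P(precursor 1 s before problem behavior) with the unconditional
    # P(precursor), and estimate P(behavior 1 s after precursor). The event
    # streams are hypothetical.
    import numpy as np

    rng = np.random.default_rng(3)
    precursor = rng.random(600) < 0.05                       # 1-s precursor indicator
    behavior = np.zeros(600, dtype=bool)
    behavior[1:] = precursor[:-1] & (rng.random(599) < 0.6)  # behavior tends to follow

    p_precursor = precursor.mean()                           # unconditional probability
    p_precursor_given_behavior = precursor[:-1][behavior[1:]].mean()
    p_behavior_given_precursor = behavior[1:][precursor[:-1]].mean()

    print(f"P(precursor)                     = {p_precursor:.3f}")
    print(f"P(precursor 1 s before behavior) = {p_precursor_given_behavior:.3f}")
    print(f"P(behavior 1 s after precursor)  = {p_behavior_given_precursor:.3f}")
    ```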

  5. Cross-Sectional Relationships of Physical Activity and Sedentary Behavior With Cognitive Function in Older Adults With Probable Mild Cognitive Impairment.

    PubMed

    Falck, Ryan S; Landry, Glenn J; Best, John R; Davis, Jennifer C; Chiu, Bryan K; Liu-Ambrose, Teresa

    2017-10-01

    Mild cognitive impairment (MCI) represents a transition between normal cognitive aging and dementia and may represent a critical time frame for promoting cognitive health through behavioral strategies. Current evidence suggests that physical activity (PA) and sedentary behavior are important for cognition. However, it is unclear whether there are differences in PA and sedentary behavior between people with probable MCI and people without MCI or whether the relationships of PA and sedentary behavior with cognitive function differ by MCI status. The aims of this study were to examine differences in PA and sedentary behavior between people with probable MCI and people without MCI and whether associations of PA and sedentary behavior with cognitive function differed by MCI status. This was a cross-sectional study. Physical activity and sedentary behavior in adults dwelling in the community (N = 151; at least 55 years old) were measured using a wrist-worn actigraphy unit. The Montreal Cognitive Assessment was used to categorize participants with probable MCI (scores of <26/30) and participants without MCI (scores of ≥26/30). Cognitive function was indexed using the Alzheimer Disease Assessment Scale-Cognitive-Plus (ADAS-Cog Plus). Physical activity and sedentary behavior were compared based on probable MCI status, and relationships of ADAS-Cog Plus with PA and sedentary behavior were examined by probable MCI status. Participants with probable MCI (n = 82) had lower PA and higher sedentary behavior than participants without MCI (n = 69). Higher PA and lower sedentary behavior were associated with better ADAS-Cog Plus performance in participants without MCI (β = -.022 and β = .012, respectively) but not in participants with probable MCI (β < .001 for both). This study was cross-sectional and therefore could not establish whether conversion to MCI attenuated the relationships of PA and sedentary behavior with cognitive function. The diagnosis of MCI was not confirmed with a physician; therefore, this study could not conclude how many of the participants categorized as having probable MCI would actually have been diagnosed with MCI by a physician. Participants with probable MCI were less active and more sedentary. The relationships of these behaviors with cognitive function differed by MCI status; associations were found only in participants without MCI. © 2017 American Physical Therapy Association

  6. An Upper Bound on Neutron Star Masses from Models of Short Gamma-Ray Bursts

    NASA Astrophysics Data System (ADS)

    Lawrence, Scott; Tervala, Justin G.; Bedaque, Paulo F.; Miller, M. Coleman

    2015-08-01

    The discovery of two neutron stars with gravitational masses ≈ 2 M⊙ has placed a strong lower limit on the maximum mass of nonrotating neutron stars, and with it a strong constraint on the properties of cold matter beyond nuclear density. Current upper mass limits are much looser. Here, we note that if most short gamma-ray bursts are produced by the coalescence of two neutron stars, and if the merger remnant collapses quickly, then the upper mass limit is constrained tightly. If the rotation of the merger remnant is limited only by mass-shedding (which seems probable based on numerical studies), then the maximum gravitational mass of a nonrotating neutron star is ≈ 2-2.2 M⊙ if the masses of neutron stars that coalesce to produce gamma-ray bursts are in the range seen in Galactic double neutron star systems. These limits would be increased by ~4% in the probably unrealistic case that the remnants rotate at ~30% below mass-shedding, and by ~15% in the extreme case that the remnants do not rotate at all. Future coincident detection of short gamma-ray bursts with gravitational waves will strengthen these arguments because they will produce tight bounds on the masses of the components for individual events. If these limits are accurate, then a reasonable fraction of double neutron star mergers might not produce gamma-ray bursts. In that case, or in the case that many short bursts are produced instead by the mergers of neutron stars with black holes, the implied rate of gravitational wave detections will be increased.

  7. Modeling of Disordered Binary Alloys Under Thermal Forcing: Effect of Nanocrystallite Dissociation on Thermal Expansion of AuCu3

    NASA Astrophysics Data System (ADS)

    Kim, Y. W.; Cress, R. P.

    2016-11-01

    Disordered binary alloys are modeled as a randomly close-packed assembly of nanocrystallites intermixed with randomly positioned atoms, i.e., glassy-state matter. The nanocrystallite size distribution is measured in a simulated macroscopic medium in two dimensions. We have also defined, and measured, the degree of crystallinity as the probability of a particle being a member of nanocrystallites. Both the distribution function and the degree of crystallinity are found to be determined by alloy composition. When heated, the nanocrystallites become smaller in size due to increasing thermal fluctuation. We have modeled this phenomenon as a case of thermal dissociation by means of the law of mass action. The crystallite size distribution function is computed for AuCu3 as a function of temperature by solving some 12 000 coupled algebraic equations for the alloy. The results show that linear thermal expansion of the specimen has contributions from the temperature dependence of the degree of crystallinity, in addition to respective thermal expansions of the nanocrystallites and glassy-state matter.

  8. The relationship study between image features and detection probability based on psychology experiments

    NASA Astrophysics Data System (ADS)

    Lin, Wei; Chen, Yu-hua; Wang, Ji-yuan; Gao, Hong-sheng; Wang, Ji-jun; Su, Rong-hua; Mao, Wei

    2011-04-01

    Detection probability is an important index for representing and estimating target viability, and it provides a basis for target recognition and decision-making. However, obtaining detection probability in practice requires a great deal of time and manpower, and because interpreters differ in practical knowledge and experience, the data obtained often vary widely. By studying the relationship between image features and perception quantity through psychology experiments, a probability model has been established as follows. First, four image features that directly affect detection were extracted and quantified, and four feature similarity degrees between target and background were defined. Second, the relationship between each single image-feature similarity degree and perception quantity was established on psychological principles, and target-interpretation psychology experiments were designed involving about five hundred interpreters and two hundred images. To reduce the correlation among image features, a large set of synthetic images was produced, including images differing only in brightness, only in chromaticity, only in texture, and only in shape. The model parameters were then determined by analyzing and fitting the large body of experimental data. Finally, by applying statistical decision theory to the experimental results, the relationship between perception quantity and target detection probability was obtained. Verified against a large number of practical target interpretations, the model yields target detection probability quickly and objectively.

  9. Potential postwildfire debris-flow hazards - a prewildfire evaluation for the Sandia and Manzano Mountains and surrounding areas, central New Mexico

    Treesearch

    Anne C. Tillery; Jessica R. Haas; Lara W. Miller; Joe H. Scott; Matthew P. Thompson

    2014-01-01

    Wildfire can drastically increase the probability of debris flows, a potentially hazardous and destructive form of mass wasting, in landscapes that have otherwise been stable throughout recent history. Although there is no way to know the exact location, extent, and severity of wildfire, or the subsequent rainfall intensity and duration before it happens, probabilities...

  10. Negative values of quasidistributions and quantum wave and number statistics

    NASA Astrophysics Data System (ADS)

    Peřina, J.; Křepelka, J.

    2018-04-01

    We consider nonclassical wave and number quantum statistics, and perform a decomposition of quasidistributions for nonlinear optical down-conversion processes using Bessel functions. We show that negative values of the quasidistribution do not directly represent probabilities; however, they directly influence measurable number statistics. Negative terms in the decomposition related to the nonclassical behavior with negative amplitudes of probability can be interpreted as positive amplitudes of probability in the negative orthogonal Bessel basis, whereas positive amplitudes of probability in the positive basis describe classical cases. However, probabilities are positive in all cases, including negative values of quasidistributions. Negative and positive contributions of decompositions to quasidistributions are estimated. The approach can be adapted to quantum coherence functions.

  11. [Ecological executive function characteristics and effects of executive function on social adaptive function in school-aged children with epilepsy].

    PubMed

    Xu, X J; Wang, L L; Zhou, N

    2016-02-23

    To explore the characteristics of ecological executive function in school-aged children with idiopathic or probably symptomatic epilepsy and examine the effects of executive function on social adaptive function. A total of 51 school-aged children with idiopathic or probably symptomatic epilepsy aged 5-12 years at our hospital and 37 normal ones of the same gender, age and educational level were included. The differences in ecological executive function and social adaptive function were compared between the two groups with the Behavior Rating Inventory of Executive Function (BRIEF) and the Child Adaptive Behavior Scale; Pearson's correlation test and multiple stepwise linear regression were used to explore the impact of executive function on social adaptive function. The scores of school-aged children with idiopathic or probably symptomatic epilepsy in global executive composite (GEC), behavioral regulation index (BRI) and metacognition index (MI) of BRIEF (62±12, 58±13 and 63±12, respectively) were significantly higher than those of the control group (47±7, 44±6 and 48±8, respectively) (P<0.01). The scores of school-aged children with idiopathic or probably symptomatic epilepsy in adaptive behavior quotient (ADQ), independence, cognition and self-control (86±22, 32±17, 49±14 and 41±16, respectively) were significantly lower than those of the control group (120±12, 59±14, 59±7 and 68±10, respectively) (P<0.01). Pearson's correlation test showed that the BRIEF scores for GEC, BRI, MI, inhibition, emotional control, monitoring, initiation and working memory had significantly negative correlations with the scores for ADQ, independence and self-control (r=-0.313 to -0.741, P<0.05). Also, GEC, inhibition, MI, initiation, working memory, plan, organization and monitoring had significantly negative correlations with the score for cognition (r=-0.335 to -0.437, P<0.05). Multiple stepwise linear regression analysis showed that BRI, inhibition and working memory were closely related with the social adaptive function of school-aged children with idiopathic or probably symptomatic epilepsy. School-aged children with idiopathic or probably symptomatic epilepsy may have significant ecological executive function impairment and social adaptive function reduction. The aspects of BRI, inhibition and working memory in ecological executive function are significantly related with social adaptive function in school-aged children with epilepsy.

  12. Interpretation of the results of statistical measurements. [search for basic probability model

    NASA Technical Reports Server (NTRS)

    Olshevskiy, V. V.

    1973-01-01

    For random processes, the calculated probability characteristic and the measured statistical estimate are used in a quality functional, which defines the difference between the two functions. Based on the assumption that the statistical measurement procedure is organized so that the parameters for a selected model are optimized, it is shown that the interpretation of experimental research is a search for a basic probability model.
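
    One common concrete choice for such a quality functional, shown only as an illustration and not taken from the abstract, is a squared-difference measure between the model probability characteristic and its empirical estimate, minimized over the model parameters.

    ```python
    # Illustrative quality functional: mean squared difference between a model CDF
    # (here an exponential with rate parameter lam, an assumed model) and the
    # empirical CDF of measured data, minimized over the parameter.
    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(4)
    data = rng.exponential(scale=2.0, size=500)      # measured realizations
    xs = np.sort(data)
    empirical_cdf = np.arange(1, len(xs) + 1) / len(xs)

    def quality(lam):
        model_cdf = 1.0 - np.exp(-lam * xs)
        return np.mean((model_cdf - empirical_cdf) ** 2)

    res = minimize_scalar(quality, bounds=(1e-3, 10.0), method="bounded")
    print(f"best-fit rate ~ {res.x:.3f} (true value 0.5)")
    ```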

  13. Stellar-mass black holes and ultraluminous x-ray sources.

    PubMed

    Fender, Rob; Belloni, Tomaso

    2012-08-03

    We review the likely population, observational properties, and broad implications of stellar-mass black holes and ultraluminous x-ray sources. We focus on the clear empirical rules connecting accretion and outflow that have been established for stellar-mass black holes in binary systems in the past decade and a half. These patterns of behavior are probably the keys that will allow us to understand black hole feedback on the largest scales over cosmological time scales.

  14. Observation and mass measurement of the baryon Xib-.

    PubMed

    Aaltonen, T; Abulencia, A; Adelman, J; Affolder, T; Akimoto, T; Albrow, M G; Amerio, S; Amidei, D; Anastassov, A; Anikeev, K; Annovi, A; Antos, J; Aoki, M; Apollinari, G; Arisawa, T; Artikov, A; Ashmanskas, W; Attal, A; Aurisano, A; Azfar, F; Azzi-Bacchetta, P; Azzurri, P; Bacchetta, N; Badgett, W; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Baroiant, S; Bartsch, V; Bauer, G; Beauchemin, P-H; Bedeschi, F; Behari, S; Bellettini, G; Bellinger, J; Belloni, A; Benjamin, D; Beretvas, A; Beringer, J; Berry, T; Bhatti, A; Binkley, M; Bisello, D; Bizjak, I; Blair, R E; Blocker, C; Blumenfeld, B; Bocci, A; Bodek, A; Boisvert, V; Bolla, G; Bolshov, A; Bortoletto, D; Boudreau, J; Boveia, A; Brau, B; Brigliadori, L; Bromberg, C; Brubaker, E; Budagov, J; Budd, H S; Budd, S; Burkett, K; Busetto, G; Bussey, P; Buzatu, A; Byrum, K L; Cabrera, S; Campanelli, M; Campbell, M; Canelli, F; Canepa, A; Carrillo, S; Carlsmith, D; Carosi, R; Carron, S; Casal, B; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chang, S H; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, I; Cho, K; Chokheli, D; Chou, J P; Choudalakis, G; Chuang, S H; Chung, K; Chung, W H; Chung, Y S; Cilijak, M; Ciobanu, C I; Ciocci, M A; Clark, A; Clark, D; Coca, M; Compostella, G; Convery, M E; Conway, J; Cooper, B; Copic, K; Cordelli, M; Cortiana, G; Crescioli, F; Cuenca Almenar, C; Cuevas, J; Culbertson, R; Cully, J C; DaRonco, S; Datta, M; D'Auria, S; Davies, T; Dagenhart, D; de Barbaro, P; De Cecco, S; Deisher, A; De Lentdecker, G; De Lorenzo, G; Dell'Orso, M; Delli Paoli, F; Demortier, L; Deng, J; Deninno, M; De Pedis, D; Derwent, P F; Di Giovanni, G P; Dionisi, C; Di Ruzza, B; Dittmann, J R; D'Onofrio, M; Dörr, C; Donati, S; Dong, P; Donini, J; Dorigo, T; Dube, S; Efron, J; Erbacher, R; Errede, D; Errede, S; Eusebi, R; Fang, H C; Farrington, S; Fedorko, I; Fedorko, W T; Feild, R G; Feindt, M; Fernandez, J P; Field, R; Flanagan, G; Forrest, R; Forrester, S; Franklin, M; Freeman, J C; Furic, I; Gallinaro, M; Galyardt, J; Garcia, J E; Garberson, F; Garfinkel, A F; Gay, C; Gerberich, H; Gerdes, D; Giagu, S; Giannetti, P; Gibson, K; Gimmell, J L; Ginsburg, C; Giokaris, N; Giordani, M; Giromini, P; Giunta, M; Giurgiu, G; Glagolev, V; Glenzinski, D; Gold, M; Goldschmidt, N; Goldstein, J; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Gresele, A; Grinstein, S; Grosso-Pilcher, C; Group, R C; Grundler, U; Guimaraes da Costa, J; Gunay-Unalan, Z; Haber, C; Hahn, K; Hahn, S R; Halkiadakis, E; Hamilton, A; Han, B-Y; Han, J Y; Handler, R; Happacher, F; Hara, K; Hare, D; Hare, M; Harper, S; Harr, R F; Harris, R M; Hartz, M; Hatakeyama, K; Hauser, J; Hays, C; Heck, M; Heijboer, A; Heinemann, B; Heinrich, J; Henderson, C; Herndon, M; Heuser, J; Hidas, D; Hill, C S; Hirschbuehl, D; Hocker, A; Holloway, A; Hou, S; Houlden, M; Hsu, S-C; Huffman, B T; Hughes, R E; Husemann, U; Huston, J; Incandela, J; Introzzi, G; Iori, M; Ivanov, A; Iyutin, B; James, E; Jang, D; Jayatilaka, B; Jeans, D; Jeon, E J; Jindariani, S; Johnson, W; Jones, M; Joo, K K; Jun, S Y; Jung, J E; Junk, T R; Kamon, T; Karchin, P E; Kato, Y; Kemp, Y; Kephart, R; Kerzel, U; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kimura, N; Kirsch, L; Klimenko, S; Klute, M; Knuteson, B; Ko, B R; Kondo, K; Kong, D J; Konigsberg, J; Korytov, A; Kotwal, A V; Kraan, A C; Kraus, J; Kreps, M; Kroll, J; Krumnack, N; Kruse, M; 
Krutelyov, V; Kubo, T; Kuhlmann, S E; Kuhr, T; Kulkarni, N P; Kusakabe, Y; Kwang, S; Laasanen, A T; Lai, S; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, G; Lazzizzera, I; LeCompte, T; Lee, J; Lee, J; Lee, Y J; Lee, S W; Lefèvre, R; Leonardo, N; Leone, S; Levy, S; Lewis, J D; Lin, C; Lin, C S; Lindgren, M; Lipeles, E; Lister, A; Litvintsev, D O; Liu, T; Lockyer, N S; Loginov, A; Loreti, M; Lu, R-S; Lucchesi, D; Lujan, P; Lukens, P; Lungu, G; Lyons, L; Lys, J; Lysak, R; Lytken, E; Mack, P; MacQueen, D; Madrak, R; Maeshima, K; Makhoul, K; Maki, T; Maksimovic, P; Malde, S; Malik, S; Manca, G; Manousakis, A; Margaroli, F; Marginean, R; Marino, C; Marino, C P; Martin, A; Martin, M; Martin, V; Martínez, M; Martínez-Ballarín, R; Maruyama, T; Mastrandrea, P; Masubuchi, T; Matsunaga, H; Mattson, M E; Mazini, R; Mazzanti, P; McFarland, K S; McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Menzemer, S; Menzione, A; Merkel, P; Mesropian, C; Messina, A; Miao, T; Miladinovic, N; Miles, J; Miller, R; Mills, C; Milnik, M; Mitra, A; Mitselmakher, G; Miyamoto, A; Moed, S; Moggi, N; Mohr, B; Moon, C S; Moore, R; Morello, M; Movilla Fernandez, P; Mülmenstädt, J; Mukherjee, A; Muller, Th; Mumford, R; Murat, P; Mussini, M; Nachtman, J; Nagano, A; Naganoma, J; Nakamura, K; Nakano, I; Napier, A; Necula, V; Neu, C; Neubauer, M S; Nielsen, J; Nodulman, L; Norniella, O; Nurse, E; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Oldeman, R; Orava, R; Osterberg, K; Pagliarone, C; Palencia, E; Papadimitriou, V; Papaikonomou, A; Paramonov, A A; Parks, B; Pashapour, S; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Piedra, J; Pinera, L; Pitts, K; Plager, C; Pondrom, L; Portell, X; Poukhov, O; Pounder, N; Prakoshyn, F; Pronko, A; Proudfoot, J; Ptohos, F; Punzi, G; Pursley, J; Rademacker, J; Rahaman, A; Ramakrishnan, V; Ranjan, N; Redondo, I; Reisert, B; Rekovic, V; Renton, P; Rescigno, M; Richter, S; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rogers, E; Rolli, S; Roser, R; Rossi, M; Rossin, R; Roy, P; Ruiz, A; Russ, J; Rusu, V; Saarikko, H; Safonov, A; Sakumoto, W K; Salamanna, G; Saltó, O; Santi, L; Sarkar, S; Sartori, L; Sato, K; Savard, P; Savoy-Navarro, A; Scheidle, T; Schlabach, P; Schmidt, E E; Schmidt, M P; Schmitt, M; Schwarz, T; Scodellaro, L; Scott, A L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semenov, A; Sexton-Kennedy, L; Sfyrla, A; Shalhout, S Z; Shapiro, M D; Shears, T; Shepard, P F; Sherman, D; Shimojima, M; Shochet, M; Shon, Y; Shreyber, I; Sidoti, A; Sinervo, P; Sisakyan, A; Slaughter, A J; Slaunwhite, J; Sliwa, K; Smith, J R; Snider, F D; Snihur, R; Soderberg, M; Soha, A; Somalwar, S; Sorin, V; Spalding, J; Spinella, F; Spreitzer, T; Squillacioti, P; Stanitzki, M; Staveris-Polykalas, A; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Stuart, D; Suh, J S; Sukhanov, A; Sun, H; Suslov, I; Suzuki, T; Taffard, A; Takashima, R; Takeuchi, Y; Tanaka, R; Tecchio, M; Teng, P K; Terashi, K; Thom, J; Thompson, A S; Thomson, E; Tipton, P; Tiwari, V; Tkaczyk, S; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Tourneur, S; Trischuk, W; Tsuno, S; Tu, Y; Turini, N; Ukegawa, F; Uozumi, S; Vallecorsa, S; van Remortel, N; Varganov, A; Vataga, E; Vazquez, F; Velev, G; Vellidis, C; Veramendi, G; Veszpremi, V; Vidal, M; Vidal, R; Vila, I; Vilar, R; Vine, T; Vogel, M; Vollrath, I; Volobouev, I; Volpi, G; Würthwein, F; Wagner, P; Wagner, R G; Wagner, R L; Wagner, J; Wagner, W; Wallny, 
R; Wang, S M; Warburton, A; Waters, D; Weinberger, M; Wester, W C; Whitehouse, B; Whiteson, D; Wicklund, A B; Wicklund, E; Williams, G; Williams, H H; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, C; Wright, T; Wu, X; Wynne, S M; Yagil, A; Yamamoto, K; Yamaoka, J; Yamashita, T; Yang, C; Yang, U K; Yang, Y C; Yao, W M; Yeh, G P; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, I; Yu, S S; Yun, J C; Zanello, L; Zanetti, A; Zaw, I; Zhang, X; Zhou, J; Zucchelli, S

    2007-08-03

    We report the observation and measurement of the mass of the bottom, strange baryon Xi(b)- through the decay chain Xi(b)- --> J/psi Xi-, where J/psi --> mu+ mu-, Xi- --> Lambda pi-, and Lambda --> p pi-. A signal is observed whose probability of arising from a background fluctuation is 6.6 x 10^-15, or 7.7 Gaussian standard deviations. The Xi(b)- mass is measured to be 5792.9 +/- 2.5 (stat) +/- 1.7 (syst) MeV/c^2.

  15. β cell function and insulin resistance in lean cases with polycystic ovary syndrome.

    PubMed

    Pande, Arunkumar R; Guleria, Ashwani Kumar; Singh, Sudhanshu Dev; Shukla, Manoj; Dabadghao, Preeti

    2017-11-01

    Obesity is a major factor in the development of insulin resistance (IR) and metabolic features in polycystic ovary syndrome (PCOS) patients. Nearly two-thirds of the patients with PCOS (30 of 37 confirmed cases) in our previous community-based study were lean, in contrast to Caucasian populations. Metabolic parameters, including IR and β cell function, have not been well characterized in this group of lean PCOS. To study the metabolic features, including IR and β cell function, in lean PCOS patients, 53 patients with BMI <23 kg/m^2 were compared with 71 obese PCOS patients and 45 age- and body mass index-matched controls. Lean patients had β cell function and IR similar to controls and obese patients, although the latter group had more metabolic abnormalities. Fasting C-peptide and its ratio to glucose were significantly higher in lean patients compared with controls. In a subset of subjects with a five-point OGTT, the disposition index and Matsuda index (MI) showed significant negative correlations with BMI and blood pressure. MI also correlated negatively with waist circumference, WHR, and HOMA-B. High fasting C-peptide is probably a class effect, as it is seen in both lean and obese PCOS.

  16. Delay, probability, and social discounting in a public goods game.

    PubMed

    Jones, Bryan A; Rachlin, Howard

    2009-01-01

    A human social discount function measures the value to a person of a reward to another person at a given social distance. Just as delay discounting is a hyperbolic function of delay, and probability discounting is a hyperbolic function of odds-against, social discounting is a hyperbolic function of social distance. Experiment 1 obtained individual social, delay, and probability discount functions for a hypothetical $75 reward; participants also indicated how much of an initial $100 endowment they would contribute to a common investment in a public good. Steepness of discounting correlated, across participants, among all three discount dimensions. However, only social and probability discounting were correlated with the public-good contribution; high public-good contributors were more altruistic and also less risk averse than low contributors. Experiment 2 obtained social discount functions with hypothetical $75 rewards and delay discount functions with hypothetical $1,000 rewards, as well as public-good contributions. The results replicated those of Experiment 1; steepness of the two forms of discounting correlated with each other across participants but only social discounting correlated with the public-good contribution. Most participants in Experiment 2 predicted that the average contribution would be lower than their own contribution.
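
    To make the hyperbolic form mentioned above concrete, the sketch below evaluates a generic hyperbolic discount function along the delay, odds-against, and social-distance dimensions. The function name, parameter values, and example inputs are illustrative assumptions, not values estimated in the study.

        import numpy as np

        def hyperbolic_discount(amount, x, k):
            """Generic hyperbolic discount: value = amount / (1 + k * x).

            x is the delay, odds-against, or social distance, depending on which
            dimension is being discounted; k is an (assumed) steepness parameter.
            """
            return amount / (1.0 + k * np.asarray(x, dtype=float))

        # Illustrative steepness values; none of these are fitted to the study's data.
        delays = np.array([0, 30, 180, 365])          # days
        odds_against = np.array([0, 1, 3, 9])         # (1 - p) / p
        social_distance = np.array([1, 5, 20, 100])   # rank of the other person

        print(hyperbolic_discount(75.0, delays, k=0.01))
        print(hyperbolic_discount(75.0, odds_against, k=0.5))
        print(hyperbolic_discount(75.0, social_distance, k=0.05))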

  17. Sub-barrier quasifission in heavy element formation reactions with deformed actinide target nuclei

    NASA Astrophysics Data System (ADS)

    Hinde, D. J.; Jeung, D. Y.; Prasad, E.; Wakhle, A.; Dasgupta, M.; Evers, M.; Luong, D. H.; du Rietz, R.; Simenel, C.; Simpson, E. C.; Williams, E.

    2018-02-01

    Background: The formation of superheavy elements (SHEs) by fusion of two massive nuclei is severely inhibited by the competing quasifission process. Low excitation energies favor SHE survival against fusion-fission competition. In "cold" fusion with spherical target nuclei near 208Pb, SHE yields are largest at beam energies significantly below the average capture barrier. In "hot" fusion with statically deformed actinide nuclei, this is not the case. Here the elongated deformation-aligned configurations in sub-barrier capture reactions inhibit fusion (formation of a compact compound nucleus), instead favoring rapid reseparation through quasifission. Purpose: To determine the probabilities of fast and slow quasifission in reactions with prolate statically deformed actinide nuclei, through measurement and quantitative analysis of quasifission characteristics at beam energies spanning the average capture barrier energy. Methods: The Australian National University Heavy Ion Accelerator Facility and CUBE fission spectrometer have been used to measure fission and quasifission mass and angle distributions for reactions with projectiles from C to S, bombarding Th and U target nuclei. Results: Mass-asymmetric quasifission occurring on a fast time scale, associated with collisions with the tips of the prolate actinide nuclei, shows a rapid increase in probability with increasing projectile charge, the transition being centered around projectile atomic number ZP = 14. For mass-symmetric fission events, deviations of angular anisotropies from expectations for fusion fission, indicating a component of slower quasifission, suggest a similar transition, but centered around ZP ~ 8. Conclusions: Collisions with the tips of statically deformed prolate actinide nuclei show evidence for two distinct quasifission processes of different time scales. Their probabilities both increase rapidly with the projectile charge. The probability of fusion can be severely suppressed by these two quasifission processes, since the sub-barrier heavy element yield is likely to be determined by the product of the probabilities of surviving each quasifission process.

  18. A Mass Census of the Nearby Universe with RESOLVE and ECO

    NASA Astrophysics Data System (ADS)

    Eckert, Kathleen D.; Kannappan, Sheila; Stark, David; Moffett, Amanda J.; Norris, Mark A.; Berlind, Andreas A.; Hall, Kirsten; Baker, Ashley; Snyder, Elaine M.; Bittner, Ashley; Hoversten, Erik A.; Lagos, Claudia; Nasipak, Zachary; RESOLVE Team

    2017-01-01

    The low-mass slope of the galaxy stellar mass function is significantly shallower than that of the theoretical dark matter halo mass function, leading to several possible interpretations including: 1) stellar mass does not fully represent galaxy mass, 2) galaxy formation becomes increasingly inefficient in lower mass halos, and 3) environmental effects, such as stripping and merging, may change the mass function. To investigate these possible scenarios, we present the census of stellar, baryonic (stars + cold gas), and dynamical masses of galaxies and galaxy groups for the RESOLVE and ECO surveys. RESOLVE is a highly complete volume-limited survey of ~1500 galaxies, enabling direct measurement of galaxy mass functions without statistical completeness corrections down to baryonic mass Mb ~ 10^9 Msun. ECO provides a larger data set (~10,000 galaxies) complete down to Mb ~ 10^9.4 Msun. We show that the baryonic mass function has a steeper low-mass slope than the stellar mass function due to the large population of low-mass, gas-rich galaxies. The baryonic mass function’s low-mass slope, however, is still significantly shallower than that of the dark matter halo mass function. A more direct probe of total galaxy mass is its characteristic velocity, and we present RESOLVE’s preliminary galaxy velocity function, which combines ionized-gas rotation curves, stellar velocity dispersions, and estimates from scaling relations. The velocity function also diverges from the dark matter halo velocity function at low masses. To study the effect of environment, we break the mass functions into different group halo mass bins, finding complex substructure, including a depressed and flat low-mass slope for groups with halo masses ~10^11.4-12 Msun, which we refer to as the nascent group regime, with typical membership of 2-4 galaxies. This substructure is suggestive of efficient merging or gas stripping in nascent groups, which we find also have large scatter in their cold-baryon fractions, possibly pointing to diversity in hot halo gas content in this regime. This work is supported by NSF grant AST-0955368, the NC Space Grant Graduate Research Fellowship Program, and a UNC Royster Society Dissertation Completion Fellowship.

  19. Peritoneal mucinous cystadenocarcinoma of probable urachal origin: a challenging diagnosis

    PubMed Central

    Gore, D M; Bloch, S; Waller, W; Cohen, P

    2006-01-01

    This report describes the case of a mucinous cystadenocarcinoma of probable urachal origin that presented with mass effect, precipitating deep venous thrombosis and pulmonary embolism. The patient presented with acute symptoms of leg swelling, pain and dyspnoea, and a vague awareness of lower abdominal distension. Computed tomography showed a cystic mass closely related to the anterior abdominal wall and the superior aspect of the bladder. A 1500 cm^3 cyst adherent to the dome of the urinary bladder was resected at laparotomy. Partial cystectomy was not carried out in the belief that the cyst represented a benign lesion. Subsequent imaging has shown cystic changes in the anterior bladder wall, and the patient has been referred for partial cystectomy. PMID:17021133

  20. Upbend and M1 scissors mode in neutron-rich nuclei - consequences for r-process $$(n,\\gamma )$$ reaction rates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larsen, A. C.; Goriely, S.; Bernstein, L. A.

    2015-01-01

    An enhanced probability for low-energy γ-emission (upbend, Eγ < 3 MeV) at high excitation energies has been observed for several light and medium-mass nuclei close to the valley of stability. Also the M1 scissors mode seen in deformed nuclei increases the γ-decay probability for low-energy γ-rays (Eγ ≈ 2–3 MeV). These phenomena, if present in neutron-rich nuclei, have the potential to increase radiative neutron-capture rates relevant for the r-process. Furthermore, the experimental and theoretical status of the upbend is discussed, and preliminary calculations of (n,γ) reaction rates for neutron-rich, mid-mass nuclei including the scissors mode are shown.

  1. Computer routines for probability distributions, random numbers, and related functions

    USGS Publications Warehouse

    Kirby, W.

    1983-01-01

    Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor F. Other mathematical functions include the Bessel function I_0, gamma and log-gamma functions, error functions, and the exponential integral. Auxiliary services include sorting and printer-plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)
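
    For readers who want a modern analogue of these general-purpose routines, the sketch below evaluates several of the listed distributions with SciPy and maps uniform random numbers onto one of them through the inverse CDF. It is a rough Python equivalent for illustration only, not the USGS Fortran library itself.

        import numpy as np
        from scipy import stats

        x = 2.5

        # Cumulative probabilities for several of the distributions named in the report.
        print(stats.norm.cdf(x))                  # Gaussian (normal)
        print(stats.chi2.cdf(x, df=4))            # chi-square, 4 degrees of freedom
        print(stats.gamma.cdf(x, a=2.0))          # gamma, shape 2
        print(stats.weibull_min.cdf(x, c=1.5))    # Weibull, shape 1.5
        print(stats.t.cdf(x, df=10))              # Student's t
        print(stats.f.cdf(x, dfn=3, dfd=12))      # Snedecor F

        # Uniform random numbers can be mapped to other distributions through the
        # inverse CDF (percent-point function), as the report suggests.
        rng = np.random.default_rng(0)
        u = rng.uniform(size=5)
        print(stats.gamma.ppf(u, a=2.0))          # gamma-distributed random numbers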

  2. Computer routines for probability distributions, random numbers, and related functions

    USGS Publications Warehouse

    Kirby, W.H.

    1980-01-01

    Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and the Snedecor F test. Other mathematical functions include the Bessel function I_0, gamma and log-gamma functions, error functions, and the exponential integral. Auxiliary services include sorting and printer plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)

  3. Validation: Codes to compare simulation data to various observations

    NASA Astrophysics Data System (ADS)

    Cohn, J. D.

    2017-02-01

    Validation provides codes to compare simulated data against several observations. The codes compare simulated stellar masses and star formation rates with observations, simulated stellar mass functions with observed stellar mass functions from PRIMUS or SDSS-GALEX in several redshift bins spanning 0.01-1.0, and simulated B-band luminosity functions with observed luminosity functions. They also create plots for various attributes, including stellar mass functions and the stellar mass to halo mass relation. These codes can display model predictions (in some cases alongside observational data) and can be used to test other mock catalogs.
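
    A minimal sketch of the kind of comparison such codes perform is given below: simulated stellar masses are binned into a mass function that could be overplotted against an observed one. The catalog values, bin edges, and volume normalization are assumptions made for illustration, not the package's actual interface.

        import numpy as np
        import matplotlib.pyplot as plt

        def stellar_mass_function(log_mstar, volume, bins):
            """Number density per dex from a list of log10 stellar masses."""
            counts, edges = np.histogram(log_mstar, bins=bins)
            phi = counts / (volume * np.diff(edges))      # Mpc^-3 dex^-1
            centers = 0.5 * (edges[:-1] + edges[1:])
            return centers, phi

        # Hypothetical simulated catalog: log10(M*/Msun) values in a comoving volume.
        rng = np.random.default_rng(1)
        sim_logm = rng.normal(10.0, 0.6, size=20000)
        bins = np.arange(8.0, 12.0, 0.25)
        centers, phi_sim = stellar_mass_function(sim_logm, volume=1.0e6, bins=bins)

        plt.semilogy(centers, phi_sim, label="simulation")
        # An observed mass function (e.g. from PRIMUS or SDSS-GALEX) for the same
        # redshift bin would be overplotted here as points with error bars.
        plt.xlabel("log10(M*/Msun)")
        plt.ylabel("phi [Mpc^-3 dex^-1]")
        plt.legend()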

  4. [Studies of effects of aluminum oxide nanoparticles after intragastric administration].

    PubMed

    Shumakova, A A; Tananova, O N; Arianova, E A; Mal'tsev, G Iu; Trushina, É N; Mustafina, O K; Guseva, G V; Trusov, N V; Soto, S Kh; Sharanova, N É; Selifanov, A V; Gmoshinskiĭ, I V; Khotimchenko, S A

    2012-01-01

    Growing Wistar rats received aluminum oxide (Al2O3) nanoparticles (NPs) intragastrically, daily for 28 days, at doses of 1 or 100 mg per kg body mass. The study examined the body mass of the animals, the relative masses of internal organs, the rate of protein macromolecule absorption in the gut, oxidative damage to DNA, the tissue thiol pool, the activity of hepatic enzymes of the xenobiotic detoxification system, biochemical and hematological blood indices, the stability of lysosome membranes, the state of the antioxidant defense system, and apoptosis of hepatocytes. The experiments did not reveal any marked toxic action of Al2O3 NPs on rats after 28 days of administration at either the high or the low dose. Effects probably related to the influence of the NPs on the animals included lowered relative liver and lung masses, a decreased hepatic thiol pool, decreased activity of the CYP1A1 isoform in liver and of glutathione reductase in erythrocytes, and an increase of diene conjugates of fatty acids in blood plasma. These shifts were small in magnitude, remained within physiological norms, and showed no distinct relation to NP dose. However, considering the importance of this nanomaterial as a probable environmental contaminant, studies of its toxicity should be continued at low doses (less than 1 mg per kg body mass) over long periods of time.

  5. Sources and geographical origins of fine aerosols in Paris (France)

    NASA Astrophysics Data System (ADS)

    Bressi, M.; Sciare, J.; Ghersi, V.; Mihalopoulos, N.; Petit, J.-E.; Nicolas, J. B.; Moukhtar, S.; Rosso, A.; Féron, A.; Bonnaire, N.; Poulakis, E.; Theodosi, C.

    2013-12-01

    The present study aims at identifying and apportioning the major sources of fine aerosols in Paris (France) - the second largest megacity in Europe - and determining their geographical origins. It is based on the daily chemical composition of PM2.5 characterised during one year at an urban background site of Paris (Bressi et al., 2013). Positive Matrix Factorization (EPA PMF3.0) was used to identify and apportion the sources of fine aerosols; bootstrapping was performed to determine the adequate number of PMF factors, and statistics (root mean square error, coefficient of determination, etc.) were examined to better model PM2.5 mass and chemical components. Potential Source Contribution Function (PSCF) and Conditional Probability Function (CPF) allowed the geographical origins of the sources to be assessed; special attention was paid to implement suitable weighting functions. Seven factors named ammonium sulfate (A.S.) rich factor, ammonium nitrate (A.N.) rich factor, heavy oil combustion, road traffic, biomass burning, marine aerosols and metals industry were identified; a detailed discussion of their chemical characteristics is reported. They respectively contribute 27, 24, 17, 14, 12, 6 and 1% of PM2.5 mass (14.7 μg m-3) on the annual average; their seasonal variability is discussed. The A.S. and A.N. rich factors have undergone north-eastward mid- or long-range transport from Continental Europe, heavy oil combustion mainly stems from northern France and the English Channel, whereas road traffic and biomass burning are primarily locally emitted. Therefore, on average more than half of PM2.5 mass measured in the city of Paris is due to mid- or long-range transport of secondary aerosols stemming from continental Europe, whereas local sources only contribute a quarter of the annual averaged mass. These results imply that fine aerosols abatement policies conducted at the local scale may not be sufficient to notably reduce PM2.5 levels at urban background sites in Paris, suggesting instead more coordinated strategies amongst neighbouring countries. Similar conclusions might be drawn in other continental urban background sites given the transboundary nature of PM2.5 pollution.
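
    As an illustration of the conditional probability function mentioned above, the sketch below computes, for each wind-direction sector, the fraction of hours whose source contribution exceeds a chosen percentile. The sector width, percentile threshold, and synthetic data are assumptions, not the settings used in the study.

        import numpy as np

        def conditional_probability_function(wind_dir_deg, contribution,
                                             sector_width=30.0, percentile=75):
            """CPF: fraction of occurrences per wind sector with contribution above a threshold."""
            threshold = np.percentile(contribution, percentile)
            edges = np.arange(0.0, 360.0 + sector_width, sector_width)
            cpf = []
            for lo, hi in zip(edges[:-1], edges[1:]):
                in_sector = (wind_dir_deg >= lo) & (wind_dir_deg < hi)
                n_total = in_sector.sum()
                n_high = (in_sector & (contribution > threshold)).sum()
                cpf.append(n_high / n_total if n_total > 0 else np.nan)
            return edges[:-1], np.array(cpf)

        # Hypothetical hourly wind directions and PMF factor contributions.
        rng = np.random.default_rng(2)
        wind = rng.uniform(0.0, 360.0, size=8760)
        contrib = rng.lognormal(mean=1.0, sigma=0.8, size=8760)
        sectors, cpf = conditional_probability_function(wind, contrib)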

  6. Sources and geographical origins of fine aerosols in Paris (France)

    NASA Astrophysics Data System (ADS)

    Bressi, M.; Sciare, J.; Ghersi, V.; Mihalopoulos, N.; Petit, J.-E.; Nicolas, J. B.; Moukhtar, S.; Rosso, A.; Féron, A.; Bonnaire, N.; Poulakis, E.; Theodosi, C.

    2014-08-01

    The present study aims at identifying and apportioning fine aerosols to their major sources in Paris (France) - the second most populated "larger urban zone" in Europe - and determining their geographical origins. It is based on the daily chemical composition of PM2.5 examined over 1 year at an urban background site of Paris (Bressi et al., 2013). Positive matrix factorization (EPA PMF3.0) was used to identify and apportion fine aerosols to their sources; bootstrapping was performed to determine the adequate number of PMF factors, and statistics (root mean square error, coefficient of determination, etc.) were examined to better model PM2.5 mass and chemical components. Potential source contribution function (PSCF) and conditional probability function (CPF) allowed the geographical origins of the sources to be assessed; special attention was paid to implement suitable weighting functions. Seven factors, namely ammonium sulfate (A.S.)-rich factor, ammonium nitrate (A.N.)-rich factor, heavy oil combustion, road traffic, biomass burning, marine aerosols and metal industry, were identified; a detailed discussion of their chemical characteristics is reported. They contribute 27, 24, 17, 14, 12, 6 and 1% of PM2.5 mass (14.7 μg m-3) respectively on the annual average; their seasonal variability is discussed. The A.S.- and A.N.-rich factors have undergone mid- or long-range transport from continental Europe; heavy oil combustion mainly stems from northern France and the English Channel, whereas road traffic and biomass burning are primarily locally emitted. Therefore, on average more than half of PM2.5 mass measured in the city of Paris is due to mid- or long-range transport of secondary aerosols stemming from continental Europe, whereas local sources only contribute a quarter of the annual averaged mass. These results imply that fine-aerosol abatement policies conducted at the local scale may not be sufficient to notably reduce PM2.5 levels at urban background sites in Paris, suggesting instead more coordinated strategies amongst neighbouring countries. Similar conclusions might be drawn in other continental urban background sites given the transboundary nature of PM2.5 pollution.

  7. Singular solution of the Feller diffusion equation via a spectral decomposition.

    PubMed

    Gan, Xinjun; Waxman, David

    2015-01-01

    Feller studied a branching process and found that the distribution for this process approximately obeys a diffusion equation [W. Feller, in Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability (University of California Press, Berkeley and Los Angeles, 1951), pp. 227-246]. This diffusion equation and its generalizations play an important role in many scientific problems, including physics, biology, finance, and probability theory. We work under the assumption that the fundamental solution represents a probability density and should account for all of the probability in the problem. Thus, under the circumstances where the random process can be irreversibly absorbed at the boundary, this should lead to the presence of a Dirac delta function in the fundamental solution at the boundary. However, such a feature is not present in the standard approach (Laplace transformation). Here we require that the total integrated probability is conserved. This yields a fundamental solution which, when appropriate, contains a term proportional to a Dirac delta function at the boundary. We determine the fundamental solution directly from the diffusion equation via spectral decomposition. We obtain exact expressions for the eigenfunctions, and when the fundamental solution contains a Dirac delta function at the boundary, every eigenfunction of the forward diffusion operator contains a delta function. We show how these combine to produce a weight of the delta function at the boundary which ensures the total integrated probability is conserved. The solution we present covers cases where parameters are time dependent, thereby greatly extending its applicability.
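
    For orientation, one common normalization of the Feller diffusion equation, and the delta-function-augmented form of the solution described above, can be written as follows; the coefficients are a standard textbook choice assumed here for illustration, not necessarily the normalization used by the authors.

        \frac{\partial p(x,t)}{\partial t}
          = \frac{\partial^{2}}{\partial x^{2}}\bigl[x\,p(x,t)\bigr]
          - \alpha\,\frac{\partial}{\partial x}\bigl[x\,p(x,t)\bigr],
        \qquad x \ge 0,

        p(x,t) = \Pi(t)\,\delta(x) + q(x,t),
        \qquad
        \Pi(t) + \int_{0}^{\infty} q(x,t)\,\mathrm{d}x = 1,

    where \Pi(t) is the probability that the process has been absorbed at x = 0 by time t and q(x,t) is the smooth part of the density; the \Pi(t)\,\delta(x) term is the contribution that the Laplace-transform treatment omits.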

  8. Singular solution of the Feller diffusion equation via a spectral decomposition

    NASA Astrophysics Data System (ADS)

    Gan, Xinjun; Waxman, David

    2015-01-01

    Feller studied a branching process and found that the distribution for this process approximately obeys a diffusion equation [W. Feller, in Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability (University of California Press, Berkeley and Los Angeles, 1951), pp. 227-246]. This diffusion equation and its generalizations play an important role in many scientific problems, including physics, biology, finance, and probability theory. We work under the assumption that the fundamental solution represents a probability density and should account for all of the probability in the problem. Thus, under the circumstances where the random process can be irreversibly absorbed at the boundary, this should lead to the presence of a Dirac delta function in the fundamental solution at the boundary. However, such a feature is not present in the standard approach (Laplace transformation). Here we require that the total integrated probability is conserved. This yields a fundamental solution which, when appropriate, contains a term proportional to a Dirac delta function at the boundary. We determine the fundamental solution directly from the diffusion equation via spectral decomposition. We obtain exact expressions for the eigenfunctions, and when the fundamental solution contains a Dirac delta function at the boundary, every eigenfunction of the forward diffusion operator contains a delta function. We show how these combine to produce a weight of the delta function at the boundary which ensures the total integrated probability is conserved. The solution we present covers cases where parameters are time dependent, thereby greatly extending its applicability.

  9. Methodologies For A Physically Based Rockfall Hazard Assessment

    NASA Astrophysics Data System (ADS)

    Agliardi, F.; Crosta, G. B.; Guzzetti, F.; Marian, M.

    Rockfall hazard assessment is an important land planning tool in alpine areas, where settlements progressively expand across rockfall prone areas, raising the vulnerability of the elements at risk, the worth of potential losses and the restoration costs. Nevertheless, hazard definition is not simple to achieve in practice and sound, physically based assessment methodologies are still missing. In addition, the high mobility of rockfalls implies a more difficult hazard definition with respect to other slope instabilities for which runout is minimal. When coping with rockfalls, hazard assessment involves complex definitions for "occurrence probability" and "intensity". The local occurrence probability must derive from the combination of the triggering probability (related to the geomechanical susceptibility of rock masses to fail) and the transit or impact probability at a given location (related to the motion of falling blocks). The intensity (or magnitude) of a rockfall is a complex function of mass, velocity and fly height of involved blocks that can be defined in many different ways depending on the adopted physical description and "destructiveness" criterion. This work is an attempt to evaluate rockfall hazard using the results of numerical modelling performed by an original 3D rockfall simulation program. This is based on a kinematic algorithm and allows the spatially distributed simulation of rockfall motions on a three-dimensional topography described by a DTM. The code provides raster maps portraying the maximum frequency of transit, velocity and height of blocks at each model cell, easily combined in a GIS in order to produce physically based rockfall hazard maps. The results of some three-dimensional rockfall models, performed at both regional and local scale in areas where rockfall related problems are well known, have been used to assess rockfall hazard, by adopting an objective approach based on three-dimensional matrixes providing a positional "hazard index". Different hazard maps have been obtained combining and classifying variables in different ways. The performance of the different hazard maps has been evaluated on the basis of past rockfall events and compared to the results of existing methodologies. The sensitivity of the hazard index with respect to the included variables and their combinations is discussed in order to constrain assessment criteria that are as objective as possible.

  10. Low-luminosity stellar mass functions in globular clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richer, H.B.; Fahlman, G.G.; Buonanno, R.

    New data are presented on cluster luminosity functions and mass functions for selected fields in the globular clusters M13 and M71, extending down the main sequence to at least 0.2 solar mass. In this experiment, CCD photometry data were obtained at the prime focus of the CFHT on the cluster fields that were far from the cluster center. Luminosity functions were constructed, using the ADDSTAR routine to correct for the background, and mass functions were derived using the available models. The mass functions obtained for M13 and M71 were compared to existing data for NGC 6397. Results show that (1) all three globular clusters display a marked change in slope at about 0.4 solar mass, with the slopes becoming considerably steeper toward lower masses; (2) there is no correlation between the slope of the mass function and metallicity; and (3) the low-mass slope of the mass function for M13 is much steeper than for NGC 6397 and M71. 22 refs.

  11. Slicing cluster mass functions with a Bayesian razor

    NASA Astrophysics Data System (ADS)

    Sealfon, C. D.

    2010-08-01

    We apply a Bayesian "razor" to forecast Bayes factors between different parameterizations of the galaxy cluster mass function. To demonstrate this approach, we calculate the minimum size N-body simulation needed for strong evidence favoring a two-parameter mass function over one-parameter mass functions and vice versa, as a function of the minimum cluster mass.
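
    A compact sketch of the kind of Bayes factor computation involved is given below: the evidence for each parameterization is obtained by integrating the likelihood over its prior, here by brute-force quadrature for a toy one- versus two-parameter model of binned cluster counts. The bin counts, priors, and model forms are illustrative assumptions, not those of the paper.

        import numpy as np
        from scipy.stats import poisson

        # Toy observed cluster counts in mass bins, with bin "masses" in arbitrary units.
        counts = np.array([120, 60, 25, 8, 2])
        mass = np.array([1.0, 2.0, 4.0, 8.0, 16.0])

        def likelihood_1p(a):
            """One-parameter model: expected counts = a * mass**-1.5 (slope held fixed)."""
            return np.prod(poisson.pmf(counts, a * mass ** -1.5))

        def likelihood_2p(a, slope):
            """Two-parameter model: amplitude and slope both free."""
            return np.prod(poisson.pmf(counts, a * mass ** slope))

        # Evidence = integral of likelihood over the (flat) prior, divided by the prior width.
        a_grid = np.linspace(50.0, 300.0, 400)
        s_grid = np.linspace(-2.5, -0.5, 200)
        z1 = np.trapz([likelihood_1p(a) for a in a_grid], a_grid) / (a_grid[-1] - a_grid[0])
        like2 = np.array([[likelihood_2p(a, s) for s in s_grid] for a in a_grid])
        z2 = np.trapz(np.trapz(like2, s_grid, axis=1), a_grid) / (
            (a_grid[-1] - a_grid[0]) * (s_grid[-1] - s_grid[0]))

        print("Bayes factor, two-parameter vs one-parameter model:", z2 / z1)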

  12. Digital simulation of two-dimensional random fields with arbitrary power spectra and non-Gaussian probability distribution functions.

    PubMed

    Yura, Harold T; Hanson, Steen G

    2012-04-01

    Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
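
    A minimal one-realization sketch of the two-step method described above (spectral shaping of white Gaussian noise followed by a pointwise transform to the target marginal distribution) is given below. The choice of power spectrum and target distribution is an illustrative assumption; the pointwise transform only approximately preserves the shaped spectrum, which is presumably why the abstract describes the method as an engineering approach.

        import numpy as np
        from scipy import stats

        def power_law_spectrum(k, slope=-2.0):
            """Isotropic power-law power spectrum; the k = 0 mode is assigned zero power."""
            safe_k = np.where(k > 0.0, k, np.inf)
            return safe_k ** slope

        def simulate_field(n, spectrum_fn, target_ppf, seed=0):
            """White Gaussian noise -> colored Gaussian field -> desired marginal distribution."""
            rng = np.random.default_rng(seed)
            white = rng.standard_normal((n, n))
            kx = np.fft.fftfreq(n)
            ky = np.fft.fftfreq(n)
            k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
            amplitude = np.sqrt(spectrum_fn(k))
            colored = np.real(np.fft.ifft2(np.fft.fft2(white) * amplitude))
            colored = (colored - colored.mean()) / colored.std()   # unit-variance Gaussian field
            # Map the Gaussian marginal onto the target distribution via its inverse CDF.
            return target_ppf(stats.norm.cdf(colored))

        # Example: k^-2 spectrum with a gamma-distributed (non-Gaussian) amplitude.
        field = simulate_field(256, power_law_spectrum, lambda u: stats.gamma.ppf(u, a=2.0))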

  13. [Determinants of pride and shame: outcome, expected success and attribution].

    PubMed

    Schützwohl, A

    1991-01-01

    In two experiments we investigated the relationship between subjective probability of success and pride and shame. According to Atkinson (1957), pride (the incentive of success) is an inverse linear function of the probability of success, shame (the incentive of failure) being a negative linear function. Attribution theory predicts an inverse U-shaped relationship between subjective probability of success and pride and shame. The results presented here are at variance with both theories: Pride and shame do not vary with subjective probability of success. However, pride and shame are systematically correlated with internal attributions of action outcome.

  14. A correlation between the heavy element content of transiting extrasolar planets and the metallicity of their parent stars

    NASA Astrophysics Data System (ADS)

    Guillot, T.; Santos, N. C.; Pont, F.; Iro, N.; Melo, C.; Ribas, I.

    2006-07-01

    Context. Nine extrasolar planets with masses between 110 and 430 M_⊕ are known to transit their star. The knowledge of their masses and radii allows an estimate of their composition, but uncertainties on equations of state, opacities and possible missing energy sources imply that only inaccurate constraints can be derived when considering each planet separately. Aims. We seek to better understand the composition of transiting extrasolar planets by considering them as an ensemble, and by comparing the obtained planetary properties to those of the parent stars. Methods. We use evolution models and constraints on the stellar ages to derive the mass of heavy elements present in the planets. Possible additional energy sources like tidal dissipation due to an inclined orbit or to downward kinetic energy transport are considered. Results. We show that the nine transiting planets discovered so far belong to a quite homogeneous ensemble that is characterized by a mass of heavy elements that is a relatively steep function of the stellar metallicity, from less than 20 Earth masses of heavy elements around solar-composition stars, to up to ~100 M_⊕ for three times the solar metallicity (the precise values being model-dependent). The correlation is still to be ascertained, however. Statistical tests imply a worst-case 1/3 probability of a false positive. Conclusions. Together with the observed lack of giant planets in close orbits around metal-poor stars, these results appear to imply that heavy elements play a key role in the formation of close-in giant planets. The large masses of heavy elements inferred for planets orbiting metal-rich stars were not anticipated by planet formation models and show the need for alternative theories including migration and subsequent collection of planetesimals.

  15. Differential Predictors of Transient Stress versus Posttraumatic Stress Disorder: Evaluating Risk Following Targeted Mass Violence

    PubMed Central

    Miron, Lynsey R.; Orcutt, Holly K.; Kumpula, Mandy J.

    2014-01-01

    Schools have become a common incident site for targeted mass violence, including mass shootings. Although exposure to mass violence can result in significant distress, most individuals are able to fully recover over time, while a minority develop more pervasive pathology, such as PTSD. The present study investigated how several pre- and post-trauma factors predict posttraumatic stress symptoms (PTSS) in both the acute and distal aftermath of a campus mass shooting using a sample with known levels of pre-trauma functioning (N = 573). While the largest proportion of participants evidenced resilience following exposure to the event (46.1%), many reported high rates of PTSS shortly after the shooting (42.1%) and a smaller proportion (11.9%) met criteria for probable PTSD both in the acute and more distal aftermath of the event. While several pre-shooting factors predicted heightened PTSS after the shooting, prior trauma exposure was the only pre-shooting variable shown to significantly differentiate between those who experienced transient versus prolonged distress. Among post-shooting predictors, individuals reporting greater emotion dysregulation and peritraumatic dissociative experiences were over 4 times more likely to have elevated PTSS 8 months post-shooting compared to those reporting less dysregulation and dissociative experiences. Individuals with less exposure to the shooting and greater satisfaction with social support were more likely to recover from acute distress. Results suggest that, while pre-trauma factors may differentiate between those who are resilient in the aftermath of a mass shooting from those who experience heightened distress, several event-level and post-trauma coping factors help distinguish between those who eventually recover and those whose PTSD symptoms persist over time. PMID:25311288

  16. Differential predictors of transient stress versus posttraumatic stress disorder: evaluating risk following targeted mass violence.

    PubMed

    Miron, Lynsey R; Orcutt, Holly K; Kumpula, Mandy J

    2014-11-01

    Schools have become a common incident site for targeted mass violence, including mass shootings. Although exposure to mass violence can result in significant distress, most individuals are able to fully recover over time, while a minority develop more pervasive pathology, such as PTSD. The present study investigated how several pre- and posttrauma factors predict posttraumatic stress symptoms (PTSS) in both the acute and distal aftermath of a campus mass shooting using a sample with known levels of pretrauma functioning (N=573). Although the largest proportion of participants evidenced resilience following exposure to the event (46.1%), many reported high rates of PTSS shortly after the shooting (42.1%) and a smaller proportion (11.9%) met criteria for probable PTSD both in the acute and more distal aftermath of the event. While several preshooting factors predicted heightened PTSS after the shooting, prior trauma exposure was the only preshooting variable shown to significantly differentiate between those who experienced transient versus prolonged distress. Among postshooting predictors, individuals reporting greater emotion dysregulation and peritraumatic dissociative experiences were over four times more likely to have elevated PTSS 8 months postshooting compared with those reporting less dysregulation and dissociative experiences. Individuals with less exposure to the shooting, fewer prior traumatic experiences, and greater satisfaction with social support were more likely to recover from acute distress. Overall, results suggest that, while pretrauma factors may differentiate between those who are resilient in the aftermath of a mass shooting and those who experience heightened distress, several event-level and posttrauma coping factors help distinguish between those who eventually recover and those whose PTSD symptoms persist over time. Copyright © 2014. Published by Elsevier Ltd.

  17. Cytosolic distributions of highly toxic metals Cd and Tl and several essential elements in the liver of brown trout (Salmo trutta L.) analyzed by size exclusion chromatography and inductively coupled plasma mass spectrometry.

    PubMed

    Dragun, Zrinka; Krasnići, Nesrete; Kolar, Nicol; Filipović Marijić, Vlatka; Ivanković, Dušica; Erk, Marijana

    2018-05-15

    Cytosolic distributions of the nonessential metals Cd and Tl and seven essential elements among compounds of different molecular masses were studied in the liver of brown trout (Salmo trutta) from the karstic Krka River in Croatia. Analyses were done by size exclusion high performance liquid chromatography and high resolution inductively coupled plasma mass spectrometry. A common feature of Cd and Tl, as highly toxic elements, was their distribution within only two narrow peaks. The increase of cytosolic Cd concentrations was reflected in a marked increase of Cd elution within the low molecular mass peak (maximum at ∼15 kDa), presumably containing metallothioneins (MTs), which indicated successful Cd detoxification in brown trout liver under the studied exposure conditions. In contrast, the increase of cytosolic Tl concentrations was reflected in a marked increase of Tl elution within the high molecular mass peak (maximum at 140 kDa), which probably indicated incomplete Tl detoxification. A common feature of the majority of the studied essential elements was their distribution across several peaks, often broad and not well resolved, which is consistent with their numerous physiological functions. Among the observed associations of essential metals/nonmetal with proteins, the following could be singled out: Cu and Zn association with MTs, Fe association with the storage protein ferritin, and Se association with compounds of very low molecular masses (<5 kDa). The obtained results present a first step towards the identification of metal-binding compounds in the hepatic cytosol of brown trout, and thus a significant contribution to better understanding of metal fate in the liver of that important bioindicator species. Copyright © 2018 Elsevier Ltd. All rights reserved.

  18. Source selection for cluster weak lensing measurements in the Hyper Suprime-Cam survey

    NASA Astrophysics Data System (ADS)

    Medezinski, Elinor; Oguri, Masamune; Nishizawa, Atsushi J.; Speagle, Joshua S.; Miyatake, Hironao; Umetsu, Keiichi; Leauthaud, Alexie; Murata, Ryoma; Mandelbaum, Rachel; Sifón, Cristóbal; Strauss, Michael A.; Huang, Song; Simet, Melanie; Okabe, Nobuhiro; Tanaka, Masayuki; Komiyama, Yutaka

    2018-03-01

    We present optimized source galaxy selection schemes for measuring cluster weak lensing (WL) mass profiles unaffected by cluster member dilution from the Subaru Hyper Suprime-Cam Strategic Survey Program (HSC-SSP). The ongoing HSC-SSP survey will uncover thousands of galaxy clusters to z ≲ 1.5. In deriving cluster masses via WL, a critical source of systematics is contamination and dilution of the lensing signal by cluster members, and by foreground galaxies whose photometric redshifts are biased. Using the first-year CAMIRA catalog of ˜900 clusters with richness larger than 20 found in ˜140 deg2 of HSC-SSP data, we devise and compare several source selection methods, including selection in color-color space (CC-cut), and selection of robust photometric redshifts by applying constraints on their cumulative probability distribution function (P-cut). We examine the dependence of the contamination on the chosen limits adopted for each method. Using the proper limits, these methods give mass profiles with minimal dilution in agreement with one another. We find that not adopting either the CC-cut or P-cut methods results in an underestimation of the total cluster mass (13% ± 4%) and the concentration of the profile (24% ± 11%). The level of cluster contamination can reach as high as ˜10% at R ≈ 0.24 Mpc/h for low-z clusters without cuts, while employing either the P-cut or CC-cut results in cluster contamination consistent with zero to within the 0.5% uncertainties. Our robust methods yield a ˜60 σ detection of the stacked CAMIRA surface mass density profile, with a mean mass of M200c = [1.67 ± 0.05(stat)] × 1014 M⊙/h.
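
    The sketch below illustrates the general idea of a P-cut style selection on photometric-redshift probability distributions: keep only sources whose integrated P(z) lies almost entirely behind the cluster. The probability threshold, redshift offset, and synthetic PDFs are placeholder assumptions, not the cuts adopted in the paper.

        import numpy as np

        def p_cut_select(z_grid, pdfs, z_cluster, dz=0.2, p_min=0.98):
            """Keep sources whose photo-z PDF puts at least p_min of its probability
            beyond z_cluster + dz (both thresholds are placeholder values)."""
            pdfs = np.asarray(pdfs, dtype=float)
            norm = np.trapz(pdfs, z_grid, axis=1)
            behind = z_grid > (z_cluster + dz)
            p_behind = np.trapz(np.where(behind, pdfs, 0.0), z_grid, axis=1) / norm
            return p_behind >= p_min

        # Hypothetical PDFs on a redshift grid: two background-like sources and one
        # likely cluster member.
        z = np.linspace(0.0, 3.0, 301)
        def gaussian(mu, sigma):
            return np.exp(-0.5 * ((z - mu) / sigma) ** 2)
        pdfs = np.vstack([gaussian(1.2, 0.10), gaussian(1.0, 0.12), gaussian(0.45, 0.08)])
        print(p_cut_select(z, pdfs, z_cluster=0.4))   # expected: [ True  True False]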

  19. Stochastic transfer of polarized radiation in finite cloudy atmospheric media with reflective boundaries

    NASA Astrophysics Data System (ADS)

    Sallah, M.

    2014-03-01

    The problem of monoenergetic radiative transfer in a finite planar stochastic atmospheric medium with polarized (vector) Rayleigh scattering is considered. The solution is presented for arbitrary absorption and scattering cross sections. The extinction function of the medium is assumed to be a continuous random function of position, with fluctuations about the mean taken as Gaussian distributed. The joint probability distribution function of these Gaussian random variables is used to calculate the ensemble-averaged quantities, such as reflectivity and transmissivity, for an arbitrary correlation function. A modified Gaussian probability distribution function is also used to average the solution in order to exclude the probable negative values of the optical variable. The Pomraning-Eddington approximation is first used to obtain the deterministic analytical solution for both the total intensity and the difference function used to describe the polarized radiation. The problem is treated with specular reflecting boundaries and angular-dependent externally incident flux upon the medium from one side and with no flux from the other side. For the sake of comparison, two different forms of the weight function, which are introduced to force the boundary conditions to be fulfilled, are used. Numerical results of the average reflectivity and average transmissivity are obtained for both Gaussian and modified Gaussian probability density functions at different degrees of polarization.

  20. Use of quadrupole time-of-flight mass spectrometry to determine proposed structures of transformation products of the herbicide bromacil after water chlorination.

    PubMed

    Ibáñez, María; Sancho, Juan V; Pozo, Oscar J; Hernández, Félix

    2011-10-30

    The herbicide bromacil has been extensively used in the Spanish Mediterranean region, and although plant protection products containing bromacil have been withdrawn by the European Union, this compound is still frequently detected in surface and ground water of this area. However, the fast and complete disappearance of this compound has been observed in water intended for human consumption, after it has been subjected to chlorination. There is a concern about the possible degradation products formed, since they might be present in drinking water and might be hazardous. In this work, the sensitive full-spectrum acquisition, high resolution and exact mass capabilities of hybrid quadrupole time-of-flight (QTOF) mass spectrometry have allowed the discovery and proposal of structures of transformation products (TPs) of bromacil in water subjected to chlorination. Different ground water samples spiked at 0.5 µg/mL were subjected to the conventional chlorination procedure applied to drinking waters, with 2-mL aliquots sampled at different time intervals (1, 10 and 30 min). The corresponding non-spiked water was used as the control sample in each experiment. Afterwards, 50 μL of the water was directly injected into an ultra-high-pressure liquid chromatography (UHPLC)/electrospray ionization (ESI)-(Q)TOF system. The QTOF instrument enabled the simultaneous recording of two acquisition functions at different collision energies (MS(E) approach): the low-energy (LE) function, fixed at 4 eV, and the high-energy (HE) function, with a collision energy ramp from 15 to 40 eV. This approach enables the simultaneous acquisition of both parent (deprotonated and protonated molecules) and fragment ions in a single injection. The low mass errors observed for the deprotonated and protonated molecules (detected in the LE function) allowed the assignment of a highly probable molecular formula. Fragment ions and neutral losses were investigated in both LE and HE spectra to elucidate the structures of the TPs found. For those compounds that displayed poor fragmentation, product ion scan (MS/MS) experiments were also performed. On processing the data with specialized software (MetaboLynx), four bromacil TPs were detected and their structures were elucidated. To our knowledge, two of them had not previously been reported. Copyright © 2011 John Wiley & Sons, Ltd.

  1. Changes in taste and smell function, dietary intake, food preference, and body composition in testicular cancer patients treated with cisplatin-based chemotherapy.

    PubMed

    IJpma, Irene; Renken, Remco J; Gietema, Jourik A; Slart, Riemer H J A; Mensink, Manon G J; Lefrandt, Joop D; Ter Horst, Gert J; Reyners, Anna K L

    2017-12-01

    Taste and smell changes due to chemotherapy may contribute to the high prevalence of overweight in testicular cancer patients (TCPs). This study investigates the taste and smell function, dietary intake, food preference, and body composition in TCPs before, during, and up to 1 year after cisplatin-based chemotherapy. Twenty-one consecutive TCPs participated. At baseline TCPs were compared to healthy controls (N = 48). Taste strips and 'Sniffin' Sticks' were used to determine psychophysical taste and smell function. Subjective taste, smell, appetite, and hunger were assessed using a questionnaire. Dietary intake was analyzed using a food frequency questionnaire. Food preference was assessed using food pictures varying in taste (sweet/savoury) and fat or protein content. A Dual-Energy X-ray Absorptiometry (DEXA) scan was performed to measure whole body composition. Compared to controls, TCPs had a lower smell threshold (P = 0.045) and lower preference for high fat sweet foods at baseline (P = 0.024). Over time, intra-individual psychophysical taste and smell function was highly variable. The salty taste threshold increased at completion of chemotherapy compared to baseline (P = 0.006). A transient decrease of subjective taste, appetite, and hunger feelings was observed per chemotherapy cycle. The percentage of fat mass increased during chemotherapy compared to baseline, while the lean mass and bone density decreased (P < 0.05). Coping strategies regarding subjective taste impairment should especially be provided during the first week of each chemotherapy cycle. Since the body composition of TCPs already had changed at completion of chemotherapy, intervention strategies to limit the impact of cardiovascular risk factors should probably start during treatment. Copyright © 2016 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.

  2. The density structure and star formation rate of non-isothermal polytropic turbulence

    NASA Astrophysics Data System (ADS)

    Federrath, Christoph; Banerjee, Supratik

    2015-04-01

    The interstellar medium of galaxies is governed by supersonic turbulence, which likely controls the star formation rate (SFR) and the initial mass function (IMF). Interstellar turbulence is non-universal, with a wide range of Mach numbers, magnetic field strengths and driving mechanisms. Although some of these parameters have been explored, most previous works assumed that the gas is isothermal. However, we know that cold molecular clouds form out of the warm atomic medium, with the gas passing through chemical and thermodynamic phases that are not isothermal. Here we determine the role of temperature variations by modelling non-isothermal turbulence with a polytropic equation of state (EOS), where pressure and temperature are functions of gas density, P ∝ ρ^Γ, T ∝ ρ^(Γ-1). We use grid resolutions of 2048^3 cells and compare polytropic exponents Γ = 0.7 (soft EOS), Γ = 1 (isothermal EOS) and Γ = 5/3 (stiff EOS). We find a complex network of non-isothermal filaments with more small-scale fragmentation occurring for Γ < 1, while Γ > 1 smooths out density contrasts. The density probability distribution function (PDF) is significantly affected by temperature variations, with a power-law tail developing at low densities for Γ > 1. In contrast, the PDF becomes closer to a lognormal distribution for Γ ≲ 1. We derive and test a new density variance-Mach number relation that takes Γ into account. This new relation is relevant for theoretical models of the SFR and IMF, because it determines the dense gas mass fraction of a cloud, from which stars form. We derive the SFR as a function of Γ and find that it decreases by a factor of ~5 from Γ = 0.7 to 5/3.
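
    For orientation, the widely used isothermal (Γ = 1) density variance-Mach number relation, which the paper generalizes to Γ ≠ 1, can be sketched as follows. The driving parameter b is an assumed value, and the generalized Γ-dependent relation derived in the paper is not reproduced here.

        import numpy as np

        def sigma_s_isothermal(mach, b=0.4):
            """Standard isothermal relation: Var(s) = ln(1 + b^2 M^2), with s = ln(rho/rho0).

            b is roughly 1/3 for purely solenoidal driving and roughly 1 for purely
            compressive driving; b = 0.4 is simply an assumed intermediate value.
            """
            return np.sqrt(np.log(1.0 + b ** 2 * mach ** 2))

        def lognormal_density_pdf(s, sigma_s):
            """Lognormal PDF of the logarithmic density contrast s (isothermal case)."""
            s0 = -0.5 * sigma_s ** 2            # mean of s fixed by mass conservation
            return np.exp(-((s - s0) ** 2) / (2.0 * sigma_s ** 2)) / np.sqrt(2.0 * np.pi * sigma_s ** 2)

        sigma = sigma_s_isothermal(mach=10.0)
        s = np.linspace(-8.0, 8.0, 401)
        pdf = lognormal_density_pdf(s, sigma)
        print(sigma, np.trapz(pdf, s))          # variance parameter and normalization (~1)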

  3. Constraining the optical depth of galaxies and velocity bias with cross-correlation between the kinetic Sunyaev-Zeldovich effect and the peculiar velocity field

    NASA Astrophysics Data System (ADS)

    Ma, Yin-Zhe; Gong, Guo-Dong; Sui, Ning; He, Ping

    2018-03-01

    We calculate the cross-correlation function <(ΔT/T)(v·n̂/σ_v)> between the kinetic Sunyaev-Zeldovich (kSZ) effect and the reconstructed peculiar velocity field using linear perturbation theory, with the aim of constraining the optical depth τ and peculiar velocity bias of central galaxies with Planck data. We vary the optical depth τ and the velocity bias function bv(k) = 1 + b(k/k0)^n, and fit the model to the data, with and without varying the calibration parameter y0 that controls the vertical shift of the correlation function. By constructing a likelihood function and constraining the τ, b and n parameters, we find that the quadratic power-law model of velocity bias, bv(k) = 1 + b(k/k0)^2, provides the best fit to the data. The best-fit values are τ = (1.18 ± 0.24) × 10^-4, b = -0.84 (+0.16/-0.20) and y0 = (12.39 +3.65/-3.66) × 10^-9 (68 per cent confidence level). The probability of b > 0 is only 3.12 × 10^-8 for the parameter b, which clearly suggests a detection of scale-dependent velocity bias. The fitting results indicate that the large-scale (k ≤ 0.1 h Mpc^-1) velocity bias is unity, while on small scales the bias tends to become negative. The value of τ is consistent with the stellar mass-halo mass and optical depth relationship proposed in the literature, and the negative velocity bias on small scales is consistent with the peak background split theory. Our method provides a direct tool for studying the gaseous and kinematic properties of galaxies.

  4. A Quantitative Tool to Distinguish Isobaric Leucine and Isoleucine Residues for Mass Spectrometry-Based De Novo Monoclonal Antibody Sequencing

    NASA Astrophysics Data System (ADS)

    Poston, Chloe N.; Higgs, Richard E.; You, Jinsam; Gelfanova, Valentina; Hale, John E.; Knierman, Michael D.; Siegel, Robert; Gutierrez, Jesus A.

    2014-07-01

    De novo sequencing by mass spectrometry (MS) allows for the determination of the complete amino acid (AA) sequence of a given protein based on the mass difference of detected ions from MS/MS fragmentation spectra. The technique relies on obtaining specific masses that can be attributed to characteristic theoretical masses of AAs. A major limitation of de novo sequencing by MS is the inability to distinguish between the isobaric residues leucine (Leu) and isoleucine (Ile). Incorrect identification of Ile as Leu or vice versa often results in loss of activity in recombinant antibodies. This functional ambiguity is commonly resolved with costly and time-consuming AA mutation and peptide sequencing experiments. Here, we describe a set of orthogonal biochemical protocols, which experimentally determine the identity of Ile or Leu residues in monoclonal antibodies (mAb) based on the selectivity that leucine aminopeptidase shows for N-terminal Leu residues and the cleavage preference for Leu by chymotrypsin. The resulting observations are combined with germline frequencies and incorporated into a logistic regression model, called Predictor for Xle Sites (PXleS), to provide a statistical likelihood for the identity of Leu at an ambiguous site. We demonstrate that PXleS can generate a probability for an Xle site in mAbs with 96% accuracy. The implementation of PXleS precludes the expression of several possible sequences and, therefore, reduces the overall time and resources required to go from spectra generation to a biologically active sequence for a mAb when an Ile or Leu residue is in question.
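
    A toy sketch of the kind of logistic regression used by such a predictor is given below: experimental indicators (aminopeptidase release, chymotrypsin cleavage) and a germline frequency are combined into a probability that an ambiguous Xle site is leucine. The features, training rows, and resulting coefficients are purely illustrative assumptions, not the published PXleS model.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Hypothetical training rows: [aminopeptidase_released, chymotrypsin_cleaved, germline_leu_freq]
        X = np.array([
            [1, 1, 0.9], [1, 0, 0.8], [1, 1, 0.6], [0, 1, 0.7],   # sites known to be Leu
            [0, 0, 0.2], [0, 0, 0.4], [0, 1, 0.1], [0, 0, 0.3],   # sites known to be Ile
        ])
        y = np.array([1, 1, 1, 1, 0, 0, 0, 0])                    # 1 = Leu, 0 = Ile

        model = LogisticRegression().fit(X, y)

        # Probability that a new ambiguous site with the given evidence is leucine.
        new_site = np.array([[1, 0, 0.5]])
        print(model.predict_proba(new_site)[0, 1])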

  5. Properties of Starless Clumps through Protoclusters from the Bolocam Galactic Plane Survey

    NASA Astrophysics Data System (ADS)

    Svoboda, Brian E.; Shirley, Yancy

    2014-07-01

    High-mass stars play a key role in the physical and chemical evolution of the interstellar medium, yet the evolution of physical properties for high-mass star-forming regions remains unclear. We sort a sample of ~4668 molecular cloud clumps from the Bolocam Galactic Plane Survey (BGPS) into different evolutionary stages by combining the BGPS 1.1 mm continuum and observational diagnostics of star-formation activity from a variety of Galactic plane surveys: 70 μm compact sources, mid-IR color-selected YSOs, H2O and CH3OH masers, EGOs, and UCHII regions. We apply Monte Carlo techniques to distance probability distribution functions (DPDFs) in order to marginalize over the kinematic distance ambiguity and calculate distributions for derived quantities of clumps in different evolutionary stages. We also present a combined NH3 and H2O maser catalog for ~1590 clumps from the literature and our own GBT 100 m observations. We identify a sub-sample of 440 dense clumps with no star-formation indicators, representing the largest and most robust sample of pre-protocluster candidates from a blind survey to date. Distributions of I(HCO+), I(N2H+), dv(HCO+), dv(N2H+), mass surface density, and kinetic temperature show strong progressions when separated by evolutionary stage. No progressions are found in size or dust mass; however, weak progressions are observed in area > 2 pc^2 and dust mass > 3 × 10^3 Msun. An observed breakdown occurs in the size-linewidth relationship and we find no improvement when sampling by evolutionary stage.
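
    The Monte Carlo marginalization over distance probability distribution functions mentioned above can be sketched as follows: distances are drawn from a clump's DPDF and propagated to distance-dependent quantities such as dust mass. The bimodal DPDF shape, flux value, and flux-to-mass conversion constant are placeholder assumptions.

        import numpy as np

        rng = np.random.default_rng(3)

        # Hypothetical bimodal DPDF on a distance grid (kpc), reflecting the near/far ambiguity.
        d_grid = np.linspace(0.5, 12.0, 500)
        dpdf = (np.exp(-0.5 * ((d_grid - 3.0) / 0.4) ** 2)
                + 0.6 * np.exp(-0.5 * ((d_grid - 9.0) / 0.6) ** 2))
        dpdf /= dpdf.sum()

        # Draw distances and propagate to dust mass, which scales as flux * distance^2.
        flux_jy = 2.5                      # 1.1 mm flux density (placeholder)
        mass_constant = 13.0               # placeholder Msun Jy^-1 kpc^-2 conversion factor
        d_samples = rng.choice(d_grid, size=10000, p=dpdf)
        mass_samples = mass_constant * flux_jy * d_samples ** 2

        print(np.percentile(mass_samples, [16, 50, 84]))   # marginalized mass estimate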

  6. Global Modeling of Nebulae with Particle Growth, Drift, and Evaporation Fronts. I. Methodology and Typical Results

    NASA Astrophysics Data System (ADS)

    Estrada, Paul R.; Cuzzi, Jeffrey N.; Morgan, Demitri A.

    2016-02-01

    We model particle growth in a turbulent, viscously evolving protoplanetary nebula, incorporating sticking, bouncing, fragmentation, and mass transfer at high speeds. We treat small particles using a moments method and large particles using a traditional histogram binning, including a probability distribution function of collisional velocities. The fragmentation strength of the particles depends on their composition (icy aggregates are stronger than silicate aggregates). The particle opacity, which controls the nebula thermal structure, evolves as particles grow and mass redistributes. While growing, particles drift radially due to nebula headwind drag. Particles of different compositions evaporate at “evaporation fronts” (EFs) where the midplane temperature exceeds their respective evaporation temperatures. We track the vapor and solid phases of each component, accounting for advection and radial and vertical diffusion. We present characteristic results in evolutions lasting 2 × 10^5 years. In general, (1) mass is transferred from the outer to the inner nebula in significant amounts, creating radial concentrations of solids at EFs; (2) particle sizes are limited by a combination of fragmentation, bouncing, and drift; (3) “lucky” large particles never represent a significant amount of mass; and (4) restricted radial zones just outside each EF become compositionally enriched in the associated volatiles. We point out implications for millimeter to submillimeter SEDs and the inference of nebula mass, radial banding, the role of opacity on new mechanisms for generating turbulence, the enrichment of meteorites in heavy oxygen isotopes, variable and nonsolar redox conditions, the primary accretion of silicate and icy planetesimals, and the makeup of Jupiter’s core.
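
    As a rough illustration of how a probability distribution of collisional velocities feeds into growth outcomes, the sketch below samples a Maxwellian-like speed distribution and tallies sticking, bouncing, and fragmenting collisions against composition-dependent thresholds. The threshold values and rms speed are assumptions made for the example, not the parameters used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Thresholds (cm/s) separating sticking, bouncing, and fragmentation are
# illustrative only; the paper's values depend on composition and size.
V_BOUNCE = {"ice": 100.0, "silicate": 30.0}
V_FRAG = {"ice": 1000.0, "silicate": 100.0}

def collision_outcomes(v_rms, composition, n=100000):
    """Sample a Maxwellian-like PDF of collision speeds and tally outcomes."""
    # 3D Maxwellian speed distribution with rms speed v_rms
    v = np.linalg.norm(rng.normal(0.0, v_rms / np.sqrt(3.0), size=(n, 3)), axis=1)
    stick = np.mean(v < V_BOUNCE[composition])
    bounce = np.mean((v >= V_BOUNCE[composition]) & (v < V_FRAG[composition]))
    frag = np.mean(v >= V_FRAG[composition])
    return stick, bounce, frag

for comp in ("ice", "silicate"):
    s, b, f = collision_outcomes(v_rms=300.0, composition=comp)
    print(f"{comp:9s}  stick={s:.2f}  bounce={b:.2f}  fragment={f:.2f}")
```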

  7. A quantitative tool to distinguish isobaric leucine and isoleucine residues for mass spectrometry-based de novo monoclonal antibody sequencing.

    PubMed

    Poston, Chloe N; Higgs, Richard E; You, Jinsam; Gelfanova, Valentina; Hale, John E; Knierman, Michael D; Siegel, Robert; Gutierrez, Jesus A

    2014-07-01

    De novo sequencing by mass spectrometry (MS) allows for the determination of the complete amino acid (AA) sequence of a given protein based on the mass difference of detected ions from MS/MS fragmentation spectra. The technique relies on obtaining specific masses that can be attributed to characteristic theoretical masses of AAs. A major limitation of de novo sequencing by MS is the inability to distinguish between the isobaric residues leucine (Leu) and isoleucine (Ile). Incorrect identification of Ile as Leu or vice versa often results in loss of activity in recombinant antibodies. This functional ambiguity is commonly resolved with costly and time-consuming AA mutation and peptide sequencing experiments. Here, we describe a set of orthogonal biochemical protocols, which experimentally determine the identity of Ile or Leu residues in monoclonal antibodies (mAb) based on the selectivity that leucine aminopeptidase shows for N-terminal Leu residues and the cleavage preference for Leu by chymotrypsin. The resulting observations are combined with germline frequencies and incorporated into a logistic regression model, called Predictor for Xle Sites (PXleS), to provide a statistical likelihood for the identity of Leu at an ambiguous site. We demonstrate that PXleS can generate a probability for an Xle site in mAbs with 96% accuracy. The implementation of PXleS precludes the expression of several possible sequences and, therefore, reduces the overall time and resources required to go from spectra generation to a biologically active sequence for a mAb when an Ile or Leu residue is in question.
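
    A minimal sketch of the kind of logistic regression described above: several lines of evidence for an ambiguous Xle site are combined into a single probability through a sigmoid. The feature names, weights, and bias below are hypothetical stand-ins, not the fitted PXleS coefficients.

```python
import numpy as np

def xle_probability(features, weights, bias):
    """Logistic model: probability that an ambiguous Xle site is Leu.

    `features`, `weights`, and `bias` are illustrative placeholders; the
    actual PXleS predictors and coefficients are not given in the abstract.
    """
    z = np.dot(weights, features) + bias
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical evidence for one ambiguous site:
#   aminopeptidase cleavage signal, chymotrypsin cleavage signal,
#   germline frequency of Leu at this position.
features = np.array([0.8, 0.6, 0.7])
weights = np.array([2.1, 1.5, 1.2])   # assumed, not the published fit
bias = -2.0

print(f"P(Leu) = {xle_probability(features, weights, bias):.2f}")
```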

  8. Weak lensing shear and aperture mass from linear to non-linear scales

    NASA Astrophysics Data System (ADS)

    Munshi, Dipak; Valageas, Patrick; Barber, Andrew J.

    2004-05-01

    We describe the predictions for the smoothed weak lensing shear, γs, and aperture mass, Map, of two simple analytical models of the density field: the minimal tree model and the stellar model. Both models give identical results for the statistics of the three-dimensional density contrast smoothed over spherical cells and only differ by the detailed angular dependence of the many-body density correlations. We have shown in previous work that they also yield almost identical results for the probability distribution function (PDF) of the smoothed convergence, κs. We find that the two models give rather close results for both the shear and the positive tail of the aperture mass. However, we note that at small angular scales (θs ≲ 2 arcmin) the tail of the PDF for negative Map shows a strong variation between the two models, and the stellar model actually breaks down for θs ≲ 0.4 arcmin and Map < 0. This shows that the statistics of the aperture mass provides a very precise probe of the detailed structure of the density field, as it is sensitive to both the amplitude and the detailed angular behaviour of the many-body correlations. On the other hand, the minimal tree model shows good agreement with numerical simulations over all the scales and redshifts of interest, while both models provide a good description of the PDF of the smoothed shear components. Therefore, the shear and the aperture mass provide robust and complementary tools to measure the cosmological parameters as well as the detailed statistical properties of the density field.
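
    The aperture mass is a compensated, filtered integral of the tangential shear about a point; a minimal grid-based estimator is sketched below, assuming the polynomial filter of Schneider et al. (1996). The shear maps, pixel scale, and aperture radius in the example are illustrative only and unrelated to the simulations analysed in the paper.

```python
import numpy as np

def aperture_mass(gamma1, gamma2, pix_scale, theta_s):
    """Grid-based aperture-mass estimate at the map centre.

    Uses the compensated polynomial filter of Schneider et al. (1996),
    Q(x) = 6/(pi*theta_s^2) * x^2 * (1 - x^2) for x = theta/theta_s <= 1.
    The shear maps and scales passed in here are illustrative placeholders.
    """
    ny, nx = gamma1.shape
    y, x = np.indices((ny, nx), dtype=float)
    dx = (x - (nx - 1) / 2) * pix_scale
    dy = (y - (ny - 1) / 2) * pix_scale
    theta = np.hypot(dx, dy)
    phi = np.arctan2(dy, dx)

    # Tangential shear about the aperture centre
    gamma_t = -(gamma1 * np.cos(2 * phi) + gamma2 * np.sin(2 * phi))

    x_r = theta / theta_s
    Q = np.where(x_r <= 1.0,
                 6.0 / (np.pi * theta_s ** 2) * x_r ** 2 * (1.0 - x_r ** 2),
                 0.0)
    return np.sum(Q * gamma_t) * pix_scale ** 2

# Toy shear maps (pure noise) just to exercise the estimator
rng = np.random.default_rng(1)
g1 = rng.normal(0.0, 0.03, (64, 64))
g2 = rng.normal(0.0, 0.03, (64, 64))
print(aperture_mass(g1, g2, pix_scale=0.1, theta_s=2.0))  # arcmin units
```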

  9. Nesting environment may drive variation in eggshell structure and egg characteristics in the Testudinata.

    PubMed

    Deeming, D Charles

    2018-05-14

    Testudines exhibit considerable variation in the degree of eggshell calcification, which affects eggshell conductance, water physiology of the embryos, and calcium metabolism of embryos. However, the underlying reason for different shell types has not been explored. Phylogenetically controlled analyses examined relationships between egg size, shell mass, and clutch size in ∼200 turtle species from a range of body sizes and assigned by family as laying either rigid- or pliable-shelled eggs. Shell type affected egg breadth relative to pelvic dimensions, egg mass, and relative shell mass but did not affect size, mass, or total shell mass of the clutch. These results suggest that calcium availability may be a function of body size and the type of shell may reflect in part the interplay between clutch size and egg size. It was further concluded that the eggshell probably evolved as a means of physical protection. Differences in shell calcification may not primarily reflect reproductive parameters but rather correlate with the acidity of a species' nesting environment. Eggs laid in low-pH environments may have a thicker calcareous layer to counteract erosion caused by the soil and maintain the integrity of the physical barrier. Limited calcium availability may constrain clutch size. More neutral nesting substrates expose eggshells to less erosion so calcification per egg can be reduced and this allows larger clutch sizes. This pattern is also reflected in thick, calcified crocodilian eggs. Further research is needed to test whether eggshell calcification in the testudines correlates with nest pH in order to verify this relationship. © 2018 Wiley Periodicals, Inc.

  10. Meteoric Magnesium Ions in the Martian Atmosphere

    NASA Technical Reports Server (NTRS)

    Pesnell, William Dean; Grebowsky, Joseph

    1999-01-01

    From a thorough modeling of the altitude profile of meteoritic ionization in the Martian atmosphere we deduce that a persistent layer of magnesium ions should exist around an altitude of 70 km. Based on current estimates of the meteoroid mass flux density, a peak ion density of about 10(exp 4) ions/cm(exp 3) is predicted. Allowing for the uncertainties in all of the model parameters, this value is probably within an order of magnitude of the correct density. Of these parameters, the peak density is most sensitive to the meteoroid mass flux density, which directly determines the line density of ablated Mg that serves as the source function for Mg. Unlike the terrestrial case, where the metallic ion production is dominated by charge-exchange of the deposited neutral Mg with the ambient ions, Mg+ in the Martian atmosphere is produced predominantly by photoionization. The low ultraviolet absorption of the Martian atmosphere makes Mars an excellent laboratory in which to study meteoric ablation. Resonance lines not seen in the spectra of terrestrial meteors may be visible to a surface observatory in the Martian highlands.
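
    A toy steady-state balance, production of Mg+ by photoionization of ablated Mg against recombination, reproduces the order of magnitude quoted above. The layer shape, photoionization rate, and recombination coefficient in this sketch are assumed values, not those of the full model described in the record.

```python
import numpy as np

# Illustrative steady-state photochemical balance for a meteoric Mg+ layer:
# production by photoionization of ablated Mg, loss by recombination.
# All profile parameters and rate coefficients below are assumed values.
z = np.linspace(40.0, 120.0, 161)          # altitude, km

# Gaussian-shaped ablation layer of neutral Mg centred near 70 km
n_mg = 1.0e3 * np.exp(-0.5 * ((z - 70.0) / 8.0) ** 2)   # cm^-3

J_ion = 5.0e-7        # photoionization rate of Mg, s^-1 (assumed)
alpha = 3.0e-12       # effective recombination coefficient, cm^3 s^-1 (assumed)

production = J_ion * n_mg                  # cm^-3 s^-1
n_ion = np.sqrt(production / alpha)        # steady state: P = alpha * n_i^2

k = np.argmax(n_ion)
print(f"peak Mg+ density ~ {n_ion[k]:.1e} cm^-3 at {z[k]:.0f} km")
```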

  11. Tip-growing cells of the moss Ceratodon purpureus are gravitropic in high-density media

    NASA Technical Reports Server (NTRS)

    Schwuchow, Jochen Michael; Kern, Volker Dieter; Sack, Fred David

    2002-01-01

    Gravity sensing in plants and algae is hypothesized to rely upon either the mass of the entire cell or that of sedimenting organelles (statoliths). Protonemata of the moss Ceratodon purpureus show upward gravitropism and contain amyloplasts that sediment. If moss sensing were whole-cell based, then media denser than the cell should prevent gravitropism or reverse its direction. Cells that were inverted or reoriented to the horizontal displayed distinct negative gravitropism in solutions of iodixanol with densities of 1.052 to 1.320 g cm(-3) as well as in bovine serum albumin solutions with densities of 1.037 to 1.184 g cm(-3). Studies using tagged molecules of different sizes and calculations of diffusion times suggest that both types of media penetrate through the apical cell wall. Estimates of the density of the apical cell range from 1.004 to 1.085 g cm(-3). Because protonemata grow upward when the cells have a density that is lower than the surrounding medium, gravitropic sensing probably utilizes an intracellular mass in moss protonemata. These data provide additional support for the idea that sedimenting amyloplasts function as statoliths in gravitropism.

  12. Density profiles of supernova matter and determination of neutrino parameters

    NASA Astrophysics Data System (ADS)

    Chiu, Shao-Hsuan

    2007-08-01

    The flavor conversion of supernova neutrinos can lead to observable signatures related to the unknown neutrino parameters. As one of the determinants of the efficiency of resonant flavor conversion, the local density profile near the Mikheyev-Smirnov-Wolfenstein (MSW) resonance in a supernova environment is, however, not so well understood. In this analysis, variable power-law functions are adopted to represent the independent local density profiles near the locations of resonance. It is shown that the uncertain matter density profile in a supernova, the possible neutrino mass hierarchies, and the undetermined 1-3 mixing angle would result in six distinct scenarios in terms of the survival probabilities of νe and ν̄e. The feasibility of probing the undetermined neutrino mass hierarchy and the 1-3 mixing angle with the supernova neutrinos is then examined using several proposed experimental observables. Given the incomplete knowledge of the supernova matter profile, the analysis is further expanded to incorporate the Earth matter effect. The possible impact due to the choice of models, which differ in the average energy and in the luminosity of neutrinos, is also addressed in the analysis.
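
    For a slowly varying power-law density profile, the non-adiabatic crossing probability at an MSW resonance is often approximated by the Landau-Zener expression; a sketch is given below. The mixing parameters, neutrino energy, resonance radius, and power-law index are illustrative assumptions, and the linear-crossing approximation itself is a stand-in for, not a reproduction of, the treatment in the paper.

```python
import numpy as np

HBARC_EV_M = 1.973e-7   # hbar*c in eV*m

def crossing_probability(dm2_ev2, sin2_2theta, E_mev, scale_height_m):
    """Landau-Zener level-crossing probability at an MSW resonance.

    Assumes the density varies slowly enough that the linear (Landau-Zener)
    approximation applies; for a power-law profile n ~ r^-k the local scale
    height is |d ln n_e / dr|^-1 = r / k.  All numbers used are illustrative.
    """
    cos_2theta = np.sqrt(1.0 - sin2_2theta)
    # adiabaticity parameter gamma (dimensionless)
    gamma = (dm2_ev2 * sin2_2theta / (2.0 * E_mev * 1.0e6 * cos_2theta)
             / HBARC_EV_M * scale_height_m)
    return np.exp(-np.pi / 2.0 * gamma)

# H-resonance with an assumed small 1-3 mixing and a power-law profile
r_res = 1.0e8            # resonance radius in m (assumed)
k = 3.0                  # power-law index of the density profile (assumed)
P_c = crossing_probability(dm2_ev2=2.5e-3, sin2_2theta=1.0e-4,
                           E_mev=15.0, scale_height_m=r_res / k)
print(f"crossing probability P_c ~ {P_c:.3f}")
```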

  13. Rate and reaction probability of the surface reaction between ozone and dihydromyrcenol measured in a bench scale reactor and a room-sized chamber

    NASA Astrophysics Data System (ADS)

    Shu, Shi; Morrison, Glenn C.

    2012-02-01

    Low volatility terpenoids emitted from consumer products can react with ozone on surfaces and may significantly alter concentrations of ozone, terpenoids and reaction products in indoor air. We measured the reaction probability and a second-order surface-specific reaction rate for the ozonation of dihydromyrcenol, a representative indoor terpenoid, adsorbed onto polyvinylchloride (PVC), glass, and latex paint coated spheres. The reaction probability ranged from (0.06-8.97) × 10^-5 and was very sensitive to humidity, substrate and mass adsorbed. The average surface reaction probability is about 10 times greater than that for the gas-phase reaction. The second-order surface-specific rate coefficient ranged from (0.32-7.05) × 10^-15 cm^4 s^-1 molecule^-1 and was much less sensitive to humidity, substrate, or mass adsorbed. We also measured the ozone deposition velocity due to adsorbed dihydromyrcenol on painted drywall in a room-sized chamber. Based on that measurement, we calculated the rate coefficient ((0.42-1.6) × 10^-15 cm^4 molecule^-1 s^-1), which was consistent with that derived from bench-scale experiments for the latex paint under similar conditions. We predict that more than 95% of dihydromyrcenol oxidation takes place on indoor surfaces, rather than in building air.
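
    The connection between a surface reaction probability and an ozone deposition velocity can be sketched with a simple resistance-in-series estimate, in which the surface-uptake term is gamma times the ozone mean thermal speed divided by four. The transport-limited deposition velocity and temperature below are assumed values, not measurements from the chamber study.

```python
import numpy as np

def mean_speed(T, molar_mass):
    """Mean molecular speed <v> = sqrt(8RT / (pi*M)), returned in cm/s."""
    R = 8.314  # J / (mol K)
    return np.sqrt(8.0 * R * T / (np.pi * molar_mass)) * 100.0  # m/s -> cm/s

def deposition_velocity(gamma, v_transport, T=296.0, molar_mass=0.048):
    """Resistance-in-series estimate of the ozone deposition velocity (cm/s).

    1/v_d = 1/v_t + 4/(gamma * <v>); v_t is the gas-side transport-limited
    deposition velocity (assumed here), gamma the surface reaction probability.
    """
    v_bar = mean_speed(T, molar_mass)          # ozone mean speed, cm/s
    return 1.0 / (1.0 / v_transport + 4.0 / (gamma * v_bar))

# Reaction probabilities spanning the range reported in the abstract
for gamma in (0.06e-5, 1.0e-5, 8.97e-5):
    vd = deposition_velocity(gamma, v_transport=0.3)   # v_t = 0.3 cm/s assumed
    print(f"gamma = {gamma:.2e}  ->  v_d ~ {vd:.3f} cm/s")
```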

  14. Demons and superconductivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ihm, J.; Cohen, M.L.; Tuan, S.F.

    1981-04-01

    Model calculations are used to explore the role of demons (acoustic plasmons involving light and heavy mass carriers) in superconductivity. Heavy d electrons and light s and p electrons in a transition metal are used for discussion, but the calculation presented is more general, and the results can be applied to other systems. The analysis is based on the dielectric-function approach and the Bardeen-Cooper-Schrieffer theory. The dielectric function includes intraband and interband s-d scattering, and a tight-binding model is used to examine the role of s-d hybridization. The demon contribution generally reduces the Coulomb interaction between the electrons. Under suitable conditions, the model calculations indicate that the electron-electron interaction via demons can be attractive, but the results also suggest that this mechanism is probably not dominant in transition metals and transition-metal compounds. An attractive interband contribution is found, and it is proposed that this effect may lead to pairing in suitable systems.

  15. The statistics of peaks of Gaussian random fields. [cosmological density fluctuations]

    NASA Technical Reports Server (NTRS)

    Bardeen, J. M.; Bond, J. R.; Kaiser, N.; Szalay, A. S.

    1986-01-01

    A set of new mathematical results on the theory of Gaussian random fields is presented, and the application of such calculations in cosmology to treat questions of structure formation from small-amplitude initial density fluctuations is addressed. The point process equation is discussed, giving the general formula for the average number density of peaks. The problem of the proper conditional probability constraints appropriate to maxima is examined using a one-dimensional illustration. The average density of maxima of a general three-dimensional Gaussian field is calculated as a function of heights of the maxima, and the average density of 'upcrossing' points on density contour surfaces is computed. The number density of peaks subject to the constraint that the large-scale density field be fixed is determined and used to discuss the segregation of high peaks from the underlying mass distribution. The machinery to calculate n-point peak-peak correlation functions is determined, as are the shapes of the profiles about maxima.
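
    A one-dimensional numerical analogue of the peak statistics discussed above: generate a Gaussian random field with a chosen power spectrum and count the local maxima above a threshold height. The spectral index, field size, and thresholds are arbitrary choices for illustration and do not reproduce the analytic results of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

def gaussian_field_1d(n, power_index=-1.0, dx=1.0):
    """Generate a periodic 1D Gaussian random field with power spectrum k^n.

    The spectral index and normalization are arbitrary illustrative choices.
    """
    k = np.fft.rfftfreq(n, d=dx) * 2.0 * np.pi
    amp = np.zeros_like(k)
    amp[1:] = k[1:] ** (power_index / 2.0)
    phases = rng.normal(size=k.size) + 1j * rng.normal(size=k.size)
    field = np.fft.irfft(amp * phases, n=n)
    return field / field.std()

def peak_density(field, nu):
    """Fraction of cells that are local maxima with height > nu*sigma."""
    left = np.roll(field, 1)
    right = np.roll(field, -1)
    peaks = (field > left) & (field > right) & (field > nu)
    return peaks.sum() / field.size

delta = gaussian_field_1d(2 ** 18)
for nu in (0.0, 1.0, 2.0, 3.0):
    print(f"nu = {nu:.0f}: peak density = {peak_density(delta, nu):.4f} per cell")
```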

  16. Surface Connectivity and Interocean Exchanges From Drifter-Based Transition Matrices

    NASA Astrophysics Data System (ADS)

    McAdam, Ronan; van Sebille, Erik

    2018-01-01

    Global surface transport in the ocean can be represented by using the observed trajectories of drifters to calculate probability distribution functions. The oceanographic applications of the Markov Chain approach to modeling include tracking of floating debris and water masses, globally and on yearly-to-centennial time scales. Here we analyze the error inherent with mapping trajectories onto a grid and the consequences for ocean transport modeling and detection of accumulation structures. A sensitivity analysis of Markov Chain parameters is performed in an idealized Stommel gyre and western boundary current as well as with observed ocean drifters, complementing previous studies on widespread floating debris accumulation. Focusing on two key areas of interocean exchange—the Agulhas system and the North Atlantic intergyre transport barrier—we assess the capacity of the Markov Chain methodology to detect surface connectivity and dynamic transport barriers. Finally, we extend the methodology's functionality to separate the geostrophic and nongeostrophic contributions to interocean exchange in these key regions.
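
    The core of the drifter-based Markov chain approach, binning trajectory segment end points onto a grid, normalizing rows into transition probabilities, and propagating a tracer distribution, can be sketched as follows. The grid resolution and the synthetic "drifter" segments are placeholders; a real application would use the observed drifter data set.

```python
import numpy as np

def transition_matrix(lon0, lat0, lon1, lat1, nlon=36, nlat=18):
    """Row-stochastic transition matrix from drifter segment end points.

    Positions are binned onto a regular lon/lat grid; each row gives the
    probability of moving from one cell to any other over the segment's
    time interval. Grid resolution here is coarse and purely illustrative.
    """
    def cell(lon, lat):
        i = np.clip(((lon % 360.0) / 360.0 * nlon).astype(int), 0, nlon - 1)
        j = np.clip(((lat + 90.0) / 180.0 * nlat).astype(int), 0, nlat - 1)
        return j * nlon + i

    n = nlon * nlat
    P = np.zeros((n, n))
    np.add.at(P, (cell(lon0, lat0), cell(lon1, lat1)), 1.0)
    row_sums = P.sum(axis=1, keepdims=True)
    return np.divide(P, row_sums, out=np.zeros_like(P), where=row_sums > 0)

# Synthetic "drifter" segments with a weak eastward drift, for illustration
rng = np.random.default_rng(3)
lon0 = rng.uniform(0, 360, 5000)
lat0 = rng.uniform(-60, 60, 5000)
lon1 = lon0 + rng.normal(5.0, 3.0, 5000)       # degrees per segment
lat1 = lat0 + rng.normal(0.0, 2.0, 5000)

P = transition_matrix(lon0, lat0, lon1, lat1)
rho = np.full(P.shape[0], 1.0 / P.shape[0])    # uniform initial tracer
for _ in range(10):                            # evolve 10 transition steps
    rho = rho @ P
print("tracer mass retained on sampled cells:", rho.sum())
```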

  17. Multi-variate joint PDF for non-Gaussianities: exact formulation and generic approximations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Verde, Licia; Jimenez, Raul; Alvarez-Gaume, Luis

    2013-06-01

    We provide an exact expression for the multi-variate joint probability distribution function of non-Gaussian fields primordially arising from local transformations of a Gaussian field. This kind of non-Gaussianity is generated in many models of inflation. We apply our expression to the non-Gaussianity estimation from Cosmic Microwave Background maps and the halo mass function where we obtain analytical expressions. We also provide analytic approximations and their range of validity. For the Cosmic Microwave Background we give a fast way to compute the PDF which is valid up to more than 7σ for f_NL values (both true and sampled) not ruled out by current observations, which consists of expressing the PDF as a combination of bispectrum and trispectrum of the temperature maps. The resulting expression is valid for any kind of non-Gaussianity and is not limited to the local type. The above results may serve as the basis for a fully Bayesian analysis of the non-Gaussianity parameter.
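
    Local-type non-Gaussianity of the kind treated above arises from the transformation phi = g + f_NL (g^2 - <g^2>) of a Gaussian field g; the sketch below generates such a field and measures its sample skewness. The Gaussian dispersion and f_NL values are arbitrary, and none of the estimators or analytic PDFs from the paper are implemented here.

```python
import numpy as np

rng = np.random.default_rng(11)

def local_ng_field(n, fnl, sigma_g=1.0e-5):
    """Local-type non-Gaussian field: phi = g + f_NL * (g^2 - <g^2>).

    The Gaussian dispersion sigma_g is an arbitrary CMB-like value chosen
    only to make the skewness scale visible; no transfer functions applied.
    """
    g = rng.normal(0.0, sigma_g, n)
    return g + fnl * (g ** 2 - sigma_g ** 2)

for fnl in (0.0, 50.0, 500.0):
    phi = local_ng_field(2_000_000, fnl)
    skew = np.mean((phi - phi.mean()) ** 3) / phi.std() ** 3
    print(f"f_NL = {fnl:6.0f}  ->  sample skewness = {skew:.2e}")
```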

  18. Proteome map of Aspergillus nidulans during osmoadaptation.

    PubMed

    Kim, Yonghyun; Nandakumar, M P; Marten, Mark R

    2007-09-01

    The model filamentous fungus Aspergillus nidulans, when grown in a moderate level of osmolyte (+0.6M KCl), was previously found to have a significantly reduced cell wall elasticity (Biotech Prog, 21:292, 2005). In this study, comparative proteomic analysis via two-dimensional gel electrophoresis (2-DE) and matrix-assisted laser desorption ionization/time-of-flight (MALDI-TOF) mass spectrometry was used to assess molecular level events associated with this phenomenon. Thirty of 90 differentially expressed proteins were identified. Sequence homology and conserved domains were used to assign probable function to twenty-one proteins currently annotated as "hypothetical." In osmoadapted cells, there was an increased expression of glyceraldehyde-3-phosphate dehydrogenase and aldehyde dehydrogenase, as well as a decreased expression of enolase, suggesting an increased glycerol biosynthesis and decreased use of the TCA cycle. There also was an increased expression of heat shock proteins and Shp1-like protein degradation protein, implicating increased protein turnover. Five novel osmoadaptation proteins of unknown functions were also identified.

  19. Electronic structures of [001]- and [111]-oriented InSb and GaSb free-standing nanowires

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, Gaohua; Luo, Ning

    We report on a theoretical study of the electronic structures of InSb and GaSb nanowires oriented along the [001] and [111] crystallographic directions. The nanowires are described by atomistic, tight-binding models, including spin-orbit interaction. The band structures and the wave functions of the nanowires are calculated by means of a Lanczos iteration algorithm. For the [001]-oriented InSb and GaSb nanowires, the systems with both square and rectangular cross sections are considered. Here, it is found that all the energy bands are doubly degenerate. Although the lowest conduction bands in these nanowires show good parabolic dispersions, the top valence bands show rich and complex structures. In particular, the topmost valence bands of the nanowires with a square cross section show a double maximum structure. In the nanowires with a rectangular cross section, this double maximum structure is suppressed, and the top valence bands gradually develop into parabolic bands as the aspect ratio of the cross section is increased. For the [111]-oriented InSb and GaSb nanowires, the systems with hexagonal cross sections are considered. It is found that all the bands at the Γ-point are again doubly degenerate. However, some of them will split into non-degenerate bands when the wave vector moves away from the Γ-point. Although the lowest conduction bands again show good parabolic dispersions, the topmost valence bands do not show the double maximum structure. Instead, they show a single maximum structure with its maximum at a wave vector slightly away from the Γ-point. The wave functions of the band states near the band gaps of the [001]- and [111]-oriented InSb and GaSb nanowires are also calculated and are presented in terms of probability distributions in the cross sections. It is found that although the probability distributions of the band states in the [001]-oriented nanowires with a rectangular cross section could be qualitatively described by one-band effective mass theory, the probability distributions of the band states in the [001]-oriented nanowires with a square cross section and the [111]-oriented nanowires with a hexagonal cross section show characteristic patterns with symmetries closely related to the irreducible representations of the relevant double point groups and, in general, go beyond the prediction of a simple one-band effective mass theory. We also investigate the effects of quantum confinement on the band structures of the [001]- and [111]-oriented InSb and GaSb nanowires and present an empirical formula for the description of quantization energies of the band edge states in the nanowires, which could be used to estimate the enhancement of the band gaps of the nanowires as a result of quantum confinement. The size dependencies of the electron and hole effective masses in these nanowires are also investigated and discussed.

  20. Electronic structures of [001]- and [111]-oriented InSb and GaSb free-standing nanowires

    NASA Astrophysics Data System (ADS)

    Liao, Gaohua; Luo, Ning; Yang, Zhihu; Chen, Keqiu; Xu, H. Q.

    2015-09-01

    We report on a theoretical study of the electronic structures of InSb and GaSb nanowires oriented along the [001] and [111] crystallographic directions. The nanowires are described by atomistic, tight-binding models, including spin-orbit interaction. The band structures and the wave functions of the nanowires are calculated by means of a Lanczos iteration algorithm. For the [001]-oriented InSb and GaSb nanowires, the systems with both square and rectangular cross sections are considered. Here, it is found that all the energy bands are doubly degenerate. Although the lowest conduction bands in these nanowires show good parabolic dispersions, the top valence bands show rich and complex structures. In particular, the topmost valence bands of the nanowires with a square cross section show a double maximum structure. In the nanowires with a rectangular cross section, this double maximum structure is suppressed, and the top valence bands gradually develop into parabolic bands as the aspect ratio of the cross section is increased. For the [111]-oriented InSb and GaSb nanowires, the systems with hexagonal cross sections are considered. It is found that all the bands at the Γ-point are again doubly degenerate. However, some of them will split into non-degenerate bands when the wave vector moves away from the Γ-point. Although the lowest conduction bands again show good parabolic dispersions, the topmost valence bands do not show the double maximum structure. Instead, they show a single maximum structure with its maximum at a wave vector slightly away from the Γ-point. The wave functions of the band states near the band gaps of the [001]- and [111]-oriented InSb and GaSb nanowires are also calculated and are presented in terms of probability distributions in the cross sections. It is found that although the probability distributions of the band states in the [001]-oriented nanowires with a rectangular cross section could be qualitatively described by one-band effective mass theory, the probability distributions of the band states in the [001]-oriented nanowires with a square cross section and the [111]-oriented nanowires with a hexagonal cross section show characteristic patterns with symmetries closely related to the irreducible representations of the relevant double point groups and, in general, go beyond the prediction of a simple one-band effective mass theory. We also investigate the effects of quantum confinement on the band structures of the [001]- and [111]-oriented InSb and GaSb nanowires and present an empirical formula for the description of quantization energies of the band edge states in the nanowires, which could be used to estimate the enhancement of the band gaps of the nanowires as a result of quantum confinement. The size dependencies of the electron and hole effective masses in these nanowires are also investigated and discussed.
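
    The band-edge states of a large sparse tight-binding Hamiltonian can be obtained with a Lanczos-type iterative eigensolver, as in the study above. The sketch below applies ARPACK's Lanczos routine to a toy one-dimensional nearest-neighbour chain rather than the atomistic spin-orbit nanowire Hamiltonians of the paper; the on-site energy, hopping amplitude, and chain length are arbitrary.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

def tight_binding_chain(n_sites, onsite=0.0, hopping=-1.0):
    """Sparse Hamiltonian of a 1D nearest-neighbour tight-binding chain.

    A toy stand-in for the atomistic nanowire Hamiltonians in the paper;
    the on-site energy and hopping amplitude are arbitrary.
    """
    main = np.full(n_sites, onsite)
    off = np.full(n_sites - 1, hopping)
    return diags([off, main, off], offsets=[-1, 0, 1], format="csr")

H = tight_binding_chain(10_000)

# Lanczos (ARPACK eigsh) for the few lowest-lying states near the band edge
energies, states = eigsh(H, k=4, which="SA")
print("lowest eigenvalues:", np.round(energies, 6))

# Probability distribution of the lowest state across the chain
prob = np.abs(states[:, 0]) ** 2
print("normalization check:", prob.sum())
```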
