Sample records for strong cosmic censor

  1. Exotica and the status of the strong cosmic censor conjecture in four dimensions

    NASA Astrophysics Data System (ADS)

    Etesi, Gábor

    2017-12-01

    An immense class of physical counterexamples to the four dimensional strong cosmic censor conjecture—in its usual broad formulation—is exhibited. More precisely, out of any closed and simply connected 4-manifold an open Ricci-flat Lorentzian 4-manifold is constructed which is not globally hyperbolic, and no perturbation of which, in any sense, can be globally hyperbolic. This very stable non-global-hyperbolicity is the consequence of our open spaces having a ‘creased end’—i.e. an end diffeomorphic to an exotic…

  2. Limitation of Inverse Probability-of-Censoring Weights in Estimating Survival in the Presence of Strong Selection Bias

    PubMed Central

    Howe, Chanelle J.; Cole, Stephen R.; Chmiel, Joan S.; Muñoz, Alvaro

    2011-01-01

    In time-to-event analyses, artificial censoring with correction for induced selection bias using inverse probability-of-censoring weights can be used to 1) examine the natural history of a disease after effective interventions are widely available, 2) correct bias due to noncompliance with fixed or dynamic treatment regimens, and 3) estimate survival in the presence of competing risks. Artificial censoring entails censoring participants when they meet a predefined study criterion, such as exposure to an intervention, failure to comply, or the occurrence of a competing outcome. Inverse probability-of-censoring weights use measured common predictors of the artificial censoring mechanism and the outcome of interest to determine what the survival experience of the artificially censored participants would be had they never been exposed to the intervention, complied with their treatment regimen, or not developed the competing outcome. Even if all common predictors are appropriately measured and taken into account, in the context of small sample size and strong selection bias, inverse probability-of-censoring weights could fail because of violations in assumptions necessary to correct selection bias. The authors used an example from the Multicenter AIDS Cohort Study, 1984–2008, regarding estimation of long-term acquired immunodeficiency syndrome-free survival to demonstrate the impact of violations in necessary assumptions. Approaches to improve correction methods are discussed. PMID:21289029
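
    A minimal R sketch of how such weights are typically constructed (the person-visit data layout, the variable names art_cens, cd4, age, and the pooled logistic censoring model are illustrative assumptions, not the authors' exact specification):

      # Sketch: stabilized inverse probability-of-censoring weights (IPCW).
      # Assumes a person-visit data frame d, ordered by visit within subject,
      # with an artificial-censoring indicator art_cens and measured common
      # predictors cd4 and age -- all hypothetical names.
      library(survival)

      denom <- glm(I(1 - art_cens) ~ cd4 + age, family = binomial, data = d)
      numer <- glm(I(1 - art_cens) ~ 1, family = binomial, data = d)

      d$sw   <- predict(numer, type = "response") / predict(denom, type = "response")
      d$ipcw <- ave(d$sw, d$id, FUN = cumprod)   # cumulative weight per subject

      # Weighted Kaplan-Meier for the outcome; uncensored person-time is upweighted
      fit <- survfit(Surv(tstart, tstop, event) ~ 1, data = d, weights = ipcw)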

  3. On cosmic censor in high-energy particle collisions

    NASA Astrophysics Data System (ADS)

    Miyamoto, Umpei

    2011-09-01

    In the context of large extra-dimension or TeV-scale gravity scenarios, miniature black holes might be produced in collider experiments. Many works have assumed the validity of the cosmic censorship hypothesis, which means that there is no chance to observe trans-Planckian phenomena in the experiments since such phenomena are veiled behind the horizons. Here, we argue that "visible borders of spacetime" (as effective naked singularities) would be produced, even dominantly over the black holes, in the collider experiments. Such phenomena will provide us with an arena for quantum gravity.

  4. Evaluation of statistical treatments of left-censored environmental data using coincident uncensored data sets. II. Group comparisons

    USGS Publications Warehouse

    Antweiler, Ronald C.

    2015-01-01

    The main classes of statistical treatments that have been used to determine if two groups of censored environmental data arise from the same distribution are substitution methods, maximum likelihood (MLE) techniques, and nonparametric methods. These treatments along with using all instrument-generated data (IN), even those less than the detection limit, were evaluated by examining 550 data sets in which the true values of the censored data were known, and therefore “true” probabilities could be calculated and used as a yardstick for comparison. It was found that technique “quality” was strongly dependent on the degree of censoring present in the groups. For low degrees of censoring (<25% in each group), the Generalized Wilcoxon (GW) technique and substitution of √2/2 times the detection limit gave overall the best results. For moderate degrees of censoring, MLE worked best, but only if the distribution could be estimated to be normal or log-normal prior to its application; otherwise, GW was a suitable alternative. For higher degrees of censoring (each group >40% censoring), no technique provided reliable estimates of the true probability. Group size did not appear to influence the quality of the result, and no technique appeared to become better or worse than other techniques relative to group size. Finally, IN appeared to do very well relative to the other techniques regardless of censoring or group size.
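
    The generalized Wilcoxon comparison can be reproduced with standard survival tools by flipping left-censored concentrations into right-censored pseudo-times; a hedged R sketch with made-up values:

      # Sketch: Peto-Peto generalized Wilcoxon test for left-censored data.
      # Left-censored concentrations are flipped into right-censored pseudo-
      # times so that standard survival machinery applies (values made up).
      library(survival)

      conc  <- c(0.5, 0.5, 1.2, 3.4, 0.8, 0.5, 2.1, 4.0)
      cens  <- c(TRUE, TRUE, FALSE, FALSE, TRUE, TRUE, FALSE, FALSE)  # < DL?
      group <- rep(c("site1", "site2"), each = 4)

      flip <- max(conc) + 1 - conc   # left censoring becomes right censoring
      survdiff(Surv(flip, !cens) ~ group, rho = 1)   # rho = 1: Peto-Peto test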

  5. Multiple Imputation of a Randomly Censored Covariate Improves Logistic Regression Analysis.

    PubMed

    Atem, Folefac D; Qian, Jing; Maye, Jacqueline E; Johnson, Keith A; Betensky, Rebecca A

    2016-01-01

    Randomly censored covariates arise frequently in epidemiologic studies. The most commonly used methods, including complete-case analysis and single imputation or substitution, suffer from inefficiency and bias: they make strong parametric assumptions or they handle only limit-of-detection censoring. We employ multiple imputation, in conjunction with semi-parametric modeling of the censored covariate, to overcome these shortcomings and to facilitate robust estimation. We develop a multiple imputation approach for randomly censored covariates within the framework of a logistic regression model. We use the non-parametric estimate of the covariate distribution or the semiparametric Cox model estimate in the presence of additional covariates in the model. We evaluate this procedure in simulations, and compare its operating characteristics to those from the complete case analysis and a survival regression approach. We apply the procedures to an Alzheimer's study of the association between amyloid positivity and maternal age of onset of dementia. Multiple imputation achieves lower standard errors and higher power than the complete case approach under heavy and moderate censoring and is comparable under light censoring. The survival regression approach achieves the highest power among all procedures, but does not produce interpretable estimates of association. Multiple imputation offers a favorable alternative to complete case analysis and ad hoc substitution methods in the presence of randomly censored covariates within the framework of logistic regression.
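
    A minimal R sketch of this style of imputation, drawing censored covariate values from the Kaplan-Meier estimate and pooling with Rubin's rules (vectors x, d, y and the number of imputations are hypothetical; the authors' semiparametric Cox-based variant is not shown):

      # Sketch: multiple imputation of a right-censored covariate X for a
      # logistic regression of a binary outcome Y on X (names hypothetical).
      library(survival)

      impute_once <- function(x, d) {              # d: 1 = observed, 0 = censored
        km   <- survfit(Surv(x, d) ~ 1)            # nonparametric estimate of X
        jump <- c(1, head(km$surv, -1)) - km$surv  # KM mass at each time point
        sapply(seq_along(x), function(i) {
          if (d[i] == 1) return(x[i])
          ok <- km$time > x[i] & jump > 0          # support beyond censoring point
          if (!any(ok)) return(x[i])               # no mass left: keep the bound
          pts <- km$time[ok]
          if (length(pts) == 1) return(pts)
          sample(pts, 1, prob = jump[ok])
        })
      }

      M <- 20
      fits <- lapply(1:M, function(m)
        glm(y ~ xi, family = binomial,
            data = data.frame(y = y, xi = impute_once(x, d))))
      b <- sapply(fits, function(f) coef(f)["xi"])
      v <- sapply(fits, function(f) vcov(f)["xi", "xi"])
      pooled_est <- mean(b)
      pooled_var <- mean(v) + (1 + 1/M) * var(b)   # Rubin's rules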

  6. Collapse of a self-similar cylindrical scalar field with non-minimal coupling II: strong cosmic censorship

    NASA Astrophysics Data System (ADS)

    Condron, Eoin; Nolan, Brien C.

    2014-08-01

    We investigate self-similar scalar field solutions to the Einstein equations in whole cylinder symmetry. Imposing self-similarity on the spacetime gives rise to a set of single-variable functions describing the metric. Furthermore, it is shown that the scalar field is dependent on a single unknown function of the same variable and that the scalar field potential has exponential form. The Einstein equations then take the form of a set of ODEs. Self-similarity also gives rise to a singularity at the scaling origin. We extend the work of Condron and Nolan (2014 Class. Quantum Grav. 31 015015), which determined the global structure of all solutions with a regular axis in the causal past of the singularity. We identify a class of solutions that evolve through the past null cone of the singularity, give the global structure of these solutions, and show that the singularity is censored in all cases.

  7. Linear regression in astronomy. II

    NASA Technical Reports Server (NTRS)

    Feigelson, Eric D.; Babu, Gutti J.

    1992-01-01

    A wide variety of least-squares linear regression procedures used in observational astronomy, particularly investigations of the cosmic distance scale, are presented and discussed. The classes of linear models considered are (1) unweighted regression lines, with bootstrap and jackknife resampling; (2) regression solutions when measurement error, in one or both variables, dominates the scatter; (3) methods to apply a calibration line to new data; (4) truncated regression models, which apply to flux-limited data sets; and (5) censored regression models, which apply when nondetections are present. For the calibration problem we develop two new procedures: a formula for the intercept offset between two parallel data sets, which propagates slope errors from one regression to the other; and a generalization of the Working-Hotelling confidence bands to nonstandard least-squares lines. They can provide improved error analysis for Faber-Jackson, Tully-Fisher, and similar cosmic distance scale relations.

  8. seawaveQ: an R package providing a model and utilities for analyzing trends in chemical concentrations in streams with a seasonal wave (seawave) and adjustment for streamflow (Q) and other ancillary variables

    USGS Publications Warehouse

    Ryberg, Karen R.; Vecchia, Aldo V.

    2013-01-01

    The seawaveQ R package fits a parametric regression model (seawaveQ) to pesticide concentration data from streamwater samples to assess variability and trends. The model incorporates the strong seasonality and high degree of censoring common in pesticide data, and users can include numerous ancillary variables, such as streamflow anomalies. The model is fitted to pesticide data using maximum likelihood methods for censored data and is robust in terms of pesticide, stream location, and degree of censoring of the concentration data. This R package standardizes this methodology for trend analysis, documents the code, and provides help and tutorial information, as well as additional utility functions for plotting pesticide and other chemical concentration data.

  9. Strong Cosmic Censorship

    NASA Astrophysics Data System (ADS)

    Isenberg, James

    2017-01-01

    The Hawking-Penrose theorems tell us that solutions of Einstein's equations are generally singular, in the sense of the incompleteness of causal geodesics (the paths of physical observers). These singularities might be marked by the blowup of curvature and therefore crushing tidal forces, or by the breakdown of physical determinism. Penrose has conjectured (in his 'Strong Cosmic Censorship Conjecture') that it is generically unbounded curvature that causes singularities, rather than causal breakdown. The verification that "AVTD behavior" (marked by the domination of time derivatives over space derivatives) is generically present in a family of solutions has proven to be a useful tool for studying model versions of Strong Cosmic Censorship in that family. I discuss some of the history of Strong Cosmic Censorship, review what is known about AVTD behavior and Strong Cosmic Censorship in families of solutions defined by varying degrees of isometry, and describe recent results which we believe will extend this knowledge and provide new support for Strong Cosmic Censorship. I also comment on some of the recent work on "Weak Null Singularities" and how this relates to Strong Cosmic Censorship.

  10. Survival analysis for the missing censoring indicator model using kernel density estimation techniques

    PubMed Central

    Subramanian, Sundarraman

    2008-01-01

    This article concerns asymptotic theory for a new estimator of a survival function in the missing censoring indicator model of random censorship. Specifically, the large sample results for an inverse probability-of-non-missingness weighted estimator of the cumulative hazard function, so far not available, are derived, including an almost sure representation with rate for a remainder term, and uniform strong consistency with rate of convergence. The estimator is based on a kernel estimate for the conditional probability of non-missingness of the censoring indicator. Expressions for its bias and variance, in turn leading to an expression for the mean squared error as a function of the bandwidth, are also obtained. The corresponding estimator of the survival function, whose weak convergence is derived, is asymptotically efficient. A numerical study, comparing the performances of the proposed and two other currently existing efficient estimators, is presented. PMID:18953423

  11. Survival analysis for the missing censoring indicator model using kernel density estimation techniques.

    PubMed

    Subramanian, Sundarraman

    2006-01-01

    This article concerns asymptotic theory for a new estimator of a survival function in the missing censoring indicator model of random censorship. Specifically, the large sample results for an inverse probability-of-non-missingness weighted estimator of the cumulative hazard function, so far not available, are derived, including an almost sure representation with rate for a remainder term, and uniform strong consistency with rate of convergence. The estimator is based on a kernel estimate for the conditional probability of non-missingness of the censoring indicator. Expressions for its bias and variance, in turn leading to an expression for the mean squared error as a function of the bandwidth, are also obtained. The corresponding estimator of the survival function, whose weak convergence is derived, is asymptotically efficient. A numerical study, comparing the performances of the proposed and two other currently existing efficient estimators, is presented.
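
    A rough R sketch of the estimator's ingredients: a kernel (Nadaraya-Watson) estimate of the non-missingness probability feeding an inverse-probability-weighted cumulative hazard (inputs t and delta are hypothetical, and the paper's estimator is more refined than this illustration):

      # Sketch: IPW cumulative-hazard estimate when some censoring indicators
      # are missing. Assumed inputs (hypothetical): event/censoring times t and
      # indicator delta (1 = event, 0 = censored, NA = missing).
      r <- as.numeric(!is.na(delta))           # 1 = indicator observed

      nw_p <- function(t0, t, r, h) {          # Nadaraya-Watson estimate of
        k <- dnorm((t - t0) / h)               # P(indicator observed | T = t0)
        sum(k * r) / sum(k)
      }
      h <- 1.06 * sd(t) * length(t)^(-1/5)     # rule-of-thumb bandwidth
      p <- sapply(t, nw_p, t = t, r = r, h = h)

      dd  <- ifelse(is.na(delta), 0, delta)    # missing indicators contribute 0
      w   <- r / p                             # inverse-probability weight
      ord <- order(t)
      risk <- rev(seq_along(t))                # risk-set sizes (no ties assumed)
      H <- cumsum((w * dd)[ord] / risk)        # weighted Nelson-Aalen estimate
      S <- exp(-H)                             # survival estimate at sorted t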

  12. Is patience a virtue? Cosmic censorship of infrared effects in de Sitter

    NASA Astrophysics Data System (ADS)

    Ferreira, Ricardo Z.; Sandora, Mccullen; Sloth, Martin S.

    While the accumulation of long-wavelength modes during inflation wreaks havoc on the large-scale structure of spacetime, the question of whether their presence is even observable by any local observer has led to considerable confusion. Though it is commonly agreed that infrared effects are not visible to a single sub-horizon observer at late times, we argue that the question is less trivial for a patient observer who has lived long enough to have a record of the state before the soft mode was created. Though classically there is no obstruction to measuring this effect locally, we give several indications that quantum mechanical uncertainties censor the effect, rendering the observation of long modes ultimately forbidden.

  13. Correcting for dependent censoring in routine outcome monitoring data by applying the inverse probability censoring weighted estimator.

    PubMed

    Willems, Sjw; Schat, A; van Noorden, M S; Fiocco, M

    2018-02-01

    Censored data make survival analysis more complicated because exact event times are not observed. Statistical methodology developed to account for censored observations assumes that patients' withdrawal from a study is independent of the event of interest. However, in practice, some covariates might be associated with both the lifetime and the censoring mechanism, inducing dependent censoring. In this case, standard survival techniques, like the Kaplan-Meier estimator, give biased results. The inverse probability censoring weighted estimator was developed to correct for bias due to dependent censoring. In this article, we explore the use of inverse probability censoring weighting methodology and describe why it is effective in removing the bias. Since implementing this method is highly time consuming and requires programming and mathematical skills, we propose a user-friendly algorithm in R. Applications to a toy example and to a medical data set illustrate how the algorithm works. A simulation study was carried out to investigate the performance of the inverse probability censoring weighted estimators in situations where dependent censoring is present in the data. In the simulation process, different sample sizes, strengths of the censoring model, and percentages of censored individuals were chosen. Results show that in each scenario inverse probability censoring weighting reduces the bias induced in the traditional Kaplan-Meier approach where dependent censoring is ignored.
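
    In the spirit of the algorithm described, a compact R sketch of an IPCW-corrected Kaplan-Meier curve, modeling the censoring process with a Cox model on a covariate z (the data frame d and the 0.05 weight truncation are illustrative assumptions):

      # Sketch: IPCW-corrected Kaplan-Meier when a covariate z drives dropout
      # (hypothetical data frame d with columns time, status, z).
      library(survival)

      cfit <- coxph(Surv(time, status == 0) ~ z, data = d)  # model the censoring
      Kc   <- exp(-predict(cfit, type = "expected"))  # P(uncensored at own time)
      d$w  <- 1 / pmax(Kc, 0.05)                      # truncate extreme weights

      naive <- survfit(Surv(time, status) ~ 1, data = d)
      ipcw  <- survfit(Surv(time, status) ~ 1, data = d, weights = w)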

  14. Acoustic instability driven by cosmic-ray streaming

    NASA Technical Reports Server (NTRS)

    Begelman, Mitchell C.; Zweibel, Ellen G.

    1994-01-01

    We study the linear stability of compressional waves in a medium through which cosmic rays stream at the Alfven speed due to strong coupling with Alfven waves. Acoustic waves can be driven unstable by the cosmic-ray drift, provided that the streaming speed is sufficiently large compared to the thermal sound speed. Two effects can cause instability: (1) the heating of the thermal gas due to the damping of Alfven waves driven unstable by cosmic-ray streaming; and (2) phase shifts in the cosmic-ray pressure perturbation caused by the combination of cosmic-ray streaming and diffusion. The instability does not depend on the magnitude of the background cosmic-ray pressure gradient, and occurs whether or not cosmic-ray diffusion is important relative to streaming. When the cosmic-ray pressure is small compared to the gas pressure, or cosmic-ray diffusion is strong, the instability manifests itself as a weak overstability of slow magnetosonic waves. Larger cosmic-ray pressure gives rise to new hybrid modes, which can be strongly unstable in the limits of both weak and strong cosmic-ray diffusion and in the presence of thermal conduction. Parts of our analysis parallel earlier work by McKenzie & Webb (which was brought to our attention after this paper was accepted for publication), but our treatment of diffusive effects, thermal conduction, and nonlinearities represents a significant extension. Although the linear growth rate of instability is independent of the background cosmic-ray pressure gradient, the onset of nonlinear effects does depend on |∇P_c|. At the onset of nonlinearity the fractional amplitude of cosmic-ray pressure perturbations is δP_c/P_c ≈ (kL)^(-1) ≪ 1, where k is the wavenumber and L is the pressure scale height of the unperturbed cosmic rays. We speculate that the instability may lead to a mode of cosmic-ray transport in which plateaus of uniform cosmic-ray pressure are separated by either laminar or turbulent jumps in which the thermal gas is subject to intense heating.

  15. Assessing the impact of censoring of costs and effects on health-care decision-making: an example using the Atrial Fibrillation Follow-up Investigation of Rhythm Management (AFFIRM) study.

    PubMed

    Fenwick, Elisabeth; Marshall, Deborah A; Blackhouse, Gordon; Vidaillet, Humberto; Slee, April; Shemanski, Lynn; Levy, Adrian R

    2008-01-01

    Losses to follow-up and administrative censoring can cloud the interpretation of trial-based economic evaluations. A number of investigators have examined the impact of different levels of adjustment for censoring, including nonadjustment, adjustment of effects only, and adjustment for both costs and effects. Nevertheless, there is a lack of research on the impact of censoring on decision-making. The objective of this study was to estimate the impact of adjustment for censoring on the interpretation of cost-effectiveness results and expected value of perfect information (EVPI), using a trial-based analysis that compared rate- and rhythm-control treatments for persons with atrial fibrillation. Three different levels of adjustment for censoring were examined: no censoring of costs and effects, censoring of effects only, and censoring of both costs and effects. In each case, bootstrapping was used to estimate the uncertainty in costs and effects, and the EVPI was calculated to determine the potential worth of further research. Censoring did not affect the adoption decision. Nevertheless, this was not the case for the decision uncertainty or the EVPI. For a threshold of $50,000 per life-year, the EVPI varied from $626,000 (partial censoring) to $117 million (full censoring) for the eligible US population. The level of adjustment for censoring in trial-based cost-effectiveness analyses can affect the decisions to fund a new technology and to devote resources to further research. Only when censoring is taken into account for both costs and effects are these decisions appropriately addressed.
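
    The EVPI computation itself is compact; a hedged R sketch given bootstrap replicates of cost and effect differences (the vectors dc and de, and the eligible-population size, are hypothetical inputs):

      # Sketch: per-person EVPI from bootstrap replicates of incremental
      # costs (dc) and effects (de) for two alternatives.
      lambda <- 50000                        # threshold per life-year
      inb    <- lambda * de - dc             # incremental net benefit draws

      # EVPI = E[max(INB, 0)] - max(E[INB], 0)
      evpi_person <- mean(pmax(inb, 0)) - max(mean(inb), 0)
      evpi_pop    <- evpi_person * 2e6       # hypothetical eligible population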

  16. [Nonparametric method of estimating survival functions containing right-censored and interval-censored data].

    PubMed

    Xu, Yonghong; Gao, Xiaohuan; Wang, Zhengxi

    2014-04-01

    Missing data represent a general problem in many scientific fields, especially in medical survival analysis. Interpolation is one of the important methods for dealing with censored data. However, most interpolation methods replace the censored data with exact values, which distorts the real distribution of the censored data and reduces the probability that the true values fall within the imputed data. To solve this problem, we propose in this paper a nonparametric method of estimating the survival function of right-censored and interval-censored data and compare its performance to the self-consistent (SC) algorithm. Compared with average interpolation and nearest-neighbor interpolation, the proposed method replaces right-censored data with interval-censored data, greatly improving the probability that the true values fall within the imputation interval. It then uses empirical distribution theory to estimate the survival function of right-censored and interval-censored data. Results from numerical examples and a real breast cancer data set demonstrated that the proposed method has higher accuracy and better robustness across different proportions of censored data. This provides a useful method for comparing the performance of clinical treatments through estimation of patients' survival data, and offers some help for medical survival data analysis.
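
    For comparison, the standard nonparametric (Turnbull-type) estimate for mixed right- and interval-censored data is available in the R survival package; a small sketch with made-up intervals:

      # Sketch: Turnbull-type nonparametric survival estimate for mixed
      # right- and interval-censored data (made-up values). left = last time
      # seen event-free; right = first time seen with the event, NA if
      # right-censored.
      library(survival)

      left  <- c(2, 5, 1, 4, 8)
      right <- c(3, NA, 2, 7, NA)

      fit <- survfit(Surv(left, right, type = "interval2") ~ 1)
      summary(fit)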

  17. Accounting for dropout in xenografted tumour efficacy studies: integrated endpoint analysis, reduced bias and better use of animals.

    PubMed

    Martin, Emma C; Aarons, Leon; Yates, James W T

    2016-07-01

    Xenograft studies are commonly used to assess the efficacy of new compounds and characterise their dose-response relationship. Analysis often involves comparing the final tumour sizes across dose groups. This can cause bias, as often in xenograft studies a tumour burden limit (TBL) is imposed for ethical reasons, leading to the animals with the largest tumours being excluded from the final analysis. This means the average tumour size, particularly in the control group, is underestimated, leading to an underestimate of the treatment effect. Four methods to account for dropout due to the TBL are proposed, which use all the available data instead of only final observations: modelling, pattern mixture models, treating dropouts as censored using the M3 method and joint modelling of tumour growth and dropout. The methods were applied to both a simulated data set and a real example. All four proposed methods led to an improvement in the estimate of treatment effect in the simulated data. The joint modelling method performed most strongly, with the censoring method also providing a good estimate of the treatment effect, but with higher uncertainty. In the real data example, the dose-response estimated using the censoring and joint modelling methods was higher than the very flat curve estimated from average final measurements. Accounting for dropout using the proposed censoring or joint modelling methods allows the treatment effect to be recovered in studies where it may have been obscured due to dropout caused by the TBL.
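
    The censoring idea behind the M3 method can be sketched in R with a deliberately simplified fixed-effects growth model (the authors use nonlinear mixed-effects models; the names, linear trend, and normal errors here are illustrative assumptions):

      # Sketch: censored likelihood in the spirit of the M3 method. Tumour
      # sizes at or above the tumour burden limit (TBL) contribute the tail
      # probability P(Y >= TBL) instead of a density value.
      negll <- function(par, t, y, tbl) {
        mu  <- par[1] + par[2] * t                 # simple linear growth
        sdv <- exp(par[3])                         # log-parameterized sd
        obs <- y < tbl                             # FALSE = censored at the TBL
        -sum(dnorm(y[obs], mu[obs], sdv, log = TRUE)) -
          sum(pnorm(tbl, mu[!obs], sdv, lower.tail = FALSE, log.p = TRUE))
      }
      fit <- optim(c(1, 0.5, 0), negll, t = t, y = pmin(y, tbl), tbl = tbl)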

  18. Emergence of Space-Time Localization and Cosmic Decoherence:. More on Irreversible Time, Dark Energy, Anti-Matter and Black-Holes

    NASA Astrophysics Data System (ADS)

    Magnon, Anne

    2005-04-01

    A non geometric cosmology is presented, based on logic of observability, where logical categories of our perception set frontiers to comprehensibility. The Big-Bang singularity finds here a substitute (comparable to a "quantum jump"): a logical process (tied to self-referent and divisible totality) by which information emerges, focalizes on events and recycles, providing a transition from incoherence to causal coherence. This jump manufactures causal order and space-time localization, as exact solutions to Einstein's equation, where the last step of the process disentangles complex Riemann spheres into real null-cones (a geometric overturning imposed by self-reference, reminding us of our ability to project the cosmos within our mental sphere). Concepts such as antimatter and dark energy (dual entities tied to bifurcations or broken symmetries, and their compensation), are presented as hidden in the virtual potentialities, while irreversible time appears with the recycling of information and related flow. Logical bifurcations (such as the "part-totality" category, a quantum of information which owes its recycling to non localizable logical separations, as anticipated by unstability or horizon dependence of the quantum vacuum) induce broken symmetries, at the (complex or real) geometric level [eg. the antiselfdual complex non linear graviton solutions, which break duality symmetry, provide a model for (hidden) anti-matter, itself compensated with dark-energy, and providing, with space-time localization, the radiative gravitational energy (Bondi flux and related bifurcations of the peeling off type), as well as mass of isolated bodies]. These bifurcations are compensated by inertial effects (non geometric precursors of the Coriolis forces) able to explain (on logical grounds) the cosmic expansion (a repulsion?) and critical equilibrium of the cosmic tissue. Space-time environment, itself, emerges through the jump, as a censor to totality, a screen to incoherence (as anticipated by black-hole event horizons, cosmic censors able to shelter causal geometry). In analogy with black-hole singularities, the Big-Bang can be viewed as a geometric hint that a transition from incoherence to (causal space-time) localization and related coherence (comprehensibility), is taking place (space-time demolition, a reverse process towards incoherence or information recycling, is expected in the vicinity of singularities, as hinted by black-holes and related "time-machines"). A theory of the emergence of perception (and life?), in connection with observability and the function of partition (able to screen totality), is on its way [interface incoherence-coherence, sleeping and awaking states of localization, horizons of perception etc, are anticipated by black-hole event horizons, beyond which a non causal, dimensionless incoherent regime or memorization process, presents itself with the loss of localization, suggesting a unifying regime (ultimate energies?) hidden in cosmic potentialities]. The decoherence process presented here, suggests an ultimate interaction, expression of the logical relation of subsystems to totality, and to be identified to the flow of information or its recycling through cosmic jump (this is anticipated by the dissipation of distance or hierarchies on null-cones, themselves recycled with information and events). The geometric projection of this unified irreversible dynamics is expressed by unified Yang-Mills field equations (coupled to Einsteinian gravity). 
An ultimate form of action ("set"-volumes of information) presents itself, whose extrema can be achieved through extremal transfer of information and related partition of cells of information (thus anticipating the mitosis of living cells, possibly triggered at the non localizable level, as imposed by the logical regime of cosmic decoherence: participating subsystems ?). The matching of the objective and subjective facets of (information and) decoherences is perceived as contact with a reality.

  19. Effect of censoring trace-level water-quality data on trend-detection capability

    USGS Publications Warehouse

    Gilliom, R.J.; Hirsch, R.M.; Gilroy, E.J.

    1984-01-01

    Monte Carlo experiments were used to evaluate whether trace-level water-quality data that are routinely censored (not reported) contain valuable information for trend detection. Measurements are commonly censored if they fall below a level associated with some minimum acceptable level of reliability (detection limit). Trace-level organic data were simulated with best- and worst-case estimates of measurement uncertainty, various concentrations and degrees of linear trend, and different censoring rules. The resulting classes of data were subjected to a nonparametric statistical test for trend. For all classes of data evaluated, trends were most effectively detected in uncensored data as compared to censored data, even when the censored data were highly unreliable. Thus, censoring data at any concentration level may eliminate valuable information. Whether or not valuable information for trend analysis is, in fact, eliminated by censoring of actual rather than simulated data depends on whether the analytical process is in statistical control and bias is predictable for a particular type of chemical analysis.
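
    A toy R version of the Monte Carlo comparison, using a Mann-Kendall trend test; substituting the detection limit for censored values is a crude stand-in for the paper's censoring rules, and the slope, noise, and detection limit are illustrative, not the paper's settings:

      # Sketch: power of a Mann-Kendall trend test on uncensored versus
      # censored versions of the same simulated record.
      set.seed(1)
      trend_power <- function(censor, nsim = 500, n = 60, slope = 0.01, dl = 0.5) {
        mean(replicate(nsim, {
          tt <- 1:n
          y  <- 0.4 + slope * tt + rnorm(n, sd = 0.2)
          if (censor) y[y < dl] <- dl          # censored values tied at the DL
          p <- suppressWarnings(cor.test(tt, y, method = "kendall")$p.value)
          p < 0.05
        }))
      }
      trend_power(censor = FALSE)   # uncensored record
      trend_power(censor = TRUE)    # censoring erases part of the signal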

  20. Maximum Likelihood Estimations and EM Algorithms with Length-biased Data

    PubMed Central

    Qin, Jing; Ning, Jing; Liu, Hao; Shen, Yu

    2012-01-01

    Length-biased sampling has been well recognized in economics, industrial reliability, etiology applications, and epidemiological, genetic, and cancer screening studies. Length-biased right-censored data have a unique data structure different from traditional survival data. The nonparametric and semiparametric estimation and inference methods for traditional survival data are not directly applicable to length-biased right-censored data. We propose new expectation-maximization algorithms for estimation based on full likelihoods involving infinite-dimensional parameters under three settings for length-biased data: estimating the nonparametric distribution function, estimating the nonparametric hazard function under an increasing failure rate constraint, and jointly estimating the baseline hazard function and the covariate coefficients under the Cox proportional hazards model. Extensive empirical simulation studies show that the maximum likelihood estimators perform well with moderate sample sizes and lead to more efficient estimators compared to the estimating equation approaches. The proposed estimates are also more robust to various right-censoring mechanisms. We prove the strong consistency properties of the estimators, and establish the asymptotic normality of the semi-parametric maximum likelihood estimators under the Cox model using modern empirical process theory. We apply the proposed methods to a prevalent cohort medical study. Supplemental materials are available online. PMID:22323840

  1. The Censored Mean-Level Detector for Multiple Target Environments.

    DTIC Science & Technology

    1984-03-01

    …constant false-alarm rate (CFAR) detectors known as censored mean-level detectors (CMLD)… The CMLD, a special case of which is the mean-level detector (or cell-averaged CFAR detector), is… The censored mean-level detector (CMLD) is a generalization of the traditional mean-level detector (MLD), or cell-averaged CFAR…

  2. Evaluation of statistical treatments of left-censored environmental data using coincident uncensored data sets: I. Summary statistics

    USGS Publications Warehouse

    Antweiler, Ronald C.; Taylor, Howard E.

    2008-01-01

    The main classes of statistical treatment of below-detection limit (left-censored) environmental data for the determination of basic statistics that have been used in the literature are substitution methods, maximum likelihood, regression on order statistics (ROS), and nonparametric techniques. These treatments, along with using all instrument-generated data (even those below detection), were evaluated by examining data sets in which the true values of the censored data were known. It was found that for data sets with less than 70% censored data, the best technique overall for determination of summary statistics was the nonparametric Kaplan-Meier technique. ROS and the two substitution methods of assigning one-half the detection limit value to censored data or assigning a random number between zero and the detection limit to censored data were adequate alternatives. The use of these two substitution methods, however, requires a thorough understanding of how the laboratory censored the data. The technique of employing all instrument-generated data - including numbers below the detection limit - was found to be less adequate than the above techniques. At high degrees of censoring (greater than 70% censored data), no technique provided good estimates of summary statistics. Maximum likelihood techniques were found to be far inferior to all other treatments except substitution of zero or of the detection limit value for censored data.
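
    A hedged R sketch of the Kaplan-Meier summary-statistic computation via the usual flipping trick for left-censored data (values are made up; note the KM mean is restricted to the largest observation, the usual KM-mean caveat):

      # Sketch: Kaplan-Meier mean for left-censored concentrations by
      # flipping them into right-censored pseudo-times.
      library(survival)

      conc <- c(0.2, 0.2, 0.7, 1.5, 0.4, 2.3)    # 0.2 = detection limit
      cens <- c(TRUE, TRUE, FALSE, FALSE, FALSE, FALSE)

      M   <- max(conc) + 1
      fit <- survfit(Surv(M - conc, !cens) ~ 1)
      # integrate the KM curve of the flipped variable, then flip back
      auc     <- sum(diff(c(0, fit$time)) * c(1, head(fit$surv, -1)))
      km_mean <- M - auc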

  3. Estimating and Testing Mediation Effects with Censored Data

    ERIC Educational Resources Information Center

    Wang, Lijuan; Zhang, Zhiyong

    2011-01-01

    This study investigated influences of censored data on mediation analysis. Mediation effect estimates can be biased and inefficient with censoring on any one of the input, mediation, and output variables. A Bayesian Tobit approach was introduced to estimate and test mediation effects with censored data. Simulation results showed that the Bayesian…

  4. Censoring Freedom: Community-Based Professional Development and the Politics of Profanity

    ERIC Educational Resources Information Center

    Watson, Vajra M.

    2013-01-01

    The lack of strong literacy skills and practices among students is perhaps the clearest indicator that the education system continues to leave millions of children behind. To advance the reading, writing, and speaking skills of middle and high school students, this study examines a professional development model that brought trained…

  5. REGULARIZATION FOR COX’S PROPORTIONAL HAZARDS MODEL WITH NP-DIMENSIONALITY*

    PubMed Central

    Fan, Jianqing; Jiang, Jiancheng

    2011-01-01

    High-throughput genetic sequencing arrays with thousands of measurements per sample, together with a great amount of related censored clinical data, have increased the demand for better measurement-specific model selection. In this paper we establish strong oracle properties of non-concave penalized methods for non-polynomial (NP) dimensional data with censoring in the framework of Cox’s proportional hazards model. A class of folded-concave penalties is employed, and both LASSO and SCAD are discussed specifically. We address the question of under which dimensionality and correlation restrictions an oracle estimator can be constructed and attained. It is demonstrated that non-concave penalties lead to significant reduction of the “irrepresentable condition” needed for LASSO model selection consistency. A large deviation result for martingales, of interest in its own right, is developed for characterizing the strong oracle property. Moreover, the non-concave regularized estimator is shown to achieve asymptotically the information bound of the oracle estimator. A coordinate-wise algorithm is developed for finding the grid of solution paths for penalized hazard regression problems, and its performance is evaluated on simulated and gene association study examples. PMID:23066171

  6. REGULARIZATION FOR COX'S PROPORTIONAL HAZARDS MODEL WITH NP-DIMENSIONALITY.

    PubMed

    Bradic, Jelena; Fan, Jianqing; Jiang, Jiancheng

    2011-01-01

    High-throughput genetic sequencing arrays with thousands of measurements per sample, together with a great amount of related censored clinical data, have increased the demand for better measurement-specific model selection. In this paper we establish strong oracle properties of non-concave penalized methods for non-polynomial (NP) dimensional data with censoring in the framework of Cox's proportional hazards model. A class of folded-concave penalties is employed, and both LASSO and SCAD are discussed specifically. We address the question of under which dimensionality and correlation restrictions an oracle estimator can be constructed and attained. It is demonstrated that non-concave penalties lead to significant reduction of the "irrepresentable condition" needed for LASSO model selection consistency. A large deviation result for martingales, of interest in its own right, is developed for characterizing the strong oracle property. Moreover, the non-concave regularized estimator is shown to achieve asymptotically the information bound of the oracle estimator. A coordinate-wise algorithm is developed for finding the grid of solution paths for penalized hazard regression problems, and its performance is evaluated on simulated and gene association study examples.

  7. Time-dependent evolution of cosmic-ray-modified shock structure: Transition to steady state

    NASA Astrophysics Data System (ADS)

    Donohue, D. J.; Zank, G. P.; Webb, G. M.

    1994-03-01

    Steady state solutions to the two-fluid equations of cosmic-ray-modified shock structure were investigated first by Drury and Volk (1981). Their analysis revealed, among other properties, that there exist regions of upstream parameter space where the equations possess three different downstream solutions for a given upstream state. In this paper we investigate whether or not all these solutions can occur as time-asymptotic states in a physically realistic evolution. To do this, we investigate the time-dependent evolution of the two-fluid cosmic-ray equations in going from a specified initial condition to a steady state. Our results indicate that the time-asymptotic solution is strictly single-valued, and it undergoes a transition from weakly to strongly cosmic-ray-modified at a critical value of the upstream cosmic ray energy density. The expansion of supernova remnant shocks is considered as an example, and it is shown that the strong to weak transition is in fact more likely. The third intermediate solution is shown to influence the time-dependent evolution of the shock, but it is not found to be a stable time-asymptotic state. Timescales for convergence to these states and their implications for the efficiency of shock acceleration are considered. We also investigate the effects of a recently introduced model for the injection of seed particles into the shock accelerated cosmic-ray population. The injection is found to result in a more strongly cosmic-ray-dominated shock, which supports our conclusion that for most classes of intermediate and strong cosmic-ray-modified shocks, the downstream cosmic-ray pressure component is at least as large as the thermal gas pressure, independent of the upstream state. As a result, cosmic rays almost always play a significant role in determining the shock structure and dissipation and they cannot be regarded as test particles.

  8. Time-dependent evolution of cosmic-ray-modified shock structure: Transition to steady state

    NASA Technical Reports Server (NTRS)

    Donohue, D. J.; Zank, G. P.; Webb, G. M.

    1994-01-01

    Steady state solutions to the two-fluid equations of cosmic-ray-modified shock structure were investigated first by Drury and Volk (1981). Their analysis revealed, among other properties, that there exist regions of upstream parameter space where the equations possess three different downstream solutions for a given upstream state. In this paper we investigate whether or not all these solutions can occur as time-asymptotic states in a physically realistic evolution. To do this, we investigate the time-dependent evolution of the two-fluid cosmic-ray equations in going from a specified initial condition to a steady state. Our results indicate that the time-asymptotic solution is strictly single-valued, and it undergoes a transition from weakly to strongly cosmic-ray-modified at a critical value of the upstream cosmic ray energy density. The expansion of supernova remnant shocks is considered as an example, and it is shown that the strong to weak transition is in fact more likely. The third intermediate solution is shown to influence the time-dependent evolution of the shock, but it is not found to be a stable time-asymptotic state. Timescales for convergence to these states and their implications for the efficiency of shock acceleration are considered. We also investigate the effects of a recently introduced model for the injection of seed particles into the shock accelerated cosmic-ray population. The injection is found to result in a more strongly cosmic-ray-dominated shock, which supports our conclusion that for most classes of intermediate and strong cosmic-ray-modified shocks, the downstream cosmic-ray pressure component is at least as large as the thermal gas pressure, independent of the upstream state. As a result, cosmic rays almost always play a significant role in determining the shock structure and dissipation and they cannot be regarded as test particles.

  9. Inverse probability weighted least squares regression in the analysis of time-censored cost data: an evaluation of the approach using SEER-Medicare.

    PubMed

    Griffiths, Robert I; Gleeson, Michelle L; Danese, Mark D; O'Hagan, Anthony

    2012-01-01

    Our objective was to assess the accuracy and precision of inverse probability weighted (IPW) least squares regression analysis for censored cost data. Using Surveillance, Epidemiology, and End Results-Medicare, we identified 1500 breast cancer patients who died and had complete cost information within the database. Patients were followed for up to 48 months (partitions) after diagnosis, and their actual total cost was calculated in each partition. We then simulated patterns of administrative and dropout censoring and also added censoring to patients receiving chemotherapy to simulate comparing a newer to older intervention. For each censoring simulation, we performed 1000 IPW regression analyses (bootstrap, sampling with replacement), calculated the average value of each coefficient in each partition, and summed the coefficients for each regression parameter to obtain the cumulative values from 1 to 48 months. The cumulative, 48-month, average cost was $67,796 (95% confidence interval [CI] $58,454-$78,291) with no censoring, $66,313 (95% CI $54,975-$80,074) with administrative censoring, and $66,765 (95% CI $54,510-$81,843) with administrative plus dropout censoring. In multivariate analysis, chemotherapy was associated with increased cost of $25,325 (95% CI $17,549-$32,827) compared with $28,937 (95% CI $20,510-$37,088) with administrative censoring and $29,593 ($20,564-$39,399) with administrative plus dropout censoring. Adding censoring to the chemotherapy group resulted in less accurate IPW estimates. This was ameliorated, however, by applying IPW within treatment groups. IPW is a consistent estimator of population mean costs if the weight is correctly specified. If the censoring distribution depends on some covariates, a model that accommodates this dependency must be correctly specified in IPW to obtain accurate estimates. Copyright © 2012 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  10. Impact of censoring on learning Bayesian networks in survival modelling.

    PubMed

    Stajduhar, Ivan; Dalbelo-Basić, Bojana; Bogunović, Nikola

    2009-11-01

    Bayesian networks are commonly used for presenting uncertainty and covariate interactions in an easily interpretable way. Because of their efficient inference and ability to represent causal relationships, they are an excellent choice for medical decision support systems in diagnosis, treatment, and prognosis. Although good procedures for learning Bayesian networks from data have been defined, their performance in learning from censored survival data has not been widely studied. In this paper, we explore how to use these procedures to learn about possible interactions between prognostic factors and their influence on the variate of interest. We study how censoring affects the probability of learning correct Bayesian network structures. Additionally, we analyse the potential usefulness of the learnt models for predicting the time-independent probability of an event of interest. We analysed the influence of censoring with a simulation on synthetic data sampled from randomly generated Bayesian networks. We used two well-known methods for learning Bayesian networks from data: a constraint-based method and a score-based method. We compared the performance of each method under different levels of censoring to those of the naive Bayes classifier and the proportional hazards model. We did additional experiments on several datasets from real-world medical domains. The machine-learning methods treated censored cases in the data as event-free. We report and compare results for several commonly used model evaluation metrics. On average, the proportional hazards method outperformed other methods in most censoring setups. As part of the simulation study, we also analysed structural similarities of the learnt networks. Heavy censoring, as opposed to no censoring, produces up to a 5% surplus and up to 10% missing total arcs. It also produces up to 50% missing arcs that should originally be connected to the variate of interest. Presented methods for learning Bayesian networks from data can be used to learn from censored survival data in the presence of light censoring (up to 20%) by treating censored cases as event-free. Given intermediate or heavy censoring, the learnt models become tuned to the majority class and would thus require a different approach.

  11. The place of the Local Group in the cosmic web

    NASA Astrophysics Data System (ADS)

    Forero-Romero, Jaime E.; González, Roberto

    2016-10-01

    We use the Bolshoi Simulation to find the most probable location of the Local Group (LG) in the cosmic web. Our LG simulacra are pairs of halos with isolation and kinematic properties consistent with observations. The cosmic web is defined using a tidal tensor approach. We find that the LG's preferred location is in regions with a dark matter overdensity close to the cosmic average. This makes filaments and sheets the preferred environment. We also find a strong alignment between the LG and the cosmic web. The orbital angular momentum is preferentially perpendicular to the smallest tidal eigenvector, while the vector connecting the two halos is strongly aligned along the smallest tidal eigenvector and perpendicular to the largest tidal eigenvector; the pair lies and moves along filaments and sheets. We do not find any evidence for an alignment between the spin of each halo in the pair and the cosmic web.

  12. Small values in big data: The continuing need for appropriate metadata

    USGS Publications Warehouse

    Stow, Craig A.; Webster, Katherine E.; Wagner, Tyler; Lottig, Noah R.; Soranno, Patricia A.; Cha, YoonKyung

    2018-01-01

    Compiling data from disparate sources to address pressing ecological issues is increasingly common. Many ecological datasets contain left-censored data – observations below an analytical detection limit. Studies from single and typically small datasets show that common approaches for handling censored data — e.g., deletion or substituting fixed values — result in systematic biases. However, no studies have explored the degree to which the documentation and presence of censored data influence outcomes from large, multi-sourced datasets. We describe left-censored data in a lake water quality database assembled from 74 sources and illustrate the challenges of dealing with small values in big data, including detection limits that are absent, range widely, and show trends over time. We show that substitutions of censored data can also bias analyses using ‘big data’ datasets, that censored data can be effectively handled with modern quantitative approaches, but that such approaches rely on accurate metadata that describe treatment of censored data from each source.

  13. Simulation of parametric model towards the fixed covariate of right censored lung cancer data

    NASA Astrophysics Data System (ADS)

    Afiqah Muhamad Jamil, Siti; Asrul Affendi Abdullah, M.; Kek, Sie Long; Ridwan Olaniran, Oyebayo; Enera Amran, Syahila

    2017-09-01

    In this study, a simulation procedure was applied to assess the effect of a fixed covariate on right-censored data using a parametric survival model. The scale and shape parameters were varied to differentiate the analysis of the parametric regression survival model. Biases, mean biases, and coverage probabilities were used for the statistical assessment. Different sample sizes (50, 100, 150, and 200) were employed to distinguish the impact of the parametric regression model on right-censored data. The R statistical software was used to develop the simulation code for right-censored data. Finally, the simulated model was compared with right-censored lung cancer data from Malaysia. It was found that different values of the shape and scale parameters combined with different sample sizes help to improve the simulation strategy for right-censored data, and that the Weibull regression survival model provides a suitable fit for the survival data of lung cancer patients in Malaysia.
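
    A minimal R sketch of such a simulation loop: generate right-censored Weibull times with one fixed covariate, fit survreg, and summarize bias and coverage (all parameter values are illustrative):

      # Sketch: one arm of the simulation -- right-censored Weibull times
      # with a fixed binary covariate, fitted with survreg.
      library(survival)

      sim_once <- function(n, beta = 0.5, shape = 1.5) {
        x  <- rbinom(n, 1, 0.5)
        tt <- rweibull(n, shape, scale = exp(1 + beta * x))
        cc <- runif(n, 0, quantile(tt, 0.9))      # random censoring times
        y  <- pmin(tt, cc)
        d  <- as.numeric(tt <= cc)
        f  <- survreg(Surv(y, d) ~ x, dist = "weibull")
        ci <- confint(f)["x", ]
        c(bias  = unname(coef(f)["x"]) - beta,
          cover = as.numeric(ci[1] < beta && beta < ci[2]))
      }

      set.seed(42)
      res <- replicate(500, sim_once(n = 100))
      rowMeans(res)    # mean bias and empirical coverage probability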

  14. Strong cosmic censorship in de Sitter space

    NASA Astrophysics Data System (ADS)

    Dias, Oscar J. C.; Eperon, Felicity C.; Reall, Harvey S.; Santos, Jorge E.

    2018-05-01

    Recent work indicates that the strong cosmic censorship hypothesis is violated by nearly extremal Reissner-Nordström-de Sitter black holes. It was argued that perturbations of such a black hole decay sufficiently rapidly that the perturbed spacetime can be extended across the Cauchy horizon as a weak solution of the equations of motion. In this paper we consider the case of Kerr-de Sitter black holes. We find that, for any nonextremal value of the black hole parameters, there are quasinormal modes which decay sufficiently slowly to ensure that strong cosmic censorship is respected. Our analysis covers both scalar field and linearized gravitational perturbations.

  15. Linear Regression with a Randomly Censored Covariate: Application to an Alzheimer's Study.

    PubMed

    Atem, Folefac D; Qian, Jing; Maye, Jacqueline E; Johnson, Keith A; Betensky, Rebecca A

    2017-01-01

    The association between maternal age of onset of dementia and amyloid deposition (measured by in vivo positron emission tomography (PET) imaging) in cognitively normal older offspring is of interest. In a regression model for amyloid, special methods are required due to the random right censoring of the covariate of maternal age of onset of dementia. Prior literature has proposed methods to address the problem of censoring due to assay limit of detection, but not random censoring. We propose imputation methods and a survival regression method that do not require parametric assumptions about the distribution of the censored covariate. Existing imputation methods address missing covariates, but not right censored covariates. In simulation studies, we compare these methods to the simple, but inefficient complete case analysis, and to thresholding approaches. We apply the methods to the Alzheimer's study.

  16. Cosmic Microwave Background Timeline

    Science.gov Websites

    …about 2.3 K… 1948: George Gamow, Ralph Alpher, and Robert Herman predict that a Big Bang universe… perfect blackbody spectrum, thereby strongly supporting the hot big bang model… the thermal history of… anisotropy in the cosmic microwave background… this strongly supports the big bang model with gravitational…

  17. Regression analysis of case K interval-censored failure time data in the presence of informative censoring.

    PubMed

    Wang, Peijie; Zhao, Hui; Sun, Jianguo

    2016-12-01

    Interval-censored failure time data occur in many fields such as demography, economics, medical research, and reliability, and many inference procedures for them have been developed (Sun, 2006; Chen, Sun, and Peace, 2012). However, most of the existing approaches assume that the mechanism that yields interval censoring is independent of the failure time of interest, and it is clear that this may not be true in practice (Zhang et al., 2007; Ma, Hu, and Sun, 2015). In this article, we consider regression analysis of case K interval-censored failure time data when the censoring mechanism may be related to the failure time of interest. For the problem, an estimated sieve maximum-likelihood approach is proposed for data arising from the proportional hazards frailty model, and a two-step estimation procedure is presented. In addition, the asymptotic properties of the proposed estimators of regression parameters are established, and an extensive simulation study suggests that the method works well. Finally, we apply the method to a set of real interval-censored data that motivated this study. © 2016, The International Biometric Society.

  18. Galactic Cosmic-Ray Anisotropy During the Forbush Decrease Starting 2013 April 13

    NASA Astrophysics Data System (ADS)

    Tortermpun, U.; Ruffolo, D.; Bieber, J. W.

    2018-01-01

    The flux of Galactic cosmic rays (GCRs) can undergo a Forbush decrease (FD) during the passage of a shock, sheath region, or magnetic flux rope associated with a coronal mass ejection (CME). Cosmic-ray observations during FDs can provide information complementary to in situ observations of the local plasma and magnetic field, because cosmic-ray distributions allow remote sensing of distant conditions. Here we develop techniques to determine the GCR anisotropy before and during an FD using data from the worldwide network of neutron monitors, for a case study of the FD starting on 2013 April 13. We find that at times with strong magnetic fluctuations and strong cosmic-ray scattering, there were spikes of high perpendicular anisotropy and weak parallel anisotropy. In contrast, within the CME flux rope there was a strong parallel anisotropy in the direction predicted from a theory of drift motions into one leg of the magnetic flux rope and out the other, confirming that the anisotropy can remotely sense a large-scale flow of GCRs through a magnetic flux structure.

  19. A review of statistical issues with progression-free survival as an interval-censored time-to-event endpoint.

    PubMed

    Sun, Xing; Li, Xiaoyun; Chen, Cong; Song, Yang

    2013-01-01

    The frequent rise of interval-censored time-to-event data in randomized clinical trials (e.g., progression-free survival [PFS] in oncology) challenges statistical researchers in the pharmaceutical industry in various ways. These challenges exist in both trial design and data analysis. Conventional statistical methods treating intervals as fixed points, which are generally practiced by the pharmaceutical industry, sometimes yield inferior or even flawed analysis results in extreme cases for interval-censored data. In this article, we examine the limitations of these standard methods under typical clinical trial settings and further review and compare several existing nonparametric likelihood-based methods for interval-censored data, methods that are more sophisticated yet robust. Trial design issues involved with interval-censored data comprise another topic explored in this article. Unlike right-censored survival data, the expected sample size or power for a trial with interval-censored data relies heavily on the parametric distribution of the baseline survival function as well as the frequency of assessments. There can be substantial power loss in trials with interval-censored data if the assessments are very infrequent. Such an additional dependency controverts many fundamental assumptions and principles in conventional survival trial designs, especially the group sequential design (e.g., the concept of information fraction). In this article, we discuss these fundamental changes and the available tools to work around their impacts. Although progression-free survival is often used as a discussion point in the article, the general conclusions are equally applicable to other interval-censored time-to-event endpoints.

  20. Self-force as a cosmic censor in the Kerr overspinning problem

    NASA Astrophysics Data System (ADS)

    Colleoni, Marta; Barack, Leor; Shah, Abhay G.; van de Meent, Maarten

    2015-10-01

    It is known that a near-extremal Kerr black hole can be spun up beyond its extremal limit by capturing a test particle. Here we show that overspinning is always averted once backreaction from the particle's own gravity is properly taken into account. We focus on nonspinning, uncharged, massive particles thrown in along the equatorial plane and work in the first-order self-force approximation (i.e., we include all relevant corrections to the particle's acceleration through linear order in the ratio, assumed small, between the particle's energy and the black hole's mass). Our calculation is a numerical implementation of a recent analysis by two of us [Phys. Rev. D 91, 104024 (2015)], in which a necessary and sufficient "censorship" condition was formulated for the capture scenario, involving certain self-force quantities calculated on the one-parameter family of unstable circular geodesics in the extremal limit. The self-force information accounts both for radiative losses and for the finite-mass correction to the critical value of the impact parameter. Here we obtain the required self-force data and present strong evidence to suggest that captured particles never drive the black hole beyond its extremal limit. We show, however, that, within our first-order self-force approximation, it is possible to reach the extremal limit with a suitable choice of initial orbital parameters. To rule out such a possibility would require (currently unavailable) information about higher-order self-force corrections.

  1. Quantum Backreaction on Three-Dimensional Black Holes and Naked Singularities.

    PubMed

    Casals, Marc; Fabbri, Alessandro; Martínez, Cristián; Zanelli, Jorge

    2017-03-31

    We analytically investigate backreaction by a quantum scalar field on two rotating Bañados-Teitelboim-Zanelli (BTZ) geometries: that of a black hole and that of a naked singularity. In the former case, we explore the quantum effects on various regions of relevance for a rotating black hole space-time. We find that the quantum effects lead to a growth of both the event horizon and the radius of the ergosphere, and to a reduction of the angular velocity, compared to the unperturbed values. Furthermore, they give rise to the formation of a curvature singularity at the Cauchy horizon and show no evidence of the appearance of a superradiant instability. In the case of a naked singularity, we find that quantum effects lead to the formation of a horizon that shields it, thus supporting evidence for the rôle of quantum mechanics as a cosmic censor in nature.

  2. A random-censoring Poisson model for underreported data.

    PubMed

    de Oliveira, Guilherme Lopes; Loschi, Rosangela Helena; Assunção, Renato Martins

    2017-12-30

    A major challenge when monitoring risks in socially deprived areas of under developed countries is that economic, epidemiological, and social data are typically underreported. Thus, statistical models that do not take the data quality into account will produce biased estimates. To deal with this problem, counts in suspected regions are usually approached as censored information. The censored Poisson model can be considered, but all censored regions must be precisely known a priori, which is not a reasonable assumption in most practical situations. We introduce the random-censoring Poisson model (RCPM) which accounts for the uncertainty about both the count and the data reporting processes. Consequently, for each region, we will be able to estimate the relative risk for the event of interest as well as the censoring probability. To facilitate the posterior sampling process, we propose a Markov chain Monte Carlo scheme based on the data augmentation technique. We run a simulation study comparing the proposed RCPM with 2 competitive models. Different scenarios are considered. RCPM and censored Poisson model are applied to account for potential underreporting of early neonatal mortality counts in regions of Minas Gerais State, Brazil, where data quality is known to be poor. Copyright © 2017 John Wiley & Sons, Ltd.
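
    To make the censoring mechanism concrete, the following Python sketch fits the plain censored-Poisson building block that the RCPM generalizes: a region flagged as underreporting contributes P(Y >= y) to the likelihood rather than P(Y = y), because its recorded count is only a lower bound. This is an illustration under assumptions, not the authors' code; the flags, rates, and data are all hypothetical.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import poisson

      def neg_log_lik(log_rate, counts, exposure, censored):
          mu = np.exp(log_rate) * exposure               # expected count per region
          ll_exact = poisson.logpmf(counts, mu)          # fully reported regions
          ll_cens = np.log(poisson.sf(counts - 1, mu))   # P(Y >= y) for underreported regions
          return -np.sum(np.where(censored, ll_cens, ll_exact))

      rng = np.random.default_rng(0)
      exposure = rng.uniform(0.5, 2.0, size=200)         # hypothetical region exposures
      true_counts = rng.poisson(3.0 * exposure)          # true rate = 3.0
      censored = rng.random(200) < 0.3                   # known-a-priori underreporting flags
      counts = np.where(censored, rng.binomial(true_counts, 0.6), true_counts)

      fit = minimize(neg_log_lik, x0=0.0, args=(counts, exposure, censored))
      print("estimated rate:", np.exp(fit.x[0]))         # close to the true rate of 3.0

    The RCPM replaces the fixed flags above with a latent censoring indicator and estimates a per-region censoring probability within an MCMC scheme.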

  3. Censored quantile regression with recursive partitioning-based weights

    PubMed Central

    Wey, Andrew; Wang, Lan; Rudser, Kyle

    2014-01-01

    Censored quantile regression provides a useful alternative to the Cox proportional hazards model for analyzing survival data. It directly models the conditional quantile of the survival time and hence is easy to interpret. Moreover, it relaxes the proportionality constraint on the hazard function associated with the popular Cox model and is natural for modeling heterogeneity of the data. Recently, Wang and Wang (2009. Locally weighted censored quantile regression. Journal of the American Statistical Association 104, 1117–1128) proposed a locally weighted censored quantile regression approach that allows for covariate-dependent censoring and is less restrictive than other censored quantile regression methods. However, their kernel smoothing-based weighting scheme requires all covariates to be continuous and encounters practical difficulty with even a moderate number of covariates. We propose a new weighting approach that uses recursive partitioning, e.g. survival trees, that offers greater flexibility in handling covariate-dependent censoring in moderately high dimensions and can incorporate both continuous and discrete covariates. We prove that this new weighting scheme leads to consistent estimation of the quantile regression coefficients and demonstrate its effectiveness via Monte Carlo simulations. We also illustrate the new method using a widely recognized data set from a clinical trial on primary biliary cirrhosis. PMID:23975800

  4. Censored Hurdle Negative Binomial Regression (Case Study: Neonatorum Tetanus Case in Indonesia)

    NASA Astrophysics Data System (ADS)

    Yuli Rusdiana, Riza; Zain, Ismaini; Wulan Purnami, Santi

    2017-06-01

    Hurdle negative binomial regression is a method that can be used for a discrete dependent variable with excess zeros and under- or overdispersion. It uses a two-part approach: the first part, the zero hurdle model, estimates the zero elements of the dependent variable, while the second part, a truncated negative binomial model, estimates the nonzero (positive integer) elements. The discrete dependent variable in such cases is censored for some values; the type of censoring studied in this research is right censoring. This study aims to obtain the parameter estimator of hurdle negative binomial regression for a right-censored dependent variable, using maximum likelihood estimation (MLE). The hurdle negative binomial regression model for a right-censored dependent variable is applied to the number of neonatorum tetanus cases in Indonesia. The data are counts that contain zero values for some observations and varying positive values for others. This study also aims to obtain the test statistic of the censored hurdle negative binomial model. Based on the regression results, the factors that influence neonatorum tetanus cases in Indonesia are the percentage of baby health care coverage and neonatal visits.
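
    As a rough illustration of the two-part structure (not the authors' estimator, which additionally handles right censoring of the counts), the Python sketch below writes out a hurdle negative binomial log-likelihood: a logistic hurdle for zero versus positive, and a zero-truncated negative binomial for the positive counts. The parameterization and the toy data are assumptions.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import nbinom

      def neg_log_lik(params, y):
          logit_pi, log_mu, log_r = params
          pi = 1.0 / (1.0 + np.exp(-logit_pi))   # P(y > 0): the hurdle part
          mu, r = np.exp(log_mu), np.exp(log_r)
          p = r / (r + mu)                       # NB success probability
          ll = np.where(
              y == 0,
              np.log(1.0 - pi),                  # zero hurdle
              np.log(pi)                         # zero-truncated NB for positives
              + nbinom.logpmf(np.maximum(y, 1), r, p)
              - np.log1p(-nbinom.pmf(0, r, p)),
          )
          # A right-censored count would instead contribute log P(Y >= y),
          # e.g. via nbinom.sf(y - 1, r, p); that extension is the paper's topic.
          return -np.sum(ll)

      rng = np.random.default_rng(1)
      y = np.where(rng.random(500) < 0.4, 0, 1 + rng.poisson(2.0, 500))  # toy counts
      fit = minimize(neg_log_lik, x0=[0.0, 0.5, 1.0], args=(y,), method="Nelder-Mead")
      print(fit.x)   # fitted (logit pi, log mu, log r)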

  5. Some insight on censored cost estimators.

    PubMed

    Zhao, H; Cheng, Y; Bang, H

    2011-08-30

    Censored survival data analysis has been studied for many years. Yet, the analysis of censored mark variables, such as medical cost, quality-adjusted lifetime, and repeated events, faces a unique challenge that makes standard survival analysis techniques invalid. Because of the 'informative' censorship embedded in censored mark variables, the use of the Kaplan-Meier (Journal of the American Statistical Association 1958; 53:457-481) estimator, as an example, will produce biased estimates. Innovative estimators have been developed in the past decade in order to handle this issue. Even though consistent estimators have been proposed, the formulations and interpretations of some estimators are less intuitive to practitioners. On the other hand, more intuitive estimators have been proposed, but their mathematical properties have not been established. In this paper, we prove the analytic identity between some estimators (a statistically motivated estimator and an intuitive estimator) for censored cost data. Efron (1967) made a similar investigation for censored survival data (between the Kaplan-Meier estimator and the redistribute-to-the-right algorithm). Therefore, we view our study as an extension of Efron's work to informatively censored data so that our findings could be applied to other marked variables. Copyright © 2011 John Wiley & Sons, Ltd.
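
    The redistribute-to-the-right construction mentioned above is simple enough to state in a few lines. The Python sketch below (toy data, not tied to the paper's cost estimators) passes each censored observation's probability mass equally to the observations on its right; the resulting masses at the event times coincide with the Kaplan-Meier jumps.

      import numpy as np

      def redistribute_to_the_right(times, event):
          order = np.argsort(times, kind="stable")
          t, d = np.asarray(times)[order], np.asarray(event)[order]
          n = len(t)
          mass = np.full(n, 1.0 / n)                 # start with mass 1/n everywhere
          for i in range(n):
              if not d[i] and i < n - 1:             # censored: pass mass rightward
                  mass[i + 1:] += mass[i] / (n - 1 - i)
                  mass[i] = 0.0
          return t, d, mass                          # mass > 0 only at event times
                                                     # (a censored last point keeps its mass)

      times = [2.0, 3.0, 3.5, 5.0, 6.0]
      event = [1, 0, 1, 0, 1]                        # 0 = censored
      t, d, m = redistribute_to_the_right(times, event)
      print(dict(zip(t, np.round(m, 4))))            # matches the Kaplan-Meier jumps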

  6. A multivariate cure model for left-censored and right-censored data with application to colorectal cancer screening patterns.

    PubMed

    Hagar, Yolanda C; Harvey, Danielle J; Beckett, Laurel A

    2016-08-30

    We develop a multivariate cure survival model to estimate lifetime patterns of colorectal cancer screening. Screening data cover long periods of time, with sparse observations for each person. Some events may occur before the study begins or after the study ends, so the data are both left-censored and right-censored, and some individuals are never screened (the 'cured' population). We propose a multivariate parametric cure model that can be used with left-censored and right-censored data. Our model allows for the estimation of the time to screening as well as the average number of times individuals will be screened. We calculate likelihood functions based on the observations for each subject using a distribution that accounts for within-subject correlation and estimate parameters using Markov chain Monte Carlo methods. We apply our methods to the estimation of lifetime colorectal cancer screening behavior in the SEER-Medicare data set. Copyright © 2016 John Wiley & Sons, Ltd.

  7. Constraints on cosmic ray and PeV neutrino production in blazars

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, B. Theodore; Li, Zhuo, E-mail: zhangbing91@pku.edu.cn, E-mail: zhuo.li@pku.edu.cn

    2017-03-01

    IceCube has detected a cumulative flux of PeV neutrinos whose origin is unknown. Blazars, active galactic nuclei with relativistic jets pointing toward us, have long and widely been expected to be among the strong candidates for high-energy neutrino sources. The neutrino production depends strongly on the cosmic ray power of blazar jets, which is largely unknown. The recent null results in stacking searches of neutrinos for several blazar samples by IceCube put upper limits on the neutrino fluxes from these blazars. Here we compute the cosmic ray power and PeV neutrino flux of Fermi-LAT blazars, and find that the upper limits for known blazar sources give a stringent constraint on the cosmic ray loading factor of blazar jets (i.e., the ratio of the cosmic ray to bolometric radiation luminosity of blazar jets), ξ_cr ≲ (2–10)ζ^(−1) (with ζ ≲ 1 the remaining fraction of cosmic ray energy when propagating into the blazar broad line region) for a flat cosmic ray spectrum, and that the cumulative PeV neutrino flux contributed by all-sky blazars is a fraction ≲ (10–50)% of the IceCube-detected flux.

  8. Modification of the parallel scattering mean free path of cosmic rays in the presence of adiabatic focusing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, H.-Q.; Schlickeiser, R., E-mail: hqhe@mail.iggcas.ac.cn, E-mail: rsch@tp4.rub.de

    The cosmic ray mean free path in a large-scale nonuniform guide magnetic field with superposed magnetostatic turbulence is calculated to clarify some conflicting results in the literature. A new, exact integro-differential equation for the cosmic-ray anisotropy is derived from the Fokker-Planck transport equation. A perturbation analysis of this integro-differential equation leads to an analytical expression for the cosmic ray anisotropy and the focused transport equation for the isotropic part of the cosmic ray distribution function. The derived parallel spatial diffusion coefficient and the associated cosmic ray mean free path include the effect of adiabatic focusing and reduce to the standard forms in the limit of a uniform guide magnetic field. For the illustrative case of isotropic pitch angle scattering, the derived mean free path agrees with the earlier expressions of Beeck and Wibberenz, Bieber and Burger, Kota, and Litvinenko, but disagrees with the result of Shalchi. The disagreement with the expression of Shalchi is particularly strong in the limit of strong adiabatic focusing.

  9. A cigarette manufacturer and a managed care company collaborate to censor health information targeted at employees.

    PubMed

    Muggli, Monique E; Hurt, Richard D

    2004-08-01

    A review of internal tobacco company documents showed that the tobacco company Philip Morris and the insurance company CIGNA collaborated to censor accurate information on the harm of smoking and on environmental tobacco smoke exposure from CIGNA health newsletters sent to employees of Philip Morris and its affiliates. From 1996 to 1998, 5 of the 8 CIGNA newsletters discussed in the internal tobacco documents were censored. We recommend that accrediting bodies mandate that health plans not censor employee-directed health information at the request of employers.

  10. SEMIPARAMETRIC EFFICIENT ESTIMATION FOR SHARED-FRAILTY MODELS WITH DOUBLY-CENSORED CLUSTERED DATA

    PubMed Central

    Wang, Jane-Ling

    2018-01-01

    In this paper, we investigate frailty models for clustered survival data that are subject to both left- and right-censoring, termed "doubly-censored data". This model extends the current survival literature by broadening the application of frailty models from right-censored data to a more complicated situation with additional left censoring. Our approach is motivated by a recent Hepatitis B study where the sample consists of families. We adopt a likelihood approach that aims at the nonparametric maximum likelihood estimators (NPMLE). A new algorithm is proposed, which not only works well for clustered data but also improves over the existing algorithm for independent and doubly-censored data, a special case in which the frailty variable is a constant equal to one. This special case is well known to be a computational challenge due to the left-censoring feature of the data. The new algorithm not only resolves this challenge but also accommodates the additional frailty variable effectively. Asymptotic properties of the NPMLE are established, along with semiparametric efficiency of the NPMLE for the finite-dimensional parameters. The consistency of bootstrap estimators for the standard errors of the NPMLE is also discussed. We conducted simulations to illustrate the numerical performance and robustness of the proposed algorithm, which is also applied to the Hepatitis B data. PMID:29527068

  11. An identifiable model for informative censoring

    USGS Publications Warehouse

    Link, W.A.; Wegman, E.J.; Gantz, D.T.; Miller, J.J.

    1988-01-01

    The usual model for censored survival analysis requires the assumption that censoring of observations arises only due to causes unrelated to the lifetime under consideration. It is easy to envision situations in which this assumption is unwarranted, and in which use of the Kaplan-Meier estimator and associated techniques will lead to unreliable analyses.

  12. Estimation of Recurrence of Colorectal Adenomas with Dependent Censoring Using Weighted Logistic Regression

    PubMed Central

    Hsu, Chiu-Hsieh; Li, Yisheng; Long, Qi; Zhao, Qiuhong; Lance, Peter

    2011-01-01

    In colorectal polyp prevention trials, estimation of the rate of recurrence of adenomas at the end of the trial may be complicated by dependent censoring, that is, time to follow-up colonoscopy and dropout may be dependent on time to recurrence. Assuming that the auxiliary variables capture the dependence between recurrence and censoring times, we propose to fit two working models with the auxiliary variables as covariates to define risk groups and then extend an existing weighted logistic regression method for independent censoring to each risk group to accommodate potential dependent censoring. In a simulation study, we show that the proposed method results in both a gain in efficiency and reduction in bias for estimating the recurrence rate. We illustrate the methodology by analyzing a recurrent adenoma dataset from a colorectal polyp prevention trial. PMID:22065985

  13. Testing independence of bivariate interval-censored data using modified Kendall's tau statistic.

    PubMed

    Kim, Yuneung; Lim, Johan; Park, DoHwan

    2015-11-01

    In this paper, we study a nonparametric procedure to test independence of bivariate interval-censored data, for both current status data (case 1 interval-censored data) and case 2 interval-censored data. To do so, we propose a score-based modification of the Kendall's tau statistic for bivariate interval-censored data. Our modification defines the Kendall's tau statistic with expected numbers of concordant and discordant pairs of data. The performance of the modified approach is illustrated by simulation studies and application to the AIDS study. We compare our method to alternative approaches such as the two-stage estimation method by Sun et al. (Scandinavian Journal of Statistics, 2006) and the multiple imputation method by Betensky and Finkelstein (Statistics in Medicine, 1999b). © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
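
    The flavor of the modification can be seen in a simplified form: with interval-censored observations, a pair is unambiguously ordered only when the two intervals do not overlap. The Python sketch below counts just those unambiguous pairs; the paper's statistic instead replaces these counts with expected numbers of concordant and discordant pairs under an estimated joint distribution. The data and the normalization are illustrative assumptions.

      from itertools import combinations

      def order(a, b):
          if a[1] < b[0]:
              return -1          # interval a lies entirely below b
          if b[1] < a[0]:
              return 1
          return 0               # overlapping intervals: order unknown

      def interval_tau(x_ints, y_ints):
          conc = disc = 0
          for i, j in combinations(range(len(x_ints)), 2):
              dx = order(x_ints[i], x_ints[j])
              dy = order(y_ints[i], y_ints[j])
              if dx and dy:      # skip ambiguous pairs
                  conc += dx == dy
                  disc += dx != dy
          n_pairs = len(x_ints) * (len(x_ints) - 1) / 2
          return (conc - disc) / n_pairs

      x = [(1, 2), (3, 4), (0, 5)]   # hypothetical observation intervals
      y = [(2, 3), (4, 6), (1, 2)]
      print(interval_tau(x, y))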

  14. Sieve estimation in a Markov illness-death process under dual censoring.

    PubMed

    Boruvka, Audrey; Cook, Richard J

    2016-04-01

    Semiparametric methods are well established for the analysis of a progressive Markov illness-death process observed up to a noninformative right censoring time. However, often the intermediate and terminal events are censored in different ways, leading to a dual censoring scheme. In such settings, unbiased estimation of the cumulative transition intensity functions cannot be achieved without some degree of smoothing. To overcome this problem, we develop a sieve maximum likelihood approach for inference on the hazard ratio. A simulation study shows that the sieve estimator offers improved finite-sample performance over common imputation-based alternatives and is robust to some forms of dependent censoring. The proposed method is illustrated using data from cancer trials. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  15. Measurement of secondary cosmic ray intensity at Regener-Pfotzer height using low-cost weather balloons and its correlation with solar activity

    NASA Astrophysics Data System (ADS)

    Sarkar, Ritabrata; Chakrabarti, Sandip K.; Pal, Partha Sarathi; Bhowmick, Debashis; Bhattacharya, Arnab

    2017-09-01

    Cosmic ray flux in our planetary system is primarily modulated by solar activity. Radiation effects of cosmic rays on the Earth depend strongly on latitude due to the variation of the geomagnetic field strength. To study these effects we carried out a series of measurements of the radiation characteristics in the atmosphere due to cosmic rays from various places (geomagnetic latitude ∼14.50°N) in West Bengal, India, located near the Tropic of Cancer, over several years (2012-2016), particularly covering the solar maximum of the 24th solar cycle. We present low energy (15-140 keV) secondary radiation measurements extending from the ground to near space (∼40 km) using a scintillator detector on board rubber weather balloons. We also concentrate on the cosmic ray intensity at the Regener-Pfotzer maximum and find a strong anti-correlation between this intensity and the solar activity even at low geomagnetic latitudes.

  16. The concordance index C and the Mann-Whitney parameter Pr(X>Y) with randomly censored data.

    PubMed

    Koziol, James A; Jia, Zhenyu

    2009-06-01

    Harrell's c-index or concordance C has been widely used as a measure of separation of two survival distributions. In the absence of censored data, the c-index estimates the Mann-Whitney parameter Pr(X>Y), which has been repeatedly utilized in various statistical contexts. In the presence of randomly censored data, the c-index no longer estimates Pr(X>Y); rather, it estimates a parameter that involves the underlying censoring distributions. This is in contrast to Efron's maximum likelihood estimator of the Mann-Whitney parameter, which is recommended in the setting of random censorship.
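
    For reference, a toy Python computation of Harrell's c is sketched below: only "usable" pairs, in which the smaller time is an observed event, enter the comparison. Under random censoring the quantity this estimates drifts away from Pr(X>Y), which is the point made above. The data and risk scores are hypothetical.

      import numpy as np

      def harrells_c(time, event, score):
          conc = usable = 0.0
          n = len(time)
          for i in range(n):
              for j in range(n):
                  if time[i] < time[j] and event[i]:   # usable: earlier event observed
                      usable += 1
                      conc += (score[i] > score[j]) + 0.5 * (score[i] == score[j])
          return conc / usable

      time = np.array([5.0, 8.0, 3.0, 9.0, 6.0])
      event = np.array([1, 0, 1, 1, 0])                # 0 = censored
      score = np.array([2.1, 0.7, 3.0, 0.2, 2.5])      # higher score = higher risk
      print(harrells_c(time, event, score))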

  17. Threshold regression to accommodate a censored covariate.

    PubMed

    Qian, Jing; Chiou, Sy Han; Maye, Jacqueline E; Atem, Folefac; Johnson, Keith A; Betensky, Rebecca A

    2018-06-22

    In several common study designs, regression modeling is complicated by the presence of censored covariates. Examples of such covariates include maternal age of onset of dementia that may be right censored in an Alzheimer's amyloid imaging study of healthy subjects, metabolite measurements that are subject to limit of detection censoring in a case-control study of cardiovascular disease, and progressive biomarkers whose baseline values are of interest, but are measured post-baseline in longitudinal neuropsychological studies of Alzheimer's disease. We propose threshold regression approaches for linear regression models with a covariate that is subject to random censoring. Threshold regression methods allow for immediate testing of the significance of the effect of a censored covariate. In addition, they provide for unbiased estimation of the regression coefficient of the censored covariate. We derive the asymptotic properties of the resulting estimators under mild regularity conditions. Simulations demonstrate that the proposed estimators have good finite-sample performance, and often offer improved efficiency over existing methods. We also derive a principled method for selection of the threshold. We illustrate the approach in application to an Alzheimer's disease study that investigated brain amyloid levels in older individuals, as measured through positron emission tomography scans, as a function of maternal age of dementia onset, with adjustment for other covariates. We have developed an R package, censCov, for implementation of our method, available at CRAN. © 2018, The International Biometric Society.

  18. Cosmic Censorship for Gowdy Spacetimes.

    PubMed

    Ringström, Hans

    2010-01-01

    Due to the complexity of Einstein's equations, it is often natural to study a question of interest in the framework of a restricted class of solutions. One way to impose a restriction is to consider solutions satisfying a given symmetry condition. There are many possible choices, but the present article is concerned with one particular choice, which we shall refer to as Gowdy symmetry. We begin by explaining the origin and meaning of this symmetry type, which has been used as a simplifying assumption in various contexts, some of which we shall mention. Nevertheless, the subject of interest here is strong cosmic censorship. Consequently, after having described what the Gowdy class of spacetimes is, we describe, as seen from the perspective of a mathematician, what is meant by strong cosmic censorship. The existing results on cosmic censorship are based on a detailed analysis of the asymptotic behavior of solutions. This analysis is in part motivated by conjectures, such as the BKL conjecture, which we shall therefore briefly describe. However, the emphasis of the article is on the mathematical analysis of the asymptotics, due to its central importance in the proof and in the hope that it might be of relevance more generally. The article ends with a description of the results that have been obtained concerning strong cosmic censorship in the class of Gowdy spacetimes.

  19. Twenty five years long survival analysis of individual shortleaf pine trees

    Treesearch

    Pradip Saud; Thomas B. Lynch; James M. Guldin

    2016-01-01

    A semiparametric Cox proportional hazards model is preferred when censored data and survival time information are available (Kleinbaum and Klein 1996; Allison 2010). Censored data are observations that have incomplete information related to the survival time or event time of interest. In repeated forest measurements, observations are usually either right censored or...

  20. Diffusion of strongly magnetized cosmic ray particles in a turbulent medium

    NASA Technical Reports Server (NTRS)

    Ptuskin, V. S.

    1985-01-01

    Cosmic ray (CR) propagation in a turbulent medium is usually considered in the diffusion approximation. Here, the diffusion equation is obtained for strongly magnetized particles in general form. The influence of a large-scale random magnetic field on CR propagation in the interstellar medium is discussed. Cosmic rays are assumed to propagate in a medium with a regular field H and an ensemble of random MHD waves. The energy density of waves on scales smaller than the free path l of CR particles is small. The collision integral of the general form which describes the interaction between relativistic particles and waves in the quasilinear approximation is used.

  1. A Local Agreement Pattern Measure Based on Hazard Functions for Survival Outcomes

    PubMed Central

    Dai, Tian; Guo, Ying; Peng, Limin; Manatunga, Amita K.

    2017-01-01

    Assessing agreement is often of interest in biomedical and clinical research when measurements are obtained on the same subjects by different raters or methods. Most classical agreement methods have been focused on global summary statistics, which cannot be used to describe various local agreement patterns. The objective of this work is to study the local agreement pattern between two continuous measurements subject to censoring. In this paper, we propose a new agreement measure based on bivariate hazard functions to characterize the local agreement pattern between two correlated survival outcomes. The proposed measure naturally accommodates censored observations, fully captures the dependence structure between bivariate survival times and provides detailed information on how the strength of agreement evolves over time. We develop a nonparametric estimation method for the proposed local agreement pattern measure and study theoretical properties including strong consistency and asymptotic normality. We then evaluate the performance of the estimator through simulation studies and illustrate the method using a prostate cancer data example. PMID:28724196

  2. A local agreement pattern measure based on hazard functions for survival outcomes.

    PubMed

    Dai, Tian; Guo, Ying; Peng, Limin; Manatunga, Amita K

    2018-03-01

    Assessing agreement is often of interest in biomedical and clinical research when measurements are obtained on the same subjects by different raters or methods. Most classical agreement methods have been focused on global summary statistics, which cannot be used to describe various local agreement patterns. The objective of this work is to study the local agreement pattern between two continuous measurements subject to censoring. In this article, we propose a new agreement measure based on bivariate hazard functions to characterize the local agreement pattern between two correlated survival outcomes. The proposed measure naturally accommodates censored observations, fully captures the dependence structure between bivariate survival times and provides detailed information on how the strength of agreement evolves over time. We develop a nonparametric estimation method for the proposed local agreement pattern measure and study theoretical properties including strong consistency and asymptotic normality. We then evaluate the performance of the estimator through simulation studies and illustrate the method using a prostate cancer data example. © 2017, The International Biometric Society.

  3. Cross section parameterizations for cosmic ray nuclei. 1: Single nucleon removal

    NASA Technical Reports Server (NTRS)

    Norbury, John W.; Townsend, Lawrence W.

    1992-01-01

    Parameterizations of single nucleon removal from electromagnetic and strong interactions of cosmic rays with nuclei are presented. These parameterizations are based upon the most accurate theoretical calculations available to date. They should be very suitable for use in cosmic ray propagation through interstellar space, the Earth's atmosphere, lunar samples, meteorites, spacecraft walls and lunar and martian habitats.

  4. Evaluation of methods for managing censored results when calculating the geometric mean.

    PubMed

    Mikkonen, Hannah G; Clarke, Bradley O; Dasika, Raghava; Wallis, Christian J; Reichman, Suzie M

    2018-01-01

    Currently, there are conflicting views on the best statistical methods for managing censored environmental data. The method commonly applied by environmental science researchers and professionals is to substitute half the limit of reporting for derivation of summary statistics. This approach has been criticised by some researchers, raising questions around the interpretation of historical scientific data. This study evaluated four complete soil datasets, at three levels of simulated censoring, to test the accuracy of a range of censored data management methods for calculation of the geometric mean. The methods assessed included removal of censored results, substitution of a fixed value (near zero, half the limit of reporting, and the limit of reporting), substitution by nearest neighbour imputation, maximum likelihood estimation, regression on order statistics, and Kaplan-Meier/survival analysis. This is the first time such a comprehensive range of censored data management methods have been applied to assess the accuracy of calculation of the geometric mean. The results of this study show that, for describing the geometric mean, the simple method of substituting half the limit of reporting is comparable to or more accurate than alternative censored data management methods, including nearest neighbour imputation methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
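
    The substitution method that performed well in this study is trivial to apply. A minimal Python sketch, with hypothetical concentrations and limit of reporting (LOR), is:

      import numpy as np

      def geometric_mean_substitution(values, lor, factor=0.5):
          """Replace non-detects (np.nan) by factor * LOR, then take the geometric mean."""
          filled = np.where(np.isnan(values), factor * lor, values)
          return np.exp(np.mean(np.log(filled)))

      conc = np.array([0.8, np.nan, 2.4, 5.1, np.nan, 1.2])   # mg/kg; nan = below LOR
      print(geometric_mean_substitution(conc, lor=0.5))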

  5. Calibrating accelerometer sensor on android phone with Accelerograph TDL 303 QS for earthquake online recorder

    NASA Astrophysics Data System (ADS)

    Riantana, R.; Darsono, D.; Triyono, A.; Azimut, H. B.

    2016-11-01

    Calibration of the Android sensor was done by placing the device in a mounting at the side of the accelerograph TDL 303 QS, which served as the reference. The leveling of both devices was set the same, so that their states could be assumed identical. Vibrations were then applied to obtain the maximum amplitude values of both sensors, so that the coefficient of proportionality between them could be determined. The results on both devices give the following Peak Ground Acceleration (PGA) values: on the x axis (EW), the Android sensor obtained a PGA of -2.4478145 gal versus -2.5504 gal for the TDL 303 QS; on the y axis (NS), the Android sensor obtained a PGA of 3.0066964 gal versus 3.2073 gal for the TDL 303 QS; on the z axis (UD), the Android sensor obtained a PGA of -14.0702377 gal versus -13.2927 gal for the TDL 303 QS. The correction value for the Android accelerometer sensor is ± 0.1 gal for the x axis (EW), ± 0.2 gal for the y axis (NS), and ± 0.7 gal for the z axis (UD).

  6. A nonparametric method for assessment of interactions in a median regression model for analyzing right censored data.

    PubMed

    Lee, MinJae; Rahbar, Mohammad H; Talebi, Hooshang

    2018-01-01

    We propose a nonparametric test for interactions when we are concerned with investigating the simultaneous effects of two or more factors in a median regression model with right censored survival data. Our approach is developed to detect interaction in special situations, when the covariates have a finite number of levels with a limited number of observations in each level, and it allows varying levels of variance and censoring at different levels of the covariates. Through simulation studies, we compare the power of detecting an interaction between the study group variable and a covariate using our proposed procedure with that of the Cox proportional hazards (PH) model and the censored quantile regression model. We also assess the impact of censoring rate and type on the standard error of the estimators of parameters. Finally, we illustrate application of our proposed method to real-life data from the Prospective Observational Multicenter Major Trauma Transfusion (PROMMTT) study to test an interaction effect between type of injury and study sites, using the median time for a trauma patient to receive three units of red blood cells. The results from simulation studies indicate that our procedure performs better than both the Cox PH model and the censored quantile regression model in terms of statistical power for detecting the interaction, especially when the number of observations is small. It is also relatively insensitive to censoring rates and even to the presence of censoring that is independent only conditionally on the levels of the covariates.

  7. A cosmic-ray-mediated shock in the solar system

    NASA Technical Reports Server (NTRS)

    Eichler, D.

    1981-01-01

    It is pointed out that the flare-induced blast wave of Aug. 4, 1972, the most violent disturbance in the solar wind on record, produced cosmic rays with an efficiency of about 50%. Such a high efficiency is predicted by the self-regulating production model of cosmic-ray origin in shocks. Most interplanetary shocks, according to simple theoretical analysis, are not strong enough to produce cosmic rays efficiently. However, if shock strength is the key parameter governing efficiency, as present interplanetary data suggest, then shocks from supernova blasts, quasar outbursts, and other violent astrophysical phenomena should be extremely efficient sources of cosmic rays.

  8. Influence of Dust Loading on Atmospheric Ionizing Radiation on Mars

    NASA Technical Reports Server (NTRS)

    Norman, Ryan B.; Gronoff, Guillaume; Mertens, Christopher J.

    2014-01-01

    Measuring the radiation environment at the surface of Mars is the primary goal of the Radiation Assessment Detector on the NASA Mars Science Laboratory's Curiosity rover. One of the conditions that Curiosity will likely encounter is a dust storm. The objective of this paper is to compute the cosmic ray ionization in different conditions, including dust storms, as these various conditions are likely to be encountered by Curiosity at some point. In the present work, the Nowcast of Atmospheric Ionizing Radiation for Aviation Safety model, recently modified for Mars, was used along with the Badhwar & O'Neill 2010 galactic cosmic ray model. In addition to galactic cosmic rays, five different solar energetic particle event spectra were considered. For all input radiation environments, radiation dose throughout the atmosphere and at the surface was investigated as a function of atmospheric dust loading. It is demonstrated that for galactic cosmic rays, the ionization depends strongly on the atmosphere profile. Moreover, it is shown that solar energetic particle events strongly increase the ionization throughout the atmosphere, including ground level, and can account for the radio blackout conditions observed by the Mars Advanced Radar for Subsurface and Ionospheric Sounding instrument on the Mars Express spacecraft. These results demonstrate that the cosmic rays' influence on the Martian surface chemistry is strongly dependent on solar and atmospheric conditions that should be taken into account for future studies.

  9. Two-sample tests and one-way MANOVA for multivariate biomarker data with nondetects.

    PubMed

    Thulin, M

    2016-09-10

    Testing whether the mean vector of a multivariate set of biomarkers differs between several populations is an increasingly common problem in medical research. Biomarker data is often left censored because some measurements fall below the laboratory's detection limit. We investigate how such censoring affects multivariate two-sample and one-way multivariate analysis of variance tests. Type I error rates, power and robustness to increasing censoring are studied, under both normality and non-normality. Parametric tests are found to perform better than non-parametric alternatives, indicating that the current recommendations for analysis of censored multivariate data may have to be revised. Copyright © 2016 John Wiley & Sons, Ltd.

  10. Supernova Remnant Kes 17: An Efficient Cosmic Ray Accelerator inside a Molecular Cloud

    NASA Astrophysics Data System (ADS)

    Gelfand, Joseph; Slane, Patrick; Hughes, John; Temim, Tea; Castro, Daniel; Rakowski, Cara

    Supernova remnants are believed to be the dominant source of cosmic-ray protons below the "knee" in the energy spectrum. However, relatively few supernova remnants have been identified as efficient producers of cosmic-ray protons. In this talk, I will present evidence that the production of cosmic-ray protons is required to explain the broadband non-thermal spectrum of supernova remnant Kes 17 (SNR G304.6+0.1). Evidence for efficient cosmic ray acceleration in Kes 17 supports recent theoretical work concluding that the strong magnetic field, turbulence, and clumpy nature of molecular clouds enhance cosmic ray production in supernova remnants. While additional observations are needed to confirm this interpretation, further study of Kes 17 and similar sources is important for understanding how cosmic rays are accelerated in supernova remnants.

  11. On the origin of cosmic rays. [gamma rays and supernova remnants

    NASA Technical Reports Server (NTRS)

    Stecker, F. W.

    1975-01-01

    Using recent surveys of molecular clouds and gamma rays in the galaxy, it is possible to determine the distribution of 1 to 10 GeV cosmic-ray nucleons in the galaxy. This distribution appears to be identical to the supernova remnant distribution to within experimental error and provides strong support for the hypothesis that supernovae produce most of the observed cosmic rays. This distribution resembles that of OB associations of average age approximately 30 million years, suggesting that cosmic rays are produced by Population I objects about 30 million years after their birth.

  12. A proportional hazards regression model for the subdistribution with right-censored and left-truncated competing risks data

    PubMed Central

    Zhang, Xu; Zhang, Mei-Jie; Fine, Jason

    2012-01-01

    With competing risks failure time data, one often needs to assess the covariate effects on the cumulative incidence probabilities. Fine and Gray proposed a proportional hazards regression model to directly model the subdistribution of a competing risk. They developed the estimating procedure for right-censored competing risks data, based on the inverse probability of censoring weighting. Right-censored and left-truncated competing risks data sometimes occur in biomedical research. In this paper, we study the proportional hazards regression model for the subdistribution of a competing risk with right-censored and left-truncated data. We adopt a new weighting technique to estimate the parameters in this model. We have derived the large sample properties of the proposed estimators. To illustrate the application of the new method, we analyze the failure time data for children with acute leukemia. In this example, the failure times for children who had bone marrow transplants were left truncated. PMID:21557288

  13. JPRS Report, Near East and South Asia.

    DTIC Science & Technology

    1991-07-08

    …and our culture. We are not going to play at being censors," Ramdane adds. …collaborators, two excellent connoisseurs of Berber language and culture… …its conceived plan, wars will be started between countries for their consumption. There is only one way to avoid this new international imperialist system and that is, just… …propaganda war, America censored war coverage. The Western media which protested censor restrictions imposed during…

  14. Work Status Choice and the Distribution of Family Earnings.

    DTIC Science & Technology

    1984-11-01

    …were in the market. Since wages for secondary earners are observable only for market participants, censoring corrections will have to be made to … obtain the true correlation of earners' earnings. The problem of censoring corrections has been extensively studied in the female labor supply … earners are defined to be male members other than the HH. The censoring framework is fairly similar to the occupation-choice model discussed earlier

  15. Influence assessment in censored mixed-effects models using the multivariate Student’s-t distribution

    PubMed Central

    Matos, Larissa A.; Bandyopadhyay, Dipankar; Castro, Luis M.; Lachos, Victor H.

    2015-01-01

    In biomedical studies on HIV RNA dynamics, viral loads generate repeated measures that are often subjected to upper and lower detection limits, and hence these responses are either left- or right-censored. Linear and non-linear mixed-effects censored (LMEC/NLMEC) models are routinely used to analyse these longitudinal data, with normality assumptions for the random effects and residual errors. However, the derived inference may not be robust when these underlying normality assumptions are questionable, especially in the presence of outliers and thick tails. Motivated by this, Matos et al. (2013b) recently proposed an exact EM-type algorithm for LMEC/NLMEC models using a multivariate Student's-t distribution, with closed-form expressions at the E-step. In this paper, we develop influence diagnostics for LMEC/NLMEC models using the multivariate Student's-t density, based on the conditional expectation of the complete data log-likelihood. This partially eliminates the complexity associated with the approach of Cook (1977, 1986) for censored mixed-effects models. The new methodology is illustrated via an application to a longitudinal HIV dataset. In addition, a simulation study explores the accuracy of the proposed measures in detecting possible influential observations for heavy-tailed censored data under different perturbation and censoring schemes. PMID:26190871

  16. Double inverse-weighted estimation of cumulative treatment effects under nonproportional hazards and dependent censoring.

    PubMed

    Schaubel, Douglas E; Wei, Guanghui

    2011-03-01

    In medical studies of time-to-event data, nonproportional hazards and dependent censoring are very common issues when estimating the treatment effect. A traditional method for dealing with time-dependent treatment effects is to model the time-dependence parametrically. Limitations of this approach include the difficulty to verify the correctness of the specified functional form and the fact that, in the presence of a treatment effect that varies over time, investigators are usually interested in the cumulative as opposed to instantaneous treatment effect. In many applications, censoring time is not independent of event time. Therefore, we propose methods for estimating the cumulative treatment effect in the presence of nonproportional hazards and dependent censoring. Three measures are proposed, including the ratio of cumulative hazards, relative risk, and difference in restricted mean lifetime. For each measure, we propose a double inverse-weighted estimator, constructed by first using inverse probability of treatment weighting (IPTW) to balance the treatment-specific covariate distributions, then using inverse probability of censoring weighting (IPCW) to overcome the dependent censoring. The proposed estimators are shown to be consistent and asymptotically normal. We study their finite-sample properties through simulation. The proposed methods are used to compare kidney wait-list mortality by race. © 2010, The International Biometric Society.
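
    A conceptual single-time-point Python sketch of the double weighting is given below. It is not the paper's estimator, which applies the weights within cumulative-effect measures over follow-up time; the covariates, models, and data are assumptions made for illustration.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(2)
      n = 1000
      x = rng.normal(size=(n, 2))                       # baseline covariates
      treat = rng.binomial(1, 1 / (1 + np.exp(-x[:, 0])))
      uncensored = rng.binomial(1, 1 / (1 + np.exp(-1.5 - 0.5 * x[:, 1])))

      # IPTW: inverse of the estimated probability of the treatment received
      ps = LogisticRegression().fit(x, treat).predict_proba(x)[:, 1]
      w_trt = np.where(treat == 1, 1 / ps, 1 / (1 - ps))

      # IPCW: inverse of the estimated probability of remaining uncensored
      pc = LogisticRegression().fit(x, uncensored).predict_proba(x)[:, 1]
      w_cens = np.where(uncensored == 1, 1 / pc, 0.0)   # censored subjects drop out

      weights = w_trt * w_cens                          # combined double weight
      print(weights[uncensored == 1][:5])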

  17. On the estimation of intracluster correlation for time-to-event outcomes in cluster randomized trials.

    PubMed

    Kalia, Sumeet; Klar, Neil; Donner, Allan

    2016-12-30

    Cluster randomized trials (CRTs) involve the random assignment of intact social units rather than independent subjects to intervention groups. Time-to-event outcomes often are endpoints in CRTs. Analyses of such data need to account for the correlation among cluster members. The intracluster correlation coefficient (ICC) is used to assess the similarity among binary and continuous outcomes that belong to the same cluster. However, estimating the ICC in CRTs with time-to-event outcomes is a challenge because of the presence of censored observations. The literature suggests that the ICC may be estimated using either censoring indicators or observed event times. A simulation study explores the effect of administrative censoring on estimating the ICC. Results show that ICC estimators derived from censoring indicators or observed event times are negatively biased. Analytic work further supports these results. Observed event times are preferred to estimate the ICC under minimum frequency of administrative censoring. To our knowledge, the existing literature provides no practical guidance on the estimation of ICC when substantial amount of administrative censoring is present. The results from this study corroborate the need for further methodological research on estimating the ICC for correlated time-to-event outcomes. Copyright © 2016 John Wiley & Sons, Ltd.

  18. Regulating cinematic stories about reproduction: pregnancy, childbirth, abortion and movie censorship in the US, 1930-1958.

    PubMed

    Kirby, David A

    2017-09-01

    In the mid-twentieth century film studios sent their screenplays to Hollywood's official censorship body, the Production Code Administration (PCA), and to the Catholic Church's Legion of Decency for approval and recommendations for revision. This article examines the negotiations between filmmakers and censorship groups in order to show the stories that censors did, and did not, want told about pregnancy, childbirth and abortion, as well as how studios fought to tell their own stories about human reproduction. I find that censors considered pregnancy to be a state of grace and a holy obligation that was restricted to married women. For censors, human reproduction was not only a private matter, it was also an unpleasant biological process whose entertainment value was questionable. They worried that realistic portrayals of pregnancy and childbirth would scare young women away from pursuing motherhood. In addition, I demonstrate how filmmakers overcame censors' strict prohibitions against abortion by utilizing ambiguity in their storytelling. Ultimately, I argue that censors believed that pregnancy and childbirth should be celebrated but not seen. But if pregnancy and childbirth were required then censors preferred mythic versions of motherhood instead of what they believed to be the sacred but horrific biological reality of human reproduction.

  19. Censoring approach to the detection limits in X-ray fluorescence analysis

    NASA Astrophysics Data System (ADS)

    Pajek, M.; Kubala-Kukuś, A.

    2004-10-01

    We demonstrate that the effect of detection limits in X-ray fluorescence analysis (XRF), which limits the determination of very low concentrations of trace elements and results in the appearance of so-called "nondetects", can be accounted for using the statistical concept of censoring. More precisely, the results of such measurements can be viewed as left randomly censored data, which can further be analyzed using the Kaplan-Meier method, correcting the data for the presence of nondetects. Using this approach, the results of measured, detection-limit-censored concentrations can be interpreted in a nonparametric manner including the correction for the nondetects, i.e. the measurements in which the concentrations were found to be below the actual detection limits. Moreover, using the Monte Carlo simulation technique we show that by using the Kaplan-Meier approach the corrected mean concentrations for a population of samples can be estimated within a few percent uncertainty with respect to the simulated, uncensored data. This practically means that the final uncertainties of the estimated mean values are limited in fact by the number of studied samples and not by the correction procedure itself. The discussed left random-censoring approach was applied to analyze the XRF detection-limit-censored concentration measurements of trace elements in biomedical samples.
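
    The trick that lets the product-limit estimator handle nondetects can be sketched briefly: flipping the data about a large constant M turns left-censored concentrations into right-censored "survival times", to which the ordinary Kaplan-Meier estimator applies. The Python below is a toy version with hypothetical concentrations, not the authors' implementation.

      import numpy as np

      def km_left_censored(values, detected, M=None):
          """Estimate P(X < x) at the detected values; nondetects are left-censored."""
          values = np.asarray(values, dtype=float)
          M = M if M is not None else values.max() + 1.0
          t = M - values                           # flip: left- becomes right-censoring
          d = np.asarray(detected, dtype=bool)     # a detected value is an "event"
          order = np.argsort(t, kind="stable")
          t, d = t[order], d[order]
          s, est = 1.0, {}
          n = len(t)
          for i in range(n):
              if d[i]:
                  s *= 1.0 - 1.0 / (n - i)         # Kaplan-Meier factor at each event
                  est[round(M - t[i], 6)] = s      # flip back to concentration scale
          return est

      conc = [0.5, 0.5, 1.2, 2.0, 3.4, 0.8]        # 0.5 = detection limit
      detected = [False, False, True, True, True, True]
      print(km_left_censored(conc, detected))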

  20. The Gumbel hypothesis test for left censored observations using regional earthquake records as an example

    NASA Astrophysics Data System (ADS)

    Thompson, E. M.; Hewlett, J. B.; Baise, L. G.; Vogel, R. M.

    2011-01-01

    Annual maximum (AM) time series are incomplete (i.e., censored) when no events are included above the assumed censoring threshold (i.e., magnitude of completeness). We introduce a distributional hypothesis test for left-censored Gumbel observations based on the probability plot correlation coefficient (PPCC). Critical values of the PPCC hypothesis test statistic are computed from Monte Carlo simulations and are a function of sample size, censoring level, and significance level. When applied to a global catalog of earthquake observations, the left-censored Gumbel PPCC tests are unable to reject the Gumbel hypothesis for 45 of 46 seismic regions. We apply four different field significance tests for combining individual tests into a collective hypothesis test. None of the field significance tests are able to reject the global hypothesis that AM earthquake magnitudes arise from a Gumbel distribution. Because the field significance levels are not conclusive, we also compute the likelihood that these field significance tests are unable to reject the Gumbel model when the samples arise from a more complex distributional alternative. A power study documents that the censored Gumbel PPCC test is unable to reject some important and viable Generalized Extreme Value (GEV) alternatives. Thus, we cannot rule out the possibility that the global AM earthquake time series could arise from a GEV distribution with a finite upper bound, also known as a reverse Weibull distribution. Our power study also indicates that the binomial and uniform field significance tests are substantially more powerful than the more commonly used Bonferroni and false discovery rate multiple comparison procedures.
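
    A minimal, uncensored version of the PPCC machinery is sketched below in Python: the statistic is the correlation between the ordered sample and standard Gumbel quantiles at plotting positions, and critical values come from Monte Carlo simulation under the null. The Gringorten plotting positions and all settings here are assumptions; the paper's left-censored variant is not reproduced.

      import numpy as np
      from scipy import stats

      def gumbel_ppcc(sample):
          x = np.sort(sample)
          n = len(x)
          pp = (np.arange(1, n + 1) - 0.44) / (n + 0.12)   # Gringorten positions
          q = stats.gumbel_r.ppf(pp)                       # standard Gumbel quantiles
          return np.corrcoef(x, q)[0, 1]

      # Monte Carlo null distribution of the statistic for samples of size n
      rng = np.random.default_rng(3)
      n = 50
      null = [gumbel_ppcc(stats.gumbel_r.rvs(size=n, random_state=rng))
              for _ in range(2000)]
      crit = np.quantile(null, 0.05)                       # 5% critical value
      sample = stats.gumbel_r.rvs(size=n, random_state=rng)
      print("reject Gumbel:", gumbel_ppcc(sample) < crit)  # low PPCC => reject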

  1. Using multiple classifiers for predicting the risk of endovascular aortic aneurysm repair re-intervention through hybrid feature selection.

    PubMed

    Attallah, Omneya; Karthikesalingam, Alan; Holt, Peter Je; Thompson, Matthew M; Sayers, Rob; Bown, Matthew J; Choke, Eddie C; Ma, Xianghong

    2017-11-01

    Feature selection is essential in the medical area; however, the process becomes complicated in the presence of censoring, which is the unique characteristic of survival analysis. Most survival feature selection methods are based on Cox's proportional hazards model, even though machine learning classifiers would often be preferred. Such classifiers are less employed in survival analysis because censoring prevents them from being directly applied to survival data. Among the few works that employed machine learning classifiers, the partial logistic artificial neural network with auto-relevance determination is a well-known method that deals with censoring and performs feature selection for survival data. However, it depends on data replication to handle censoring, which leads to unbalanced and biased prediction results, especially in highly censored data; other methods cannot deal with high censoring at all. Therefore, in this article, a new hybrid feature selection method is proposed which presents a solution to high levels of censoring. It combines support vector machine, neural network, and K-nearest neighbor classifiers using simple majority voting and a new weighted majority voting method based on a survival metric to construct a multiple classifier system. The new hybrid feature selection process uses the multiple classifier system as a wrapper method and merges it with an iterated feature ranking filter method to further reduce features. Two endovascular aortic repair datasets containing 91% censored patients, collected from two centers, were used to construct a multicenter study to evaluate the performance of the proposed approach. The results showed that the proposed technique outperformed individual classifiers and variable selection methods based on Cox's model, such as the Akaike and Bayesian information criteria and the least absolute shrinkage and selection operator, in p values of the log-rank test, sensitivity, and concordance index. This indicates that the proposed classifier is more powerful in correctly predicting the risk of re-intervention, enabling doctors to select patients' future follow-up plans.
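
    The voting core of the multiple classifier system can be sketched as follows (Python, synthetic data). The weights below are placeholders: in the paper they derive from a survival metric, and the ensemble sits inside a hybrid wrapper/filter feature selection loop that is not reproduced here.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.neural_network import MLPClassifier
      from sklearn.svm import SVC

      X, y = make_classification(n_samples=300, n_features=10, random_state=0)
      members = [SVC(probability=True), MLPClassifier(max_iter=1000),
                 KNeighborsClassifier()]
      weights = np.array([0.4, 0.35, 0.25])      # placeholder survival-metric weights

      votes = np.zeros((len(X), 2))
      for clf, w in zip(members, weights):
          clf.fit(X, y)                          # in practice: fit on training folds only
          votes += w * clf.predict_proba(X)      # weighted soft vote

      pred = votes.argmax(axis=1)
      print("training accuracy:", (pred == y).mean())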

  2. Cosmic ray antimatter: Is it primary or secondary?

    NASA Technical Reports Server (NTRS)

    Stecker, F. W.; Protheroe, R. J.; Kazanas, D.

    1981-01-01

    The relative merits and difficulties of the primary and secondary origin hypotheses for the observed cosmic ray antiprotons, including the low energy measurement of Buffington, were examined. It is concluded that the cosmic ray antiproton data may be strong evidence for antimatter galaxies and baryon symmetric cosmology. The present antiproton data are consistent with a primary extragalactic component having an antiproton/proton ratio of approximately (3.2 ± 0.7) × 10^-3.

  3. Distributed reacceleration of cosmic rays

    NASA Technical Reports Server (NTRS)

    Wandel, Amri; Eichler, David; Letaw, John R.; Silberberg, Rein; Tsao, C. H.

    1985-01-01

    A model is developed in which cosmic rays, in addition to their initial acceleration by a strong shock, are continuously reaccelerated while propagating through the Galaxy. The equations describing this acceleration scheme are solved analytically and numerically. Solutions for the spectra of primary and secondary cosmic rays are given in a closed analytic form, allowing a rapid search in parameter space for viable propagation models with distributed reacceleration included. The observed boron-to-carbon ratio can be reproduced by the reacceleration theory over a range of escape parameters, some of them quite different from those of the standard leaky-box model. It is also shown that even a very modest amount of reacceleration by strong shocks causes the boron-to-carbon ratio to level off at sufficiently high energies.

  4. Covariate analysis of bivariate survival data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, L.E.

    1992-01-01

    The methods developed are used to analyze the effects of covariates on bivariate survival data when censoring and ties are present. The proposed method provides models for bivariate survival data that include differential covariate effects and censored observations. The proposed models are based on an extension of the univariate Buckley-James estimators which replace censored data points by their expected values, conditional on the censoring time and the covariates. For the bivariate situation, it is necessary to determine the expectation of the failure times for one component conditional on the failure or censoring time of the other component. Two different methods have been developed to estimate these expectations. In the semiparametric approach these expectations are determined from a modification of Burke's estimate of the bivariate empirical survival function. In the parametric approach censored data points are also replaced by their conditional expected values where the expected values are determined from a specified parametric distribution. The model estimation will be based on the revised data set, comprised of uncensored components and expected values for the censored components. The variance-covariance matrix for the estimated covariate parameters has also been derived for both the semiparametric and parametric methods. Data from the Demographic and Health Survey was analyzed by these methods. The two outcome variables are post-partum amenorrhea and breastfeeding; education and parity were used as the covariates. Both the covariate parameter estimates and the variance-covariance estimates for the semiparametric and parametric models will be compared. In addition, a multivariate test statistic was used in the semiparametric model to examine contrasts. The significance of the statistic was determined from a bootstrap distribution of the test statistic.

  5. The association between cinacalcet use and missed in-center hemodialysis treatment rate.

    PubMed

    Brunelli, Steven M; Sibbel, Scott; Dluzniewski, Paul J; Cooper, Kerry; Bensink, Mark E; Bradbury, Brian D

    2016-11-01

    Missed in-center hemodialysis treatments (MHT) are a general indicator of health status in hemodialysis patients. This analysis was conducted to estimate the association between cinacalcet use and MHT rate. We studied patients receiving hemodialysis and prescription benefits services from a large dialysis organization. Incident cinacalcet users were propensity score matched to controls on 31 demographic, clinical, and laboratory variables. We applied inverse probability (IP) of censoring and crossover weights to account for informative censoring. Weighted negative binomial modeling was used to estimate MHT rates and pooled logistic models were used to estimate the association between cinacalcet use and MHT. Baseline demographic and clinical variables included serum calcium, phosphorus, parathyroid hormone, and vitamin D use, and were balanced between 15,474 new cinacalcet users and 15,474 matched controls. In an analysis based on intention-to-treat principles, 40.8% of cinacalcet users and 46.5% of nonusers were censored. MHT rate was 13% lower among cinacalcet initiators versus controls: IP of censoring weighted incidence rate ratio was 0.87 (95% confidence interval [CI]: 0.84-0.90; p < 0.001). In analyses based on as-treated principles, 72.8% and 61.5% of cinacalcet users and nonusers, respectively, crossed over or were censored. MHT rate was 15% lower among cinacalcet initiators versus controls: IP of censoring/crossover weighted incidence rate ratio was 0.85 (95% CI: 0.82-0.87; p < 0.001). After controlling for indication and differential censoring, cinacalcet treatment was associated with lower MHT rates, which may reflect better health status. Copyright © 2016 John Wiley & Sons, Ltd.

  6. Final Report for Dynamic Models for Causal Analysis of Panel Data. Approaches to the Censoring Problem in Analysis of Event Histories. Part III, Chapter 2.

    ERIC Educational Resources Information Center

    Tuma, Nancy Brandon; Hannan, Michael T.

    The document, part of a series of chapters described in SO 011 759, considers the problem of censoring in the analysis of event-histories (data on dated events, including dates of change from one qualitative state to another). Censoring refers to the lack of information on events that occur before or after the period for which data are available.…

  7. Is Support of Censoring Controversial Media Content for the Good of Others? Sexual Strategies and Support of Censoring Pro-Alcohol Advertising.

    PubMed

    Zhang, Jinguang

    2017-01-01

    At least in the United States, there are widespread concerns with advertising that encourages alcohol consumption, and previous research explains those concerns as aiming to protect others from the harm of excessive alcohol use. Drawing on sexual strategies theory, we hypothesized that support of censoring pro-alcohol advertising is ultimately self-benefiting regardless of its altruistic effect at a proximate level. Excessive drinking positively correlates with having casual sex, and casual sex threatens monogamy, one of the major means with which people adopting a long-term sexual strategy increase their inclusive fitness. One way for long-term strategists to protect monogamy, and thus their reproductive interest, is to support censoring pro-alcohol advertising, thereby preventing others from becoming excessive drinkers (and consequently having casual sex) under media influence. Supporting this hypothesis, three studies consistently showed that restricted sociosexuality positively correlated with support of censoring pro-alcohol advertising before and after various value-, ideological-, and moral-foundation variables were controlled for. Also as predicted, Study 3 revealed a significant indirect effect of sociosexuality on censorship support through perceived media influence on others but not through perceived media influence on self. These findings further supported a self-interest analysis of issue opinions, extended third-person-effect research on support of censoring pro-alcohol advertising, and suggested a novel approach to analyzing media censorship support.

  8. A Novel Approach in the Weakly Interacting Massive Particle Quest: Cross-correlation of Gamma-Ray Anisotropies and Cosmic Shear

    NASA Astrophysics Data System (ADS)

    Camera, Stefano; Fornasa, Mattia; Fornengo, Nicolao; Regis, Marco

    2013-07-01

    Both cosmic shear and cosmological gamma-ray emission stem from the presence of dark matter (DM) in the universe: DM structures are responsible for the bending of light in the weak-lensing regime and those same objects can emit gamma rays, either because they host astrophysical sources (active galactic nuclei or star-forming galaxies) or directly by DM annihilations (or decays, depending on the properties of the DM particle). Such gamma rays should therefore exhibit strong correlation with the cosmic shear signal. In this Letter, we compute the cross-correlation angular power spectrum of cosmic shear and gamma rays produced by the annihilation/decay of weakly interacting massive particle DM, as well as by astrophysical sources. We show that this observable provides novel information on the composition of the extragalactic gamma-ray background (EGB), since the amplitude and shape of the cross-correlation signal strongly depend on which class of sources is responsible for the gamma-ray emission. If the DM contribution to the EGB is significant (at least in a definite energy range), although compatible with current observational bounds, its strong correlation with the cosmic shear makes such signal potentially detectable by combining Fermi Large Area Telescope data with forthcoming galaxy surveys, like the Dark Energy Survey and Euclid. At the same time, the same signal would demonstrate that the weak-lensing observables are indeed due to particle DM and not to possible modifications of general relativity.

  9. A semiparametric Bayesian proportional hazards model for interval censored data with frailty effects.

    PubMed

    Henschel, Volkmar; Engel, Jutta; Hölzel, Dieter; Mansmann, Ulrich

    2009-02-10

    Multivariate analysis of interval censored event data based on classical likelihood methods is notoriously cumbersome, and likelihood inference for models which additionally include random effects is not available at all. Existing algorithms pose practical problems: matrix inversion, slow convergence, and no assessment of statistical uncertainty. MCMC procedures combined with imputation are used to implement hierarchical models for interval censored data within a Bayesian framework. Two examples from clinical practice demonstrate the handling of clustered interval censored event times as well as multilayer random effects for inter-institutional quality assessment. The software developed is called survBayes and is freely available at CRAN. The proposed software supports the solution of complex analyses in many fields of clinical epidemiology as well as health services research.
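
    The core MCMC-with-imputation idea can be illustrated with a deliberately simple toy model: exponential event times observed only as intervals, with a conjugate Gamma prior on the rate. The Gibbs sampler alternates between imputing the latent times from their truncated conditional distribution and updating the rate. survBayes implements a far richer semiparametric frailty model, so everything below is a sketch under stated assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic interval-censored data: true rate 0.5, each event only known
    # to lie between two inspection times L < T < R.
    n, lam_true = 300, 0.5
    t_true = rng.exponential(1 / lam_true, n)
    L = np.floor(t_true)            # last inspection before the event
    R = L + 1.0                     # first inspection after the event

    def sample_trunc_exp(lam, lo, hi, rng):
        """Inverse-CDF draw from Exp(lam) truncated to (lo, hi)."""
        u = rng.uniform(size=lo.shape)
        p = np.exp(-lam * lo) - u * (np.exp(-lam * lo) - np.exp(-lam * hi))
        return -np.log(p) / lam

    # Gibbs sampler: alternate imputing latent times and updating the rate,
    # with a Gamma(a0, b0) prior on the rate.
    a0, b0 = 1.0, 1.0
    lam = 1.0
    draws = []
    for it in range(2000):
        t = sample_trunc_exp(lam, L, R, rng)           # imputation step
        lam = rng.gamma(a0 + n, 1.0 / (b0 + t.sum()))  # conjugate update
        if it >= 500:                                  # discard burn-in
            draws.append(lam)

    print("posterior mean rate:", np.mean(draws), "(true:", lam_true, ")")
    ```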

  10. Model Calibration with Censored Data

    DOE PAGES

    Cao, Fang; Ba, Shan; Brenneman, William A.; ...

    2017-06-28

    Here, the purpose of model calibration is to make the model predictions closer to reality. The classical Kennedy-O'Hagan approach is widely used for model calibration, which can account for the inadequacy of the computer model while simultaneously estimating the unknown calibration parameters. In many applications, the phenomenon of censoring occurs when the exact outcome of the physical experiment is not observed, but is only known to fall within a certain region. In such cases, the Kennedy-O'Hagan approach cannot be used directly, and we propose a method to incorporate the censoring information when performing model calibration. The method is applied to study the compression phenomenon of liquid inside a bottle. The results show significant improvement over the traditional calibration methods, especially when the number of censored observations is large.

  11. Statistical analysis tables for truncated or censored samples

    NASA Technical Reports Server (NTRS)

    Cohen, A. C.; Cooley, C. G.

    1971-01-01

    This compilation describes the characteristics of truncated and censored samples and presents six illustrations of the practical use of the tables in computing mean and variance estimates for the normal distribution from selected samples.
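
    For a singly right-censored normal sample, the same mean and variance estimates can be obtained numerically by maximum likelihood, with each censored point contributing a survival-function term to the likelihood; the tables serve this purpose without iteration. A minimal sketch on synthetic data (the cutoff and parameters are arbitrary):

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    rng = np.random.default_rng(2)

    # Singly right-censored normal sample: values above the cutoff are only
    # known to exceed it.
    mu_true, sd_true, cut = 10.0, 2.0, 11.5
    x = rng.normal(mu_true, sd_true, 100)
    obs = np.minimum(x, cut)
    censored = x > cut

    def negloglik(theta):
        mu, log_sd = theta
        sd = np.exp(log_sd)
        ll_obs = norm.logpdf(obs[~censored], mu, sd).sum()
        ll_cen = norm.logsf(cut, mu, sd) * censored.sum()  # P(X > cut) per censored point
        return -(ll_obs + ll_cen)

    res = minimize(negloglik, x0=[obs.mean(), np.log(obs.std())])
    print("MLE mean, sd:", res.x[0], np.exp(res.x[1]))
    ```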

  12. Nonparametric and Semiparametric Regression Estimation for Length-biased Survival Data

    PubMed Central

    Shen, Yu; Ning, Jing; Qin, Jing

    2016-01-01

    For the past several decades, nonparametric and semiparametric modeling for conventional right-censored survival data has been investigated intensively under a noninformative censoring mechanism. However, these methods may not be applicable for analyzing right-censored survival data that arise from prevalent cohorts when the failure times are subject to length-biased sampling. This review article is intended to provide a summary of some newly developed methods as well as established methods for analyzing length-biased data. PMID:27086362

  13. Constraints on cosmic superstrings from Kaluza-Klein emission.

    PubMed

    Dufaux, Jean-François

    2012-07-06

    Cosmic superstrings interact generically with a tower of light and/or strongly coupled Kaluza-Klein (KK) modes associated with the geometry of the internal space. We study the production of KK particles by cosmic superstring loops, and show that it is constrained by big bang nucleosynthesis. We study the resulting constraints in the parameter space of the underlying string theory model and highlight their complementarity with the regions that can be probed by current and upcoming gravitational wave experiments.

  14. Marginal regression analysis of recurrent events with coarsened censoring times.

    PubMed

    Hu, X Joan; Rosychuk, Rhonda J

    2016-12-01

    Motivated by an ongoing pediatric mental health care (PMHC) study, this article presents weakly structured methods for analyzing doubly censored recurrent event data where only coarsened information on censoring is available. The study extracted administrative records of emergency department visits from provincial health administrative databases. The available information of each individual subject is limited to a subject-specific time window determined up to concealed data. To evaluate time-dependent effect of exposures, we adapt the local linear estimation with right censored survival times under the Cox regression model with time-varying coefficients (cf. Cai and Sun, Scandinavian Journal of Statistics 2003, 30, 93-111). We establish the pointwise consistency and asymptotic normality of the regression parameter estimator, and examine its performance by simulation. The PMHC study illustrates the proposed approach throughout the article. © 2016, The International Biometric Society.

  15. Total-reflection X-ray fluorescence studies of trace elements in biomedical samples

    NASA Astrophysics Data System (ADS)

    Kubala-Kukuś, A.; Braziewicz, J.; Pajek, M.

    2004-08-01

    Application of the total-reflection X-ray fluorescence (TXRF) analysis in studies of trace element contents in biomedical samples is discussed in the following aspects: (i) the nature of trace element concentration distributions, (ii) a censoring approach to the detection limits, and (iii) a comparison of two sets of censored data. The paper summarizes recent results on these topics, in particular the lognormal, or more generally logstable, nature of the concentration distributions of trace elements, random left-censoring and the Kaplan-Meier approach accounting for detection limits and, finally, the application of the logrank test to compare the censored concentrations measured for two groups. These aspects, which are of importance for applications of TXRF in different fields, are discussed here in the context of TXRF studies of trace elements in various samples of medical interest.
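
    The Kaplan-Meier treatment of detection limits can be sketched with the standard flipping trick: subtracting each value from a large constant turns left-censored nondetects into right-censored observations, to which the ordinary estimator applies. A toy version using the lifelines package on synthetic lognormal concentrations (the detection limit and the constant M are arbitrary choices here, not values from the paper):

    ```python
    import numpy as np
    from lifelines import KaplanMeierFitter

    # Left-censored concentrations: values below the detection limit (DL) are
    # only known to be < DL. Flipping x -> M - x turns left censoring into
    # right censoring, so the standard Kaplan-Meier estimator applies.
    rng = np.random.default_rng(3)
    conc = rng.lognormal(mean=0.0, sigma=0.8, size=120)
    dl = 0.6
    observed = conc >= dl                    # detected at or above the limit
    x = np.where(observed, conc, dl)         # nondetects recorded at the DL

    M = x.max() + 1.0                        # any constant larger than all values
    kmf = KaplanMeierFitter()
    kmf.fit(M - x, event_observed=observed)  # right-censored after the flip

    # Survival of (M - X) evaluated at M - t equals P(X <= t), the CDF of X.
    t = 1.0
    cdf_at_t = kmf.predict(M - t)
    print("estimated P(concentration <= %.1f):" % t, cdf_at_t)
    ```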

  16. Application of a Weighted Regression Model for Reporting Nutrient and Sediment Concentrations, Fluxes, and Trends in Concentration and Flux for the Chesapeake Bay Nontidal Water-Quality Monitoring Network, Results Through Water Year 2012

    USGS Publications Warehouse

    Chanat, Jeffrey G.; Moyer, Douglas L.; Blomquist, Joel D.; Hyer, Kenneth E.; Langland, Michael J.

    2016-01-13

    Inconsistencies related to changing laboratory methods were also examined via two manipulative experiments. In the first experiment, increasing and decreasing “stair-step” patterns of changes in censoring level, overall representing a factor-of-five change in the laboratory reporting limit, were artificially imposed on a 27-year record with no censoring and a period-of-record concentration trend of –68.4 percent. Trends estimated on the basis of the manipulated records were broadly similar to the original trend (–63.6 percent for decreasing censoring levels and –70.3 percent for increasing censoring levels), lending a degree of confidence that the survival regression routines upon which WRTDS is based are generally robust to data censoring. The second experiment considered an abrupt disappearance of low-concentration observations of total phosphorus, associated with a laboratory method change and not reflected through censoring, near the middle of a 28-year record. By process of elimination, an upward shift in the estimated flow-normalized concentration trend line around the same time was identified as a likely artifact resulting from the laboratory method change, although a contemporaneous change in watershed processes cannot be ruled out. Decisions as to how to treat records with potential sampling protocol or laboratory methods-related artifacts should be made on a case-by-case basis, and trend results should be appropriately qualified.

  17. Estimation of indirect effect when the mediator is a censored variable.

    PubMed

    Wang, Jian; Shete, Sanjay

    2017-01-01

    A mediation model explores the direct and indirect effects of an initial variable (X) on an outcome variable (Y) by including a mediator (M). In many realistic scenarios, investigators observe censored data instead of the complete data. Current research in mediation analysis for censored data focuses mainly on censored outcomes, but not censored mediators. In this study, we proposed a strategy based on the accelerated failure time model and a multiple imputation approach. We adapted a measure of the indirect effect for the mediation model with a censored mediator, which can assess the indirect effect at both the group and individual levels. Based on simulation, we established the bias in the estimations of different paths (i.e. the effects of X on M [a], of M on Y [b] and of X on Y given mediator M [c']) and indirect effects when analyzing the data using the existing approaches, including a naïve approach implemented in software such as Mplus, complete-case analysis, and the Tobit mediation model. We conducted simulation studies to investigate the performance of the proposed strategy compared to that of the existing approaches. The proposed strategy accurately estimates the coefficients of different paths, indirect effects and percentages of the total effects mediated. We applied these mediation approaches to the study of SNPs, age at menopause and fasting glucose levels. Our results indicate that there is no indirect effect of association between SNPs and fasting glucose level that is mediated through the age at menopause.
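
    A highly simplified sketch of the strategy's two ingredients: a censored-normal (Tobit-style) fit standing in for the accelerated failure time model, followed by multiple imputation of the censored mediator values from the truncated conditional distribution. For brevity the imputation model below ignores the outcome Y, which a proper implementation would condition on, and the data are synthetic:

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm, truncnorm

    rng = np.random.default_rng(4)

    # Synthetic mediation data with a right-censored mediator M.
    n = 500
    X = rng.normal(size=n)
    M_true = 0.6 * X + rng.normal(0, 1, n)            # path a = 0.6
    Y = 0.5 * M_true + 0.3 * X + rng.normal(0, 1, n)  # b = 0.5, c' = 0.3
    c = 1.0                                           # fixed censoring point
    M = np.minimum(M_true, c)
    delta = M_true <= c

    # Step 1: censored-normal (Tobit-style) fit of M on X.
    def nll(theta):
        b0, b1, log_s = theta
        s = np.exp(log_s)
        mu = b0 + b1 * X
        return -(norm.logpdf(M[delta], mu[delta], s).sum()
                 + norm.logsf(c, mu[~delta], s).sum())

    b0, b1, log_s = minimize(nll, x0=[0, 0, 0]).x
    s = np.exp(log_s)

    # Steps 2-3: multiply impute censored M from the truncated conditional,
    # re-fit the mediation paths each time, and pool the indirect effect a*b.
    ab = []
    for _ in range(20):
        mu_c = b0 + b1 * X[~delta]
        a_std = (c - mu_c) / s                        # truncate to M > c
        M_imp = M.copy()
        M_imp[~delta] = truncnorm.rvs(a_std, np.inf, loc=mu_c, scale=s,
                                      random_state=rng)
        a_hat = np.polyfit(X, M_imp, 1)[0]
        D = np.column_stack([np.ones(n), M_imp, X])
        _, b_hat, _ = np.linalg.lstsq(D, Y, rcond=None)[0]
        ab.append(a_hat * b_hat)

    print("pooled indirect effect a*b:", np.mean(ab), "(true: 0.30)")
    ```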

  18. Annotation, submission and screening of repetitive elements in Repbase: RepbaseSubmitter and Censor.

    PubMed

    Kohany, Oleksiy; Gentles, Andrew J; Hankus, Lukasz; Jurka, Jerzy

    2006-10-25

    Repbase is a reference database of eukaryotic repetitive DNA, which includes prototypic sequences of repeats and basic information described in annotations. Updating and maintenance of the database requires specialized tools, which we have created and made available for use with Repbase, and which may be useful as a template for other curated databases. We describe the software tools RepbaseSubmitter and Censor, which are designed to facilitate updating and screening the content of Repbase. RepbaseSubmitter is a java-based interface for formatting and annotating Repbase entries. It eliminates many common formatting errors, and automates actions such as calculation of sequence lengths and composition, thus facilitating curation of Repbase sequences. In addition, it has several features for predicting protein coding regions in sequences; searching and including Pubmed references in Repbase entries; and searching the NCBI taxonomy database for correct inclusion of species information and taxonomic position. Censor is a tool to rapidly identify repetitive elements by comparison to known repeats. It uses WU-BLAST for speed and sensitivity, and can conduct DNA-DNA, DNA-protein, or translated DNA-translated DNA searches of genomic sequence. Defragmented output includes a map of repeats present in the query sequence, with the options to report masked query sequence(s), repeat sequences found in the query, and alignments. Censor and RepbaseSubmitter are available as both web-based services and downloadable versions. They can be found at http://www.girinst.org/repbase/submission.html (RepbaseSubmitter) and http://www.girinst.org/censor/index.php (Censor).

  19. Acceleration and propagation of cosmic rays

    NASA Astrophysics Data System (ADS)

    Fransson, C.; Epstein, R. I.

    1980-11-01

    Two general categories of cosmic ray models are discussed, concomitant acceleration and propagation (CAP) models and sequential acceleration and propagation (SAP) models. These normally correspond to the cosmic rays being continuously accelerated in the interstellar medium or being rapidly produced by discrete sources or strong shock waves, respectively. For the CAP models it is found that the ratio of the predominantly secondary nuclei (Li + Be + B + N) to the predominantly primary nuclei (C + O) varies by less than a factor of 1.5 between 1 and 100 GeV per nucleon. This is at variance with current measurements. It thus appears that the evolution of cosmic rays is best described by SAP models.

  20. Ultra-heavy cosmic rays: Theoretical implications of recent observations

    NASA Technical Reports Server (NTRS)

    Blake, J. B.; Hainebach, K. L.; Schramm, D. N.; Anglin, J. D.

    1977-01-01

    Extreme ultraheavy cosmic ray observations (Z ≥ 70) are compared with r-process models. A detailed cosmic ray propagation calculation is used to transform the calculated source distributions to those observed at the earth. The r-process production abundances are calculated using different mass formulae and beta-rate formulae; an empirical estimate based on the observed solar system abundances is used also. There is a continued strong indication of an r-process dominance in the extreme ultra-heavy cosmic rays. However, it is shown that the observed high actinide/Pt ratio in the cosmic rays cannot be fit with the same r-process calculation which also fits the solar system material. This result suggests that the cosmic rays probably undergo some preferential acceleration in addition to the apparent general enrichment in heavy (r-process) material. An estimate is also made of the expected relative abundance of superheavy elements in the cosmic rays if the anomalous heavy xenon in carbonaceous chondrites is due to a fissioning superheavy element.

  1. Effect of the cosmological constant on the deflection angle by a rotating cosmic string

    NASA Astrophysics Data System (ADS)

    Jusufi, Kimet; Övgün, Ali

    2018-03-01

    We report the effect of the cosmological constant and the internal energy density of a cosmic string on the deflection angle of light in the spacetime of a rotating cosmic string with internal structure. We first revisit the deflection angle by a rotating cosmic string and then provide a generalization using the geodesic equations and the Gauss-Bonnet theorem. We show there is an agreement between the two methods when employing higher-order terms of the linear mass density of the cosmic string. By modifying the integration domain for the global conical topology, we resolve the inconsistency between these two methods previously reported in the literature. We show that the deflection angle is not affected by the rotation of the cosmic string; however, the cosmological constant Λ strongly affects the deflection angle, which generalizes the well-known result.

  2. Evidence for a scaling solution in cosmic-string evolution

    NASA Technical Reports Server (NTRS)

    Bennett, David P.; Bouchet, Francois R.

    1988-01-01

    Numerical simulations are used to study the most fundamental issue of cosmic-string evolution: the existence of a scaling solution. Strong evidence is found that a scaling solution does indeed exist. This justifies the main assumption on which the cosmic-string theories of galaxy formation are based. The main conclusion coincides with that of Albrecht and Turok (1985), but the results are not consistent with theirs. In fact, the results indicate that the details of string evolution are very different from the standard dogma.

  3. ΛGR Centennial: Cosmic Web in Dark Energy Background

    NASA Astrophysics Data System (ADS)

    Chernin, A. D.

    The basic building blocks of the Cosmic Web are groups and clusters of galaxies, super-clusters (pancakes) and filaments embedded in the universal dark energy background. The background produces antigravity, and the antigravity effect is strong in groups, clusters and superclusters. Antigravity is very weak in filaments where matter (dark matter and baryons) produces gravity dominating in the filament internal dynamics. Gravity-antigravity interplay on the large scales is a grandiose phenomenon predicted by ΛGR theory and seen in modern observations of the Cosmic Web.

  4. Research participant compensation: A matter of statistical inference as well as ethics.

    PubMed

    Swanson, David M; Betensky, Rebecca A

    2015-11-01

    The ethics of compensation of research subjects for participation in clinical trials has been debated for years. One ethical issue of concern is variation among subjects in the level of compensation for identical treatments. Surprisingly, the impact of variation on the statistical inferences made from trial results has not been examined. We seek to identify how variation in compensation may influence any existing dependent censoring in clinical trials, thereby also influencing inference about the survival curve, hazard ratio, or other measures of treatment efficacy. In simulation studies, we consider a model for how compensation structure may influence the censoring model. Under existing dependent censoring, we estimate survival curves under different compensation structures and observe how these structures induce variability in the estimates. We show through this model that if the compensation structure affects the censoring model and dependent censoring is present, then variation in that structure induces variation in the estimates and affects the accuracy of estimation and inference on treatment efficacy. From the perspectives of both ethics and statistical inference, standardization and transparency in the compensation of participants in clinical trials are warranted. Copyright © 2015 Elsevier Inc. All rights reserved.
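
    The mechanism is easy to exhibit in simulation: when time to dropout shares a frailty with the event time, the Kaplan-Meier estimate shifts as the strength of that dependence changes. The dropout model below is purely hypothetical and only mimics a compensation-structure effect on the censoring model:

    ```python
    import numpy as np
    from lifelines import KaplanMeierFitter

    rng = np.random.default_rng(5)

    # Frail subjects (short survival) drop out sooner; the dropout scale plays
    # the role of a compensation-structure effect on the censoring model.
    n = 2000
    frailty = rng.gamma(2.0, 0.5, n)
    t_event = rng.exponential(frailty)               # survival tied to frailty

    for dropout_scale in (5.0, 1.0):                 # weak vs. strong dependence
        t_cens = rng.exponential(dropout_scale * frailty)
        t_obs = np.minimum(t_event, t_cens)
        observed = t_event <= t_cens
        kmf = KaplanMeierFitter().fit(t_obs, observed)
        print("dropout scale", dropout_scale,
              "-> KM median estimate:", kmf.median_survival_time_)
    ```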

  5. Estimation of distributional parameters for censored trace level water quality data: 2. Verification and applications

    USGS Publications Warehouse

    Helsel, Dennis R.; Gilliom, Robert J.

    1986-01-01

    Estimates of distributional parameters (mean, standard deviation, median, interquartile range) are often desired for data sets containing censored observations. Eight methods for estimating these parameters have been evaluated by R. J. Gilliom and D. R. Helsel (this issue) using Monte Carlo simulations. To verify those findings, the same methods are now applied to actual water quality data. The best method (lowest root-mean-squared error (rmse)) over all parameters, sample sizes, and censoring levels is log probability regression (LR), the method found best in the Monte Carlo simulations. Best methods for estimating moment or percentile parameters separately are also identical to those found in the simulations. Reliability of these estimates can be expressed as confidence intervals using rmse and bias values taken from the simulation results. Finally, a new simulation study shows that the best methods for estimating uncensored sample statistics from censored data sets are identical to those for estimating population parameters. Thus this study and the companion study by Gilliom and Helsel form the basis for making the best possible estimates of either population parameters or sample statistics from censored water quality data, and for assessments of their reliability.

  6. The Cosmological Evolution of Radio Sources with CENSORS

    NASA Technical Reports Server (NTRS)

    Brookes, Mairi; Best, Philip; Peacock, John; Dunlop, James; Rottgering, Huub

    2006-01-01

    The CENSORS survey, selected from the NVSS, has been followed up using EIS, K-band imaging and spectroscopic observations to produce a radio sample capable of probing the source density in the regime z > 2.5. With a current spectroscopic completeness of 62%, CENSORS has been used in direct modeling of RLF evolution and in V/V_max tests. There is evidence for a shallow decline in the number density of sources in the luminosity range 10^26-10^27 W Hz^-1 at 1.4 GHz.

  7. Dental Age Estimation (DAE): Data management for tooth development stages including the third molar. Appropriate censoring of Stage H, the final stage of tooth development.

    PubMed

    Roberts, Graham J; McDonald, Fraser; Andiappan, Manoharan; Lucas, Victoria S

    2015-11-01

    The final stage of dental development of third molars is usually helpful to indicate whether or not a subject is aged over 18 years. A complexity is that the final stage of development is unlimited in its upper border. Investigators usually select an inappropriate upper age limit or censor point for this tooth development stage. The literature was searched for appropriate data sets for dental age estimation and those that provided the count (n), the mean (x̄), and the standard deviation (sd) for each of the tooth development stages. The Demirjian G and Demirjian H stages were used for this study. Upper and lower limits of the Stage G and Stage H data were calculated by limiting the data to plus or minus three standard deviations from the mean. The upper border of Stage H was limited by appropriate censoring at the maximum value for Stage G. The maximum age at attainment from published data, for Stage H, ranged from 22.60 years to 34.50 years. These data were explored to demonstrate how censoring provides an estimate for the correct maximum age for the final stage, Stage H, as 21.64 years for UK Caucasians. This study shows that confining the data array of individual tooth development stages to ± 3sd provides a reliable and logical way of censoring the data for tooth development stages with a Normal distribution of data. For Stage H this is inappropriate, as it is unbounded in its upper limit; the use of a censored data array for Stage H using percentile values is appropriate. This increases the reliability of using third molar Stage H alone to determine whether or not an individual is over 18 years old. For Stage H, individual ancestral groups should be censored using the same technique. Copyright © 2015 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  8. Probabilistic PCA of censored data: accounting for uncertainties in the visualization of high-throughput single-cell qPCR data.

    PubMed

    Buettner, Florian; Moignard, Victoria; Göttgens, Berthold; Theis, Fabian J

    2014-07-01

    High-throughput single-cell quantitative real-time polymerase chain reaction (qPCR) is a promising technique allowing for new insights in complex cellular processes. However, the PCR reaction can be detected only up to a certain detection limit, whereas failed reactions could be due to low or absent expression, and the true expression level is unknown. Because this censoring can occur for high proportions of the data, it is one of the main challenges when dealing with single-cell qPCR data. Principal component analysis (PCA) is an important tool for visualizing the structure of high-dimensional data as well as for identifying subpopulations of cells. However, to date it is not clear how to perform a PCA of censored data. We present a probabilistic approach that accounts for the censoring and evaluate it for two typical datasets containing single-cell qPCR data. We use the Gaussian process latent variable model framework to account for censoring by introducing an appropriate noise model and allowing a different kernel for each dimension. We evaluate this new approach for two typical qPCR datasets (of mouse embryonic stem cells and blood stem/progenitor cells, respectively) by performing linear and non-linear probabilistic PCA. Taking the censoring into account results in a 2D representation of the data, which better reflects its known structure: in both datasets, our new approach results in a better separation of known cell types and is able to reveal subpopulations in one dataset that could not be resolved using standard PCA. The implementation was based on the existing Gaussian process latent variable model toolbox (https://github.com/SheffieldML/GPmat); extensions for noise models and kernels accounting for censoring are available at http://icb.helmholtz-muenchen.de/censgplvm. © The Author 2014. Published by Oxford University Press. All rights reserved.

  9. Probabilistic PCA of censored data: accounting for uncertainties in the visualization of high-throughput single-cell qPCR data

    PubMed Central

    Buettner, Florian; Moignard, Victoria; Göttgens, Berthold; Theis, Fabian J.

    2014-01-01

    Motivation: High-throughput single-cell quantitative real-time polymerase chain reaction (qPCR) is a promising technique allowing for new insights in complex cellular processes. However, the PCR reaction can be detected only up to a certain detection limit, whereas failed reactions could be due to low or absent expression, and the true expression level is unknown. Because this censoring can occur for high proportions of the data, it is one of the main challenges when dealing with single-cell qPCR data. Principal component analysis (PCA) is an important tool for visualizing the structure of high-dimensional data as well as for identifying subpopulations of cells. However, to date it is not clear how to perform a PCA of censored data. We present a probabilistic approach that accounts for the censoring and evaluate it for two typical datasets containing single-cell qPCR data. Results: We use the Gaussian process latent variable model framework to account for censoring by introducing an appropriate noise model and allowing a different kernel for each dimension. We evaluate this new approach for two typical qPCR datasets (of mouse embryonic stem cells and blood stem/progenitor cells, respectively) by performing linear and non-linear probabilistic PCA. Taking the censoring into account results in a 2D representation of the data, which better reflects its known structure: in both datasets, our new approach results in a better separation of known cell types and is able to reveal subpopulations in one dataset that could not be resolved using standard PCA. Availability and implementation: The implementation was based on the existing Gaussian process latent variable model toolbox (https://github.com/SheffieldML/GPmat); extensions for noise models and kernels accounting for censoring are available at http://icb.helmholtz-muenchen.de/censgplvm. Contact: fbuettner.phys@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24618470

  10. Revealing the Cosmic Web-dependent Halo Bias

    NASA Astrophysics Data System (ADS)

    Yang, Xiaohu; Zhang, Youcai; Lu, Tianhuan; Wang, Huiyuan; Shi, Feng; Tweed, Dylan; Li, Shijie; Luo, Wentao; Lu, Yi; Yang, Lei

    2017-10-01

    Halo bias is one of the key ingredients of halo models. At a given redshift, it was shown to depend, to first order, only on the halo mass. In this study, four types of cosmic web environments—clusters, filaments, sheets, and voids—are defined within a state-of-the-art high-resolution N-body simulation. Within these environments, we use both halo-dark matter cross correlation and halo-halo autocorrelation functions to probe the clustering properties of halos. The nature of the halo bias differs strongly between the four different cosmic web environments described here. With respect to the overall population, halos in clusters have significantly lower biases in the 10^11.0-10^13.5 h^-1 M⊙ mass range. In other environments, however, halos show extremely enhanced biases up to a factor of 10 in voids for halos of mass ~10^12.0 h^-1 M⊙. Such a strong cosmic web environment dependence in the halo bias may play an important role in future cosmological and galaxy formation studies. Within this cosmic web framework, the age dependency of halo bias is found to be significant only in clusters and filaments for relatively small halos ≲ 10^12.5 h^-1 M⊙.

  11. Analyzing survival curves at a fixed point in time for paired and clustered right-censored data

    PubMed Central

    Su, Pei-Fang; Chi, Yunchan; Lee, Chun-Yi; Shyr, Yu; Liao, Yi-De

    2018-01-01

    In clinical trials, information about certain time points may be of interest in making decisions about treatment effectiveness. Rather than comparing entire survival curves, researchers can focus on the comparison at fixed time points that may have a clinical utility for patients. For two independent samples of right-censored data, Klein et al. (2007) compared survival probabilities at a fixed time point by studying a number of tests based on some transformations of the Kaplan-Meier estimators of the survival function. However, to compare the survival probabilities at a fixed time point for paired right-censored data or clustered right-censored data, their approach would need to be modified. In this paper, we extend the statistics to accommodate the possible within-paired correlation and within-clustered correlation, respectively. We use simulation studies to present comparative results. Finally, we illustrate the implementation of these methods using two real data sets. PMID:29456280
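
    For the two-independent-sample setting that Klein et al. (2007) start from, a fixed-time-point comparison reduces to Kaplan-Meier estimates with Greenwood variances and a variance-stabilizing transformation; the paired and clustered extensions modify the variance term. A sketch assuming no tied event times, on synthetic data:

    ```python
    import numpy as np

    def km_with_greenwood(time, event, t0):
        """Kaplan-Meier estimate and Greenwood variance at a fixed time t0.

        Assumes no tied event times, which is adequate for this sketch.
        """
        order = np.argsort(time)
        time, event = time[order], event[order]
        n = len(time)
        s, var_sum = 1.0, 0.0
        for i, (t, d) in enumerate(zip(time, event)):
            if t > t0:
                break
            at_risk = n - i
            if d:
                s *= (at_risk - 1) / at_risk
                var_sum += 1.0 / (at_risk * (at_risk - 1))
        return s, s**2 * var_sum    # Greenwood's formula

    rng = np.random.default_rng(6)
    t1, t2 = rng.exponential(1.0, 150), rng.exponential(1.4, 150)
    c1, c2 = rng.exponential(2.0, 150), rng.exponential(2.0, 150)
    y1, d1 = np.minimum(t1, c1), t1 <= c1
    y2, d2 = np.minimum(t2, c2), t2 <= c2

    t0 = 1.0
    s1, v1 = km_with_greenwood(y1, d1, t0)
    s2, v2 = km_with_greenwood(y2, d2, t0)

    # Complementary log-log transform stabilizes the comparison near 0 and 1.
    g1, g2 = np.log(-np.log(s1)), np.log(-np.log(s2))
    vg1 = v1 / (s1 * np.log(s1)) ** 2
    vg2 = v2 / (s2 * np.log(s2)) ** 2
    z = (g1 - g2) / np.sqrt(vg1 + vg2)
    print("survival at t0:", s1, s2, " z-statistic:", z)
    ```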

  12. Nonparametric autocovariance estimation from censored time series by Gaussian imputation.

    PubMed

    Park, Jung Wook; Genton, Marc G; Ghosh, Sujit K

    2009-02-01

    One of the most frequently used methods to model the autocovariance function of a second-order stationary time series is to use the parametric framework of autoregressive and moving average models developed by Box and Jenkins. However, such parametric models, though very flexible, may not always be adequate to model autocovariance functions with sharp changes. Furthermore, if the data do not follow the parametric model and are censored at a certain value, the estimation results may not be reliable. We develop a Gaussian imputation method to estimate an autocovariance structure via nonparametric estimation of the autocovariance function in order to address both censoring and incorrect model specification. We demonstrate the effectiveness of the technique in terms of bias and efficiency with simulations under various rates of censoring and underlying models. We describe its application to a time series of silicon concentrations in the Arctic.
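
    A single-pass caricature of the imputation idea: censored points are replaced by the conditional mean of a normal distribution below the cutoff before the autocovariance is computed. The actual method iterates and conditions on neighboring observations; the AR(1) series, cutoff, and moment estimates below are synthetic simplifications:

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(7)

    # AR(1) series censored below a detection limit c.
    n, phi, c = 1000, 0.7, -0.5
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0, 1)
    obs = np.maximum(x, c)
    censored = x < c

    # Single imputation step: replace censored points by the conditional mean
    # of a normal below the cutoff (a full scheme would iterate and condition
    # on neighboring values as well).
    mu, sd = obs[~censored].mean(), obs[~censored].std()
    z = (c - mu) / sd
    cond_mean = mu - sd * norm.pdf(z) / norm.cdf(z)   # E[X | X < c]
    x_imp = np.where(censored, cond_mean, obs)

    def autocov(series, lag):
        s = series - series.mean()
        return np.mean(s[:-lag] * s[lag:]) if lag else np.mean(s * s)

    for k in range(4):
        print("lag", k, "autocovariance:", autocov(x_imp, k))
    ```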

  13. Protecting the Innocence of Youth: Moral Sanctity Values Underlie Censorship From Young Children.

    PubMed

    Anderson, Rajen A; Masicampo, E J

    2017-11-01

    Three studies examined the relationship between people's moral values (drawing on moral foundations theory) and their willingness to censor immoral acts from children. Results revealed that diverse moral values did not predict censorship judgments. It was not the case that participants who valued loyalty and authority, respectively, sought to censor depictions of disloyal and disobedient acts. Rather, censorship intentions were predicted by a single moral value: sanctity. The more people valued sanctity, the more willing they were to censor from children, regardless of the types of violations depicted (impurity, disloyalty, disobedience, etc.). Furthermore, people who valued sanctity objected to indecent exposure only to apparently innocent and pure children, those who were relatively young and who had not been previously exposed to immoral acts. These data suggest that sanctity, purity, and the preservation of innocence underlie intentions to censor from young children.

  14. Cosmic strings and the large-scale structure

    NASA Technical Reports Server (NTRS)

    Stebbins, Albert

    1988-01-01

    A possible problem for cosmic string models of galaxy formation is presented. If very large voids are common and if loop fragmentation is not much more efficient than presently believed, then it may be impossible for string scenarios to produce the observed large-scale structure with Ω_0 = 1 and without strong environmental biasing.

  15. A joint model of persistent human papillomavirus infection and cervical cancer risk: Implications for cervical cancer screening

    PubMed Central

    Katki, Hormuzd A.; Cheung, Li C.; Fetterman, Barbara; Castle, Philip E.; Sundaram, Rajeshwari

    2014-01-01

    Summary: New cervical cancer screening guidelines in the US and many European countries recommend that women get tested for human papillomavirus (HPV). To inform decisions about screening intervals, we calculate the increase in precancer/cancer risk per year of continued HPV infection. However, both time to onset of precancer/cancer and time to HPV clearance are interval-censored, and onset of precancer/cancer strongly informatively censors HPV clearance. We analyze this bivariate informatively interval-censored data by developing a novel joint model for time to clearance of HPV and time to precancer/cancer using shared random-effects, where the estimated mean duration of each woman’s HPV infection is a covariate in the submodel for time to precancer/cancer. The model was fit to data on 9,553 HPV-positive/Pap-negative women undergoing cervical cancer screening at Kaiser Permanente Northern California, data that were pivotal to the development of US screening guidelines. We compare the implications for screening intervals of this joint model to those from population-average marginal models of precancer/cancer risk. In particular, after 2 years the marginal population-average precancer/cancer risk was 5%, suggesting a 2-year interval to control population-average risk at 5%. In contrast, the joint model reveals that almost all women exceeding 5% individual risk in 2 years also exceeded 5% in 1 year, suggesting that a 1-year interval is better to control individual risk at 5%. The example suggests that sophisticated risk models capable of predicting individual risk may have different implications than population-average risk models that are currently used for informing medical guideline development. PMID:26556961

  16. A joint model of persistent human papillomavirus infection and cervical cancer risk: Implications for cervical cancer screening.

    PubMed

    Katki, Hormuzd A; Cheung, Li C; Fetterman, Barbara; Castle, Philip E; Sundaram, Rajeshwari

    2015-10-01

    New cervical cancer screening guidelines in the US and many European countries recommend that women get tested for human papillomavirus (HPV). To inform decisions about screening intervals, we calculate the increase in precancer/cancer risk per year of continued HPV infection. However, both time to onset of precancer/cancer and time to HPV clearance are interval-censored, and onset of precancer/cancer strongly informatively censors HPV clearance. We analyze this bivariate informatively interval-censored data by developing a novel joint model for time to clearance of HPV and time to precancer/cancer using shared random-effects, where the estimated mean duration of each woman's HPV infection is a covariate in the submodel for time to precancer/cancer. The model was fit to data on 9,553 HPV-positive/Pap-negative women undergoing cervical cancer screening at Kaiser Permanente Northern California, data that were pivotal to the development of US screening guidelines. We compare the implications for screening intervals of this joint model to those from population-average marginal models of precancer/cancer risk. In particular, after 2 years the marginal population-average precancer/cancer risk was 5%, suggesting a 2-year interval to control population-average risk at 5%. In contrast, the joint model reveals that almost all women exceeding 5% individual risk in 2 years also exceeded 5% in 1 year, suggesting that a 1-year interval is better to control individual risk at 5%. The example suggests that sophisticated risk models capable of predicting individual risk may have different implications than population-average risk models that are currently used for informing medical guideline development.

  17. Ionization Processes in the Atmosphere of Titan (Research Note). III. Ionization by High-Z Nuclei Cosmic Rays

    NASA Technical Reports Server (NTRS)

    Gronoff, G.; Mertens, C.; Lilensten, J.; Desorgher, L.; Fluckiger, E.; Velinov, P.

    2011-01-01

    Context. The Cassini-Huygens mission has revealed the importance of particle precipitation in the atmosphere of Titan thanks to in-situ measurements. These ionizing particles (electrons, protons, and cosmic rays) have a strong impact on the chemistry, hence must be modeled. Aims. We revisit our computation of ionization in the atmosphere of Titan by cosmic rays. The high-energy high-mass ions are taken into account to improve the precision of the calculation of the ion production profile. Methods. The Badhwar and O'Neill model for the cosmic ray spectrum was adapted for the Titan model. We used the TransTitan model coupled with the Planetocosmics model to compute the ion production by cosmic rays. We compared the results with the NAIRAS/HZETRN ionization model, used for the first time for a body other than the Earth. Results. The cosmic ray ionization is computed for five groups of cosmic rays, depending on their charge and mass: protons, alpha particles, Z = 8 (oxygen), Z = 14 (silicon), and Z = 26 (iron) nuclei. Protons and alpha particles ionize mainly at 65 km altitude, while the higher mass nuclei ionize at higher altitudes. Nevertheless, the ionization at higher altitude is insufficient to obscure the impact of Saturn's magnetosphere protons at a 500 km altitude. The ionization rate at the peak (altitude: 65 km, for all the different conditions) lies between 30 and 40 cm^-3 s^-1. Conclusions. These new computations show for the first time the importance of high-Z cosmic rays in the ionization of the Titan atmosphere. The updated full ionization profile shape does not differ significantly from that found in our previous calculations (Paper I: Gronoff et al. 2009, 506, 955) but undergoes a strong increase in intensity below an altitude of 400 km, especially between 200 and 400 km altitude where alpha and heavier particles (in the cosmic ray spectrum) are responsible for 40% of the ionization. The comparison of several models of ionization and cosmic ray spectra (in intensity and composition) reassures us about the stability of the altitude of the ionization peak (65 km altitude) with respect to solar activity.

  18. Estimation of distributional parameters for censored trace level water quality data: 1. Estimation techniques

    USGS Publications Warehouse

    Gilliom, Robert J.; Helsel, Dennis R.

    1986-01-01

    A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification.
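
    A minimal version of the log-probability regression method on synthetic data: logarithms of the uncensored observations are regressed on their normal scores, and the fitted line is extrapolated to fill in the censored ranks before computing moments. Plotting-position conventions vary between implementations; Weibull positions i/(n + 1) are assumed here:

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(8)

    # Lognormal concentrations left-censored at a detection limit.
    conc = rng.lognormal(0.0, 1.0, 60)
    dl = 0.5
    detected = np.sort(conc[conc >= dl])
    n_cens, n = (conc < dl).sum(), conc.size

    # Regress log(concentration) on normal scores for the detected values,
    # using Weibull plotting positions i/(n + 1) over the full sample size.
    ranks = np.arange(n_cens + 1, n + 1)
    zscores = norm.ppf(ranks / (n + 1.0))
    slope, intercept = np.polyfit(zscores, np.log(detected), 1)

    # Extrapolate the fitted line to fill in the censored (lowest) ranks.
    z_cens = norm.ppf(np.arange(1, n_cens + 1) / (n + 1.0))
    filled = np.exp(intercept + slope * z_cens)

    completed = np.concatenate([filled, detected])
    print("estimated mean:", completed.mean(), " sd:", completed.std(ddof=1))
    ```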

  19. Estimation of distributional parameters for censored trace level water quality data. 1. Estimation Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilliom, R.J.; Helsel, D.R.

    1986-02-01

    A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification.

  20. Estimation of distributional parameters for censored trace-level water-quality data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilliom, R.J.; Helsel, D.R.

    1984-01-01

    A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water-sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best-performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least-squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification. 6 figs., 6 tabs.

  1. Lorentz invariance violation in the neutrino sector: a joint analysis from big bang nucleosynthesis and the cosmic microwave background

    NASA Astrophysics Data System (ADS)

    Dai, Wei-Ming; Guo, Zong-Kuan; Cai, Rong-Gen; Zhang, Yuan-Zhong

    2017-06-01

    We investigate constraints on Lorentz invariance violation in the neutrino sector from a joint analysis of big bang nucleosynthesis and the cosmic microwave background. The effect of Lorentz invariance violation during the epoch of big bang nucleosynthesis changes the predicted helium-4 abundance, which influences the power spectrum of the cosmic microwave background at the recombination epoch. In combination with the latest measurement of the primordial helium-4 abundance, the Planck 2015 data of the cosmic microwave background anisotropies give a strong constraint on the deformation parameter since adding the primordial helium measurement breaks the degeneracy between the deformation parameter and the physical dark matter density.

  2. The structure of cosmic ray shocks

    NASA Astrophysics Data System (ADS)

    Axford, W. I.; Leer, E.; McKenzie, J. F.

    1982-07-01

    The acceleration of cosmic rays by steady shock waves has been discussed in brief reports by Leer et al. (1976) and Axford et al. (1977). This paper presents a more extended version of this work. The energy transfer and the structure of the shock wave is discussed in detail, and it is shown that even for moderately strong shock waves most of the upstream energy flux in the background gas is transferred to the cosmic rays. This holds also when the upstream cosmic ray pressure is very small. For an intermediate Mach-number regime the overall shock structure is shown to consist of a smooth transition followed by a gas shock (cf. Drury and Voelk, 1980).

  3. Analysis of censored data.

    PubMed

    Lucijanic, Marko; Petrovecki, Mladen

    2012-01-01

    Analyzing events over time is often complicated by incomplete, or censored, observations. Special non-parametric statistical methods were developed to overcome difficulties in summarizing and comparing censored data. The life-table (actuarial) method and the Kaplan-Meier method are described with an explanation of survival curves. For didactic purposes, the authors prepared a workbook based on the most widely used Kaplan-Meier method. It should help the reader understand how the Kaplan-Meier method is conceptualized and how it can be used to obtain the statistics and survival curves needed to completely describe a sample of patients. The log-rank test and hazard ratio are also discussed.
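
    In the spirit of the workbook, a short worked example using the lifelines package: Kaplan-Meier curves for two hypothetical patient groups with right-censored follow-up, followed by the log-rank test comparing them (all numbers invented for illustration):

    ```python
    import numpy as np
    from lifelines import KaplanMeierFitter
    from lifelines.statistics import logrank_test

    # Two hypothetical patient groups with right-censored follow-up times.
    t_a = np.array([6, 7, 10, 15, 19, 25, 26, 27, 30])
    e_a = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1])       # 1 = event, 0 = censored
    t_b = np.array([5, 8, 9, 12, 14, 16, 20, 21, 24])
    e_b = np.array([1, 1, 1, 1, 1, 0, 1, 1, 1])

    km_a = KaplanMeierFitter().fit(t_a, e_a, label="group A")
    km_b = KaplanMeierFitter().fit(t_b, e_b, label="group B")
    print(km_a.survival_function_.join(km_b.survival_function_, how="outer"))

    result = logrank_test(t_a, t_b, event_observed_A=e_a, event_observed_B=e_b)
    print("log-rank p-value:", result.p_value)
    ```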

  4. A flexible model for multivariate interval-censored survival times with complex correlation structure.

    PubMed

    Falcaro, Milena; Pickles, Andrew

    2007-02-10

    We focus on the analysis of multivariate survival times with highly structured interdependency and subject to interval censoring. Such data are common in developmental genetics and genetic epidemiology. We propose a flexible mixed probit model that deals naturally with complex but uninformative censoring. The recorded ages of onset are treated as possibly censored ordinal outcomes with the interval censoring mechanism seen as arising from a coarsened measurement of a continuous variable observed as falling between subject-specific thresholds. This bypasses the requirement for the failure times to be observed as falling into non-overlapping intervals. The assumption of a normal age-of-onset distribution of the standard probit model is relaxed by embedding within it a multivariate Box-Cox transformation whose parameters are jointly estimated with the other parameters of the model. Complex decompositions of the underlying multivariate normal covariance matrix of the transformed ages of onset become possible. The new methodology is here applied to a multivariate study of the ages of first use of tobacco and first consumption of alcohol without parental permission in twins. The proposed model allows estimation of the genetic and environmental effects that are shared by both of these risk behaviours as well as those that are specific. Copyright © 2006 John Wiley & Sons, Ltd.

  5. Comparison of dynamic treatment regimes via inverse probability weighting.

    PubMed

    Hernán, Miguel A; Lanoy, Emilie; Costagliola, Dominique; Robins, James M

    2006-03-01

    Appropriate analysis of observational data is our best chance to obtain answers to many questions that involve dynamic treatment regimes. This paper describes a simple method to compare dynamic treatment regimes by artificially censoring subjects and then using inverse probability weighting (IPW) to adjust for any selection bias introduced by the artificial censoring. The basic strategy can be summarized in four steps: 1) define two regimes of interest, 2) artificially censor individuals when they stop following one of the regimes of interest, 3) estimate inverse probability weights to adjust for the potential selection bias introduced by censoring in the previous step, 4) compare the survival of the uncensored individuals under each regime of interest by fitting an inverse probability weighted Cox proportional hazards model with the dichotomous regime indicator and the baseline confounders as covariates. In the absence of model misspecification, the method is valid provided data are available on all time-varying and baseline joint predictors of survival and regime discontinuation. We present an application of the method to compare the AIDS-free survival under two dynamic treatment regimes in a large prospective study of HIV-infected patients. The paper concludes by discussing the relative advantages and disadvantages of censoring/IPW versus g-estimation of nested structural models to compare dynamic regimes.
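
    A static-weight caricature of the four steps on simulated data: subjects are artificially censored at regime discontinuation, a logistic model estimates each subject's probability of remaining uncensored, and a weighted Cox model is fit. In a real analysis the weights are time-varying and estimated from pooled logistic models over follow-up; the data-generating mechanism and single-shot weights below are illustrative assumptions only:

    ```python
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(9)

    # Hypothetical cohort: subjects are artificially censored when they stop
    # following their assigned regime; a baseline covariate predicts both
    # discontinuation and survival, so the censoring is informative.
    n = 1000
    z = rng.normal(size=n)                       # baseline predictor
    regime = rng.integers(0, 2, n)
    t_event = rng.exponential(np.exp(0.3 * z - 0.4 * regime))
    t_stop = rng.exponential(np.exp(-0.8 * z))   # time of regime discontinuation
    time = np.minimum(t_event, t_stop)
    event = t_event <= t_stop                    # False = artificially censored

    # Model the probability of remaining uncensored given the predictor, and
    # weight each subject by its inverse (stabilized by the marginal rate).
    clf = LogisticRegression().fit(z.reshape(-1, 1), event.astype(int))
    p_uncens = clf.predict_proba(z.reshape(-1, 1))[:, 1]
    weights = event.mean() / p_uncens

    # Weighted Cox model with the regime indicator and baseline confounder.
    df = pd.DataFrame({"time": time, "event": event.astype(int),
                       "regime": regime, "z": z, "w": weights})
    cph = CoxPHFitter()
    cph.fit(df, duration_col="time", event_col="event",
            weights_col="w", robust=True)
    print(cph.summary[["coef", "se(coef)"]])
    ```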

  6. Type-I cosmic-string network

    NASA Astrophysics Data System (ADS)

    Hiramatsu, Takashi; Sendouda, Yuuiti; Takahashi, Keitaro; Yamauchi, Daisuke; Yoo, Chul-Moon

    2013-10-01

    We study the network of Type-I cosmic strings using the field-theoretic numerical simulations in the Abelian-Higgs model. For Type-I strings, the gauge field plays an important role, and thus we find that the correlation length of the strings is strongly dependent upon the parameter β, the ratio between the squared masses of the scalar field and the gauge field, namely, β = m_φ^2/m_A^2. In particular, if we take the cosmic expansion into account, the network becomes densest in the comoving box for a specific value of β for β < 1.

  7. Impact of energetic cosmic-ray ions on astrophysical ice grains

    NASA Astrophysics Data System (ADS)

    Mainitz, Martin; Anders, Christian; Urbassek, Herbert M.

    2017-02-01

    Using molecular-dynamics simulation with REAX potentials, we study the consequences of cosmic-ray ion impact on ice grains. The grains are composed of a mixture of H2O, CO2, NH3, and CH3OH molecules. Due to the high energy deposition of the cosmic-ray ion, 5 keV/nm, a strong pressure wave runs through the grain, while the interior of the ion track gasifies. Abundant molecular dissociations occur; reactions of the fragments form a variety of novel molecular product species.

  8. Influence of the backreaction of streaming cosmic rays on magnetic field generation and thermal instability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nekrasov, Anatoly K.; Shadmehri, Mohsen, E-mail: anekrasov@ifz.ru, E-mail: nekrasov.anatoly@gmail.com, E-mail: m.shadmehri@gu.ac.ir

    2014-06-10

    Using a multifluid approach, we investigate streaming and thermal instabilities of the electron-ion plasma with homogeneous cold cosmic rays propagating perpendicular to the background magnetic field. Perturbations are also considered to be across the magnetic field. The backreaction of cosmic rays resulting in strong streaming instabilities is taken into account. It is shown that, for sufficiently short wavelength perturbations, the growth rates can exceed the growth rate of cosmic-ray streaming instability along the magnetic field, found by Nekrasov and Shadmehri, which is in turn considerably larger than the growth rate of the Bell instability. The thermal instability is shown not to be subject to the action of cosmic rays in the model under consideration. The dispersion relation for the thermal instability has been derived, which includes sound velocities of plasma and cosmic rays and Alfvén and cosmic-ray streaming velocities. The relation between these parameters determines the kind of thermal instability ranging from the Parker to the Field instabilities. The results obtained can be useful for a more detailed investigation of electron-ion astrophysical objects, such as supernova remnant shocks, galaxy clusters, and others, including the dynamics of streaming cosmic rays.

  9. Cosmic ray acceleration in magnetic circumstellar bubbles

    NASA Astrophysics Data System (ADS)

    Zirakashvili, V. N.; Ptuskin, V. S.

    2018-03-01

    We consider the diffusive shock acceleration in interstellar bubbles created by powerful stellar winds of supernova progenitors. Under the moderate stellar wind magnetization the bubbles are filled by the strongly magnetized low density gas. It is shown that the maximum energy of particles accelerated in this environment can exceed the "knee" energy in the observable cosmic ray spectrum.

  10. COSMIC DUST AGGREGATION WITH STOCHASTIC CHARGING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthews, Lorin S.; Hyde, Truell W.; Shotorban, Babak, E-mail: Lorin_Matthews@baylor.edu

    2013-10-20

    The coagulation of cosmic dust grains is a fundamental process which takes place in astrophysical environments, such as presolar nebulae and circumstellar and protoplanetary disks. Cosmic dust grains can become charged through interaction with their plasma environment or other processes, and the resultant electrostatic force between dust grains can strongly affect their coagulation rate. Since ions and electrons are collected on the surface of the dust grain at random time intervals, the electrical charge of a dust grain experiences stochastic fluctuations. In this study, a set of stochastic differential equations is developed to model these fluctuations over the surface of an irregularly shaped aggregate. Then, employing the data produced, the influence of the charge fluctuations on the coagulation process and the physical characteristics of the aggregates formed is examined. It is shown that dust with small charges (due to the small size of the dust grains or a tenuous plasma environment) is affected most strongly.

  11. An estimation of Canadian population exposure to cosmic rays.

    PubMed

    Chen, Jing; Timmins, Rachel; Verdecchia, Kyle; Sato, Tatsuhiko

    2009-08-01

    The worldwide average exposure to cosmic rays contributes to about 16% of the annual effective dose from natural radiation sources. At ground level, doses from cosmic ray exposure depend strongly on altitude, and weakly on geographical location and solar activity. With the analytical model PARMA developed by the Japan Atomic Energy Agency, annual effective doses due to cosmic ray exposure at ground level were calculated for more than 1,500 communities across Canada which cover more than 85% of the Canadian population. The annual effective doses from cosmic ray exposure in the year 2000 during solar maximum ranged from 0.27 to 0.72 mSv with the population-weighted national average of 0.30 mSv. For the year 2006 during solar minimum, the doses varied between 0.30 and 0.84 mSv, and the population-weighted national average was 0.33 mSv. Averaged over solar activity, the Canadian population-weighted average annual effective dose due to cosmic ray exposure at ground level is estimated to be 0.31 mSv.
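
    The population weighting used for the national averages is a simple weighted mean over communities. A toy illustration (the numbers are invented, not the study's community data):

```python
# Population-weighted average annual effective dose, as a weighted mean.
# Values below are illustrative placeholders, not the Canadian data.
import numpy as np

dose_mSv = np.array([0.28, 0.31, 0.45, 0.62])        # community doses
population = np.array([2.5e6, 1.1e6, 3.0e5, 8.0e4])  # community sizes

print(f"{np.average(dose_mSv, weights=population):.2f} mSv")
```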

  12. Censoring distances based on labeled cortical distance maps in cortical morphometry.

    PubMed

    Ceyhan, Elvan; Nishino, Tomoyuki; Alexopolous, Dimitrios; Todd, Richard D; Botteron, Kelly N; Miller, Michael I; Ratnanather, J Tilak

    2013-01-01

    It has been demonstrated that shape differences in cortical structures may be manifested in neuropsychiatric disorders. Such morphometric differences can be measured by labeled cortical distance mapping (LCDM), which characterizes the morphometry of the laminar cortical mantle of cortical structures. LCDM data consist of signed/labeled distances of gray matter (GM) voxels with respect to the GM/white matter (WM) surface. Volumes and other summary measures for each subject and the pooled distances can help determine the morphometric differences between diagnostic groups; however, they do not reveal all the morphometric information contained in LCDM distances. To extract more information from LCDM data, censoring of the pooled distances is introduced for each diagnostic group: the range of LCDM distances is partitioned at a fixed increment size, and at each censoring step the distances not exceeding the censoring distance are kept. Censored LCDM distances inherit the advantages of the pooled distances but also provide information about the location of morphometric differences which cannot be obtained from the pooled distances. However, at each step the censored distances aggregate, which might confound the results. The influence of data aggregation is investigated with an extensive Monte Carlo simulation analysis, and it is demonstrated that this influence is negligible. As an illustrative example, GM of ventral medial prefrontal cortices (VMPFCs) of subjects with major depressive disorder (MDD), subjects at high risk (HR) of MDD, and healthy control (Ctrl) subjects are used. A significant reduction in laminar thickness of the VMPFC in MDD and HR subjects is observed compared to Ctrl subjects. Moreover, the GM LCDM distances (i.e., locations with respect to the GM/WM surface) for which these differences start to occur are determined. The methodology is also applicable to LCDM-based morphometric measures of other cortical structures affected by disease.
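
    The stepwise censoring scheme is easy to express in code. A minimal sketch, assuming two 1-D arrays of pooled LCDM distances (one per diagnostic group) and using a Kolmogorov-Smirnov test as a stand-in for the paper's group-comparison tests:

```python
# Sketch of stepwise censoring of pooled LCDM distances. The KS test is
# a stand-in two-sample comparison, not the paper's exact test battery.
import numpy as np
from scipy.stats import ks_2samp

def censored_comparisons(dist_a, dist_b, increment=0.5):
    lo = min(dist_a.min(), dist_b.min())
    hi = max(dist_a.max(), dist_b.max())
    results = []
    for c in np.arange(lo + increment, hi + increment, increment):
        a = dist_a[dist_a <= c]          # keep distances not exceeding c
        b = dist_b[dist_b <= c]
        if len(a) > 1 and len(b) > 1:
            stat, p = ks_2samp(a, b)
            results.append((c, stat, p))
    return results  # (censoring distance, statistic, p-value) per step
```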

  13. Censorship and Junk Food Journalism.

    ERIC Educational Resources Information Center

    Jensen, Carl

    1984-01-01

    Discusses journalistic phenomenon whereby Americans are inundated with same news with only names, dates, and locations changing. Highlights include news explosion, well-documented news, why "Ten Most Censored Stories" chosen by Project Censored (Sonoma State University, California) are not covered by major news media, federal policies,…

  14. GSimp: A Gibbs sampler based left-censored missing value imputation approach for metabolomics studies

    PubMed Central

    Jia, Erik; Chen, Tianlu

    2018-01-01

    Left-censored missing values commonly exist in targeted metabolomics datasets and can be considered as missing not at random (MNAR). Improper data processing procedures for missing values will cause adverse impacts on subsequent statistical analyses. However, few imputation methods have been developed and applied to the situation of MNAR in the field of metabolomics. Thus, a practical left-censored missing value imputation method is urgently needed. We developed an iterative Gibbs sampler based left-censored missing value imputation approach (GSimp). We compared GSimp with three other imputation methods on two real-world targeted metabolomics datasets and one simulation dataset using our imputation evaluation pipeline. The results show that GSimp outperforms the other imputation methods in terms of imputation accuracy, observation distribution, univariate and multivariate analyses, and statistical sensitivity. Additionally, a parallel version of GSimp was developed for dealing with large-scale metabolomics datasets. The R code for GSimp, the evaluation pipeline, a tutorial, and the real-world and simulated targeted metabolomics datasets are available at: https://github.com/WandeRum/GSimp. PMID:29385130
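
    The core move in a Gibbs-style left-censored imputation is to replace nondetects with draws from a distribution truncated above at the limit of detection. The sketch below shows only that truncated-draw step for a single variable under a normality assumption; GSimp itself iterates such draws inside a prediction model across variables.

```python
# One ingredient of Gibbs-style left-censored imputation: draw values
# below the LOD from a normal fitted to the observed part, truncated
# above at the LOD. A single-variable sketch, not GSimp itself.
import numpy as np
from scipy.stats import truncnorm

def impute_left_censored(x, lod, seed=0):
    x = np.asarray(x, dtype=float)
    rng = np.random.default_rng(seed)
    miss = np.isnan(x)                 # NaN marks values below the LOD
    mu, sd = np.nanmean(x), np.nanstd(x)
    b = (lod - mu) / sd                # standardized upper bound
    draws = truncnorm.rvs(-np.inf, b, loc=mu, scale=sd,
                          size=miss.sum(), random_state=rng)
    out = x.copy()
    out[miss] = draws
    return out
```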

  15. Simple estimation procedures for regression analysis of interval-censored failure time data under the proportional hazards model.

    PubMed

    Sun, Jianguo; Feng, Yanqin; Zhao, Hui

    2015-01-01

    Interval-censored failure time data occur in many fields including epidemiological and medical studies as well as financial and sociological studies, and many authors have investigated their analysis (Sun, The statistical analysis of interval-censored failure time data, 2006; Zhang, Stat Modeling 9:321-343, 2009). In particular, a number of procedures have been developed for regression analysis of interval-censored data arising from the proportional hazards model (Finkelstein, Biometrics 42:845-854, 1986; Huang, Ann Stat 24:540-568, 1996; Pan, Biometrics 56:199-203, 2000). For most of these procedures, however, one drawback is that they involve estimation of both regression parameters and baseline cumulative hazard function. In this paper, we propose two simple estimation approaches that do not need estimation of the baseline cumulative hazard function. The asymptotic properties of the resulting estimates are given, and an extensive simulation study is conducted and indicates that they work well for practical situations.

  16. The Automation System Censor Speech for the Indonesian Rude Swear Words Based on Support Vector Machine and Pitch Analysis

    NASA Astrophysics Data System (ADS)

    Endah, S. N.; Nugraheni, D. M. K.; Adhy, S.; Sutikno

    2017-04-01

    According to Law No. 32 of 2002 and Indonesian Broadcasting Commission Regulations No. 02/P/KPI/12/2009 and No. 03/P/KPI/12/2009, broadcast programs must not scold with harsh words, or harass, insult, or demean minorities and marginalized groups. However, there are no suitable tools to censor such words automatically, so research on intelligent software that censors them automatically is needed. To perform censoring, the system must be able to recognize the words in question. This research proposes classifying speech into two classes using a Support Vector Machine (SVM): the first class is a set of rude words and the second a set of proper words. Speech pitch values serve as the SVM input and are used to develop the system for Indonesian rude swear words. The experimental results show that the SVM works well for this system.
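
    In outline, the classifier side of such a system is a standard two-class SVM over pitch-derived features. A minimal sketch with random placeholder feature vectors (real use would substitute pitch features extracted from labeled speech):

```python
# Two-class SVM over pitch features, with placeholder data standing in
# for pitch vectors extracted from labeled rude/proper utterances.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))      # placeholder pitch feature vectors
y = rng.integers(0, 2, size=200)    # 1 = rude word, 0 = proper word

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```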

  17. Lightning Discharges, Cosmic Rays and Climate

    NASA Astrophysics Data System (ADS)

    Kumar, Sanjay; Siingh, Devendraa; Singh, R. P.; Singh, A. K.; Kamra, A. K.

    2018-03-01

    The entirety of the Earth's climate system is continuously bombarded by cosmic rays and exhibits about 2000 thunderstorms active at any time of the day all over the globe. Any linkage among these vast systems should have global consequences. Numerous studies done in the past deal with partial links between some selected aspects of this grand linkage. Results of these studies vary from weakly to strongly significant and are not yet complete enough to justify the physical mechanism proposed to explain such links. This review is aimed at presenting the current understanding, based on the past studies on the link between cosmic ray, lightning and climate. The deficiencies in some proposed links are pointed out. Impacts of cosmic rays on engineering systems and the possible effects of cosmic rays on human health are also briefly discussed. Also enumerated are some problems for future work which may help in developing the grand linkage among these three vast systems.

  18. Supernova Remnant Kes 17: An Efficient Cosmic Ray Accelerator inside a Molecular Cloud

    NASA Astrophysics Data System (ADS)

    Gelfand, Joseph D.; Castro, Daniel; Slane, Patrick O.; Temim, Tea; Hughes, John P.; Rakowski, Cara

    2013-11-01

    The supernova remnant Kes 17 (SNR G304.6+0.1) is one of a few but growing number of remnants detected across the electromagnetic spectrum. In this paper, we analyze recent radio, X-ray, and γ-ray observations of this object, determining that efficient cosmic ray acceleration is required to explain its broadband non-thermal spectrum. These observations also suggest that Kes 17 is expanding inside a molecular cloud, though our determination of its age depends on whether thermal conduction or clump evaporation is primarily responsible for its center-filled thermal X-ray morphology. Evidence for efficient cosmic ray acceleration in Kes 17 supports recent theoretical work concluding that the strong magnetic field, turbulence, and clumpy nature of molecular clouds enhance cosmic ray production in supernova remnants. While additional observations are needed to confirm this interpretation, further study of Kes 17 is important for understanding how cosmic rays are accelerated in supernova remnants.

  19. Cosmocultural Evolution: Cosmic Motivation for Interstellar Travel?

    NASA Astrophysics Data System (ADS)

    Lupisella, M.

    Motivations for interstellar travel can vary widely from practical survival motivations to wider-ranging moral obligations to future generations. But it may also be fruitful to explore what, if any, "cosmic" relevance there may be regarding interstellar travel. Cosmocultural evolution can be defined as the coevolution of cosmos and culture, with cultural evolution playing an important and perhaps critical role in the overall evolution of the universe. Strong versions of cosmocultural evolution might suggest that cultural evolution may have unlimited potential as a cosmic force. In such a worldview, the advancement of cultural beings throughout the universe could have significant cosmic relevance, perhaps providing additional motivation for interstellar travel. This paper will explore some potential philosophical and policy implications for interstellar travel of a cosmocultural evolutionary perspective and other related concepts, including some from a recent NASA book, Cosmos and Culture: Cultural Evolution in a Cosmic Context.

  20. The evolution of cosmic-ray-mediated magnetohydrodynamic shocks: A two-fluid approach

    NASA Astrophysics Data System (ADS)

    Jun, Byung-Il; Clarke, David A.; Norman, Michael L.

    1994-07-01

    We study the shock structure and acceleration efficiency of cosmic-ray mediated Magnetohydrodynamic (MHD) shocks both analytically and numerically by using a two-fluid model. Our model includes the dynamical effect of magnetic fields and cosmic rays on a background thermal fluid. The steady state solution is derived by following the technique of Drury & Voelk (1981) and compared to numerical results. We explore the time evolution of plane-perpendicular, piston-driven shocks. From the results of analytical and numerical studies, we conclude that the mean magnetic field plays an important role in the structure and acceleration efficiency of cosmic-ray mediated MHD shocks. The acceleration of cosmic-ray particles becomes less efficient in the presence of strong magnetic pressure since the field makes the shock less compressive. This feature is more prominent at low Mach numbers than at high Mach numbers.

  1. The evolution of cosmic-ray-mediated magnetohydrodynamic shocks: A two-fluid approach

    NASA Technical Reports Server (NTRS)

    Jun, Byung-Il; Clarke, David A.; Norman, Michael L.

    1994-01-01

    We study the shock structure and acceleration efficiency of cosmic-ray mediated Magnetohydrodynamic (MHD) shocks both analytically and numerically by using a two-fluid model. Our model includes the dynamical effect of magnetic fields and cosmic rays on a background thermal fluid. The steady state solution is derived by following the technique of Drury & Voelk (1981) and compared to numerical results. We explore the time evolution of plane-perpendicular, piston-driven shocks. From the results of analytical and numerical studies, we conclude that the mean magnetic field plays an important role in the structure and acceleration efficiency of cosmic-ray mediated MHD shocks. The acceleration of cosmic-ray particles becomes less efficient in the presence of strong magnetic pressure since the field makes the shock less compressive. This feature is more prominent at low Mach numbers than at high Mach numbers.

  2. Post-processing of multi-model ensemble river discharge forecasts using censored EMOS

    NASA Astrophysics Data System (ADS)

    Hemri, Stephan; Lisniak, Dmytro; Klein, Bastian

    2014-05-01

    When forecasting water levels and river discharge, ensemble weather forecasts are used as meteorological input to hydrologic process models. As hydrologic models are imperfect and the input ensembles tend to be biased and underdispersed, the output ensemble forecasts for river runoff typically are biased and underdispersed, too. Thus, statistical post-processing is required in order to achieve calibrated and sharp predictions. Standard post-processing methods such as Ensemble Model Output Statistics (EMOS) that have their origins in meteorological forecasting are now increasingly being used in hydrologic applications. Here we consider two sub-catchments of River Rhine, for which the forecasting system of the Federal Institute of Hydrology (BfG) uses runoff data that are censored below predefined thresholds. To address this methodological challenge, we develop a censored EMOS method that is tailored to such data. The censored EMOS forecast distribution can be understood as a mixture of a point mass at the censoring threshold and a continuous part based on a truncated normal distribution. Parameter estimates of the censored EMOS model are obtained by minimizing the Continuous Ranked Probability Score (CRPS) over the training dataset. Model fitting on Box-Cox transformed data allows us to take account of the positive skewness of river discharge distributions. In order to achieve realistic forecast scenarios over an entire range of lead-times, there is a need for multivariate extensions. To this end, we smooth the marginal parameter estimates over lead-times. In order to obtain realistic scenarios of discharge evolution over time, the marginal distributions have to be linked with each other. To this end, the multivariate dependence structure can either be adopted from the raw ensemble like in Ensemble Copula Coupling (ECC), or be estimated from observations in a training period. The censored EMOS model has been applied to multi-model ensemble forecasts issued on a daily basis over a period of three years. For the two catchments considered, this resulted in well calibrated and sharp forecast distributions over all lead-times from 1 to 114 h. Training observations tended to be better indicators for the dependence structure than the raw ensemble.
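
    The censored EMOS predictive distribution described here (a point mass at the censoring threshold plus a normal upper tail) can be written down directly. In the sketch below the location and scale are assumed to have already come out of the EMOS regression, and the sample-based CRPS estimator is the generic one, not BfG's fitting code.

```python
# Censored-normal predictive distribution of censored EMOS: a point
# mass at the threshold c plus the normal upper tail. mu and sigma are
# assumed given (in practice, outputs of the EMOS regression).
import numpy as np
from scipy.stats import norm

def censored_normal_cdf(y, mu, sigma, c):
    # All latent mass below c collapses onto the threshold c.
    y = np.asarray(y, dtype=float)
    return np.where(y < c, 0.0, norm.cdf(y, loc=mu, scale=sigma))

def censored_normal_sample(mu, sigma, c, size, seed=0):
    z = np.random.default_rng(seed).normal(mu, sigma, size)
    return np.maximum(z, c)            # left-censor the latent draws

def crps_sample(sample, obs):
    # Generic sample-based CRPS estimator: E|X - y| - 0.5 E|X - X'|.
    sample = np.sort(np.asarray(sample))
    return (np.mean(np.abs(sample - obs))
            - 0.5 * np.mean(np.abs(sample[:, None] - sample[None, :])))
```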

  3. Censored Quantile Instrumental Variable Estimates of the Price Elasticity of Expenditure on Medical Care.

    PubMed

    Kowalski, Amanda

    2016-01-02

    Efforts to control medical care costs depend critically on how individuals respond to prices. I estimate the price elasticity of expenditure on medical care using a censored quantile instrumental variable (CQIV) estimator. CQIV allows estimates to vary across the conditional expenditure distribution, relaxes traditional censored model assumptions, and addresses endogeneity with an instrumental variable. My instrumental variable strategy uses a family member's injury to induce variation in an individual's own price. Across the conditional deciles of the expenditure distribution, I find elasticities that vary from -0.76 to -1.49, which are an order of magnitude larger than previous estimates.

  4. Linear regression analysis of survival data with missing censoring indicators.

    PubMed

    Wang, Qihua; Dinse, Gregg E

    2011-04-01

    Linear regression analysis has been studied extensively in a random censorship setting, but typically all of the censoring indicators are assumed to be observed. In this paper, we develop synthetic data methods for estimating regression parameters in a linear model when some censoring indicators are missing. We define estimators based on regression calibration, imputation, and inverse probability weighting techniques, and we prove all three estimators are asymptotically normal. The finite-sample performance of each estimator is evaluated via simulation. We illustrate our methods by assessing the effects of sex and age on the time to non-ambulatory progression for patients in a brain cancer clinical trial.

  5. Giving cosmic redshift drift a whirl

    NASA Astrophysics Data System (ADS)

    Kim, Alex G.; Linder, Eric V.; Edelstein, Jerry; Erskine, David

    2015-03-01

    Redshift drift provides a direct kinematic measurement of cosmic acceleration but it occurs with a characteristic time scale of a Hubble time. Thus redshift observations with a challenging precision of 10^-9 require a 10-year time span to obtain a signal-to-noise of 1. We discuss theoretical and experimental approaches to address this challenge, potentially requiring less observer time and having greater immunity to common systematics. On the theoretical side we explore allowing the universe, rather than the observer, to provide long time spans; speculative methods include radial baryon acoustic oscillations, cosmic pulsars, and strongly lensed quasars. On the experimental side, we explore beating down the redshift precision using differential interferometric techniques, including externally dispersed interferometers and spatial heterodyne spectroscopy. Low-redshift emission line galaxies are identified as having high cosmology leverage and systematics control, with an 8 h exposure on a 10-m telescope (1000 h of exposure on a 40-m telescope) potentially capable of measuring the redshift of a galaxy to a precision of 10^-8 (few ×10^-10). Low-redshift redshift drift also has very strong complementarity with cosmic microwave background measurements, with the combination achieving a dark energy figure of merit of nearly 300 (1400) for 5% (1%) precision on drift.

  6. Quantile Regression with Censored Data

    ERIC Educational Resources Information Center

    Lin, Guixian

    2009-01-01

    The Cox proportional hazards model and the accelerated failure time model are frequently used in survival data analysis. They are powerful, yet have limitation due to their model assumptions. Quantile regression offers a semiparametric approach to model data with possible heterogeneity. It is particularly powerful for censored responses, where the…
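
    A classical baseline in this area is Powell's censored quantile regression, which replaces the linear predictor with max(c_i, x_i'β) inside the check loss for left-censored responses. The sketch below is that generic estimator, not the specific method developed in this dissertation.

```python
# Powell-type censored quantile regression for left-censored responses:
# minimize the check loss against max(c_i, x_i'beta). A generic sketch.
import numpy as np
from scipy.optimize import minimize

def powell_cqr(X, y, c, tau=0.5):
    X = np.column_stack([np.ones(len(y)), X])    # add an intercept

    def loss(beta):
        pred = np.maximum(c, X @ beta)           # censored prediction
        r = y - pred
        return np.sum(r * (tau - (r < 0)))       # check (pinball) loss

    beta0 = np.zeros(X.shape[1])
    return minimize(loss, beta0, method="Nelder-Mead").x
```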

  7. Toward improved analysis of concentration data: Embracing nondetects.

    PubMed

    Shoari, Niloofar; Dubé, Jean-Sébastien

    2018-03-01

    Various statistical tests on concentration data serve to support decision-making regarding characterization and monitoring of contaminated media, assessing exposure to a chemical, and quantifying the associated risks. However, the routine statistical protocols cannot be directly applied because of challenges arising from nondetects or left-censored observations, which are concentration measurements below the detection limit of measuring instruments. Despite the existence of techniques based on survival analysis that can adjust for nondetects, these are seldom taken into account properly. A comprehensive review of the literature showed that managing policies regarding analysis of censored data do not always agree and that guidance from regulatory agencies may be outdated. Therefore, researchers and practitioners commonly resort to the most convenient way of tackling the censored data problem by substituting nondetects with arbitrary constants prior to data analysis, although this is generally regarded as a bias-prone approach. Hoping to improve the interpretation of concentration data, the present article aims to familiarize researchers in different disciplines with the significance of left-censored observations and provides theoretical and computational recommendations (under both frequentist and Bayesian frameworks) for adequate analysis of censored data. In particular, the present article synthesizes key findings from previous research with respect to 3 noteworthy aspects of inferential statistics: estimation of descriptive statistics, hypothesis testing, and regression analysis. Environ Toxicol Chem 2018;37:643-656. © 2017 SETAC.
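
    In the simplest frequentist case, the survival-analysis alternative to substitution is a censored maximum likelihood fit in which detects contribute densities and nondetects contribute the CDF at their detection limits. A minimal sketch for left-censored lognormal concentrations (function and variable names are illustrative):

```python
# Censored MLE for left-censored lognormal concentrations: detects
# enter via the density, nondetects via P(X < DL). Illustrative sketch.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def censored_lognormal_mle(detects, detection_limits):
    logs = np.log(np.asarray(detects))
    logdl = np.log(np.asarray(detection_limits))  # one DL per nondetect

    def negloglik(theta):
        mu, log_sd = theta
        sd = np.exp(log_sd)                       # keep sd positive
        ll = norm.logpdf(logs, mu, sd).sum()      # observed detects
        ll += norm.logcdf(logdl, mu, sd).sum()    # nondetects
        return -ll

    res = minimize(negloglik, x0=[logs.mean(), np.log(logs.std() + 1e-6)])
    return res.x[0], np.exp(res.x[1])   # mean, sd of log-concentration
```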

  8. The dawn of the particle astronomy era in ultra-high-energy cosmic rays.

    PubMed

    Bauleo, Pablo M; Martino, Julio Rodríguez

    2009-04-16

    Cosmic rays are charged particles arriving at the Earth from space. Those at the highest energies are particularly interesting because the physical processes that could create or accelerate them are at the limit of our present knowledge. They also open the window to particle astronomy, as the magnetic fields along their paths are not strong enough to deflect their trajectories much from a straight line. The Pierre Auger Observatory is the largest cosmic-ray detector on Earth, and as such is beginning to resolve past observational disagreements regarding the origin and propagation of these particles.

  9. Monocular measurement of the spectrum of UHE cosmic rays by the FADC detector of the HiRes experiment

    NASA Astrophysics Data System (ADS)

    Abbasi, R. U.; Abu-Zayyad, T.; Amman, J. F.; Archbold, G. C.; Bellido, J. A.; Belov, K.; Belz, J. W.; Bergman, D. R.; Cao, Z.; Clay, R. W.; Cooper, M. D.; Dai, H.; Dawson, B. R.; Everett, A. A.; Girard, J. H. V.; Gray, R. C.; Hanlon, W. F.; Hoffman, C. M.; Holzscheiter, M. H.; Hüntemeyer, P.; Jones, B. F.; Jui, C. C. H.; Kieda, D. B.; Kim, K.; Kirn, M. A.; Loh, E. C.; Manago, N.; Marek, L. J.; Martens, K.; Martin, G.; Matthews, J. A. J.; Matthews, J. N.; Meyer, J. R.; Moore, S. A.; Morrison, P.; Moosman, A. N.; Mumford, J. R.; Munro, M. W.; Painter, C. A.; Perera, L.; Reil, K.; Riehle, R.; Roberts, M.; Sarracino, J. S.; Schnetzer, S.; Shen, P.; Simpson, K. M.; Sinnis, G.; Smith, J. D.; Sokolsky, P.; Song, C.; Springer, R. W.; Stokes, B. T.; Thomas, S. B.; Thompson, T. N.; Thomson, G. B.; Tupa, D.; Westerhoff, S.; Wiencke, L. R.; VanderVeen, T. D.; Zech, A.; Zhang, X.

    2005-03-01

    We have measured the spectrum of UHE cosmic rays using the Flash ADC (FADC) detector (called HiRes-II) of the High Resolution Fly's Eye experiment running in monocular mode. We describe in detail the data analysis, development of the Monte Carlo simulation program, and results. We also describe the results of the HiRes-I detector. We present our measured spectra and compare them with a model incorporating galactic and extragalactic cosmic rays. Our combined spectra provide strong evidence for the existence of the spectral feature known as the "ankle."

  10. Box-Cox transformation of left-censored data with application to the analysis of coronary artery calcification and pharmacokinetic data.

    PubMed

    Han, Cong; Kronmal, Richard

    2004-12-15

    Box-Cox transformation is investigated for regression models for left-censored data. Examples are provided using coronary calcification data from the Multi-Ethnic Study of Atherosclerosis and pharmacokinetic data of a nicotine nasal spray. Copyright 2004 John Wiley & Sons, Ltd.
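
    A minimal way to combine the two ingredients, under a one-sample simplification rather than the paper's regression models, is to profile a left-censored normal log-likelihood over the Box-Cox parameter λ:

```python
# Profile a left-censored normal log-likelihood over the Box-Cox
# parameter. One-sample sketch; mu and sd are crude plug-ins from the
# observed part, whereas a full MLE would estimate them jointly.
import numpy as np
from scipy.stats import norm

def boxcox(y, lam):
    return np.log(y) if abs(lam) < 1e-8 else (y**lam - 1.0) / lam

def profile_loglik(y_obs, dl_cens, lam):
    z, zc = boxcox(y_obs, lam), boxcox(dl_cens, lam)
    mu, sd = z.mean(), z.std() + 1e-9
    ll = norm.logpdf(z, mu, sd).sum() + norm.logcdf(zc, mu, sd).sum()
    ll += (lam - 1.0) * np.log(y_obs).sum()   # Jacobian, observed only
    return ll

def best_lambda(y_obs, dl_cens, grid=np.linspace(-1, 2, 61)):
    return max(grid, key=lambda lam: profile_loglik(y_obs, dl_cens, lam))
```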

  11. Beware! Here There Be Beasties: Responding to Fundamentalist Censors.

    ERIC Educational Resources Information Center

    Traw, Rick

    1996-01-01

    Describes a heated censorship controversy experienced in 1990 in the Sioux Falls, South Dakota, school district brought by fundamentalist censors against the "Impressions" reading series. Explores specific categories of complaints, such as the supernatural, folktales, and myths. Notes the influence of religion and racism. Includes an addendum of…

  12. LIMITATIONS ON THE USES OF MULTIMEDIA EXPOSURE MEASUREMENTS FOR MULTIPATHWAY EXPOSURE ASSESSMENT - PART I: HANDLING OBSERVATIONS BELOW DETECTION LIMITS

    EPA Science Inventory

    Multimedia data from two probability-based exposure studies were investigated in terms of how censoring of non-detects affected estimation of population parameters and associations. Appropriate methods for handling censored below-detection-limit (BDL) values in this context were...

  13. James Moffett's Mistake: Ignoring the Rational Capacities of the Other

    ERIC Educational Resources Information Center

    Donehower, Kim

    2013-01-01

    Using Alasdair MacIntyre's theory of tradition-bound rationalities, this essay analyses James Moffett's depiction of the censors who opposed his "Interactions" textbook series in the Kanawha County, West Virginia, schools. Many reviewers have found Moffett's analysis of the censors in "Storm in the Mountains" even-handed and…

  14. Anatomy of the First Amendment and a Look at Its Interpretation.

    ERIC Educational Resources Information Center

    Otto, Jean H.

    1990-01-01

    Dissects features of the First Amendment, concentrating on freedom of religion, speech, and press clauses. Highlights the Hazelwood School District v. Kuhlmeier case and its reverberations. Argues that, when school officials censor, students learn that government may censor. Suggests censorship is counterproductive to schools' mission to promote…

  15. Model structure of a cosmic-ray mediated stellar or solar wind

    NASA Technical Reports Server (NTRS)

    Lee, M. A.; Axford, W. I.

    1988-01-01

    An idealized hydrodynamic model is presented for the mediation of a free-streaming stellar wind by galactic cosmic rays or energetic particles accelerated at the stellar wind termination shock. The spherically-symmetric stellar wind is taken to be cold; the only body force is the cosmic ray pressure gradient. The cosmic rays are treated as a massless fluid with an effective mean diffusion coefficient k proportional to radial distance r. The structure of the governing equations is investigated both analytically and numerically. Solutions for a range of values of k are presented which describe the deceleration of the stellar wind and a transition to nearly incompressible flow and constant cosmic ray pressure at large r. In the limit of small k the transition steepens to a strong stellar wind termination shock. For large k the stellar wind is decelerated gradually with no shock transition. It is argued that the solutions provide a simple model for the mediation of the solar wind by interstellar ions as both pickup ions and the cosmic ray anomalous component which together dominate the pressure of the solar wind at large r.

  16. The Evolving Universe: Structure and Evolution of the Universe Roadmap 2000-2020

    NASA Technical Reports Server (NTRS)

    1997-01-01

    The Roadmap for the Structure and Evolution of the Universe (SEU) theme embraces three fundamental scientific quests: (1) To explain structure in the Universe and forecast our cosmic destiny. (2) To explore the cycles of matter and energy in the evolving Universe. (3) To examine the ultimate limits of gravity and energy in the Universe. These quests are developed into six focused research campaigns addressing the objectives of one or more quests: Identify dark matter and learn how it shapes galaxies and systems of galaxies; Find out where and when the chemical elements were made; Understand the cycles in which matter, energy, and magnetic field are exchanged between stars and the gas between stars; Discover how gas flows in disks and how cosmic jets are formed; Identify the sources of gamma-ray bursts and high-energy cosmic rays; and Measure how strong gravity operates near black holes and how it affects the early Universe. These campaigns lead to a portfolio of future major missions of strong scientific and popular appeal, strongly endorsed by the scientific community, each of which has undergone significant initial study. Some of these missions are in a state of readiness that makes them ideal candidates for the present Office of Space Science Strategic Plan; others may well feature in the next Strategic Plan. Each provides a golden scientific opportunity to advance our understanding of the Universe. Our highest priority science objectives are addressed by five Observatory Class Missions, unranked by science but in approximate order of readiness: A high-energy gamma-ray facility that will observe relativistic jets and study the sources of cosmic gamma-ray bursts; An ultra-sensitive X-ray telescope, optimized for spectroscopy, to examine the hot gas linked with clusters of galaxies, the disks around black holes, and supernova explosions; A large radio telescope in deep space to map central regions of distant quasars and perform astrometric investigations; An orbiting gravitational wave observatory to detect coalescing massive black holes and test how gravity waves distort spacetime; A pair of Earth-orbiting optical telescopes that will detect flashes of light produced when ultra-high-energy cosmic rays impact the upper atmosphere, so as to determine their arrival directions and energies. A new program supporting pertinent international collaboration is strongly endorsed, and maintaining a strong Explorer program is important. The flexibility to exploit exceptional opportunities, such as attaching payloads to the space station, should also be acquired. A strong technology development program must be initiated now to enable this mission set.

  17. Censored Quantile Instrumental Variable Estimates of the Price Elasticity of Expenditure on Medical Care

    PubMed Central

    Kowalski, Amanda

    2015-01-01

    Efforts to control medical care costs depend critically on how individuals respond to prices. I estimate the price elasticity of expenditure on medical care using a censored quantile instrumental variable (CQIV) estimator. CQIV allows estimates to vary across the conditional expenditure distribution, relaxes traditional censored model assumptions, and addresses endogeneity with an instrumental variable. My instrumental variable strategy uses a family member’s injury to induce variation in an individual’s own price. Across the conditional deciles of the expenditure distribution, I find elasticities that vary from −0.76 to −1.49, which are an order of magnitude larger than previous estimates. PMID:26977117

  18. Solar cosmic rays as a specific source of radiation risk during piloted space flight.

    PubMed

    Petrov, V M

    2004-01-01

    Solar cosmic rays present one of several radiation sources that are unique to space flight. Under ground conditions the exposure of individuals has a controlled form, and radiation risk occurs as stochastic radiobiological effects. The presence of solar cosmic rays in space makes the radiation environment stochastic, so any radiobiological consequences of exposure to solar cosmic rays during the flight are probabilistic quantities. In this case, the hazard of deterministic effects should also be expressed in radiation risk values. The main deterministic effect under space conditions is radiation sickness. The best dosimetric functional for its analysis is the blood-forming-organs dose equivalent rather than the effective dose. In addition, the repair processes in red bone marrow strongly affect the manifestation of this pathology, and they must be taken into account for radiation risk assessment. A method that accounts for these peculiarities in assessing solar cosmic ray radiation risk during interplanetary flights is given in the report. It is shown that the radiation risk of deterministic effects, defined as the probability of death caused by radiation sickness due to acute solar cosmic ray exposure, can be comparable to the risk of stochastic effects. Its value decreases strongly because of the fractionated mode of exposure during the orbital movement of the spacecraft. On the contrary, during an interplanetary flight, the radiation risk of deterministic effects increases significantly because of the residual component of the blood-forming-organs dose from previous solar proton events. These noted qualities of radiation responses must be taken into account when estimating radiation hazard in space. © 2004 COSPAR. Published by Elsevier Ltd. All rights reserved.

  19. Test of the FLRW Metric and Curvature with Strong Lens Time Delays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, Kai; Li, Zhengxiang; Wang, Guo-Jian

    We present a new model-independent strategy for testing the Friedmann–Lemaître–Robertson–Walker (FLRW) metric and constraining cosmic curvature, based on future time-delay measurements of strongly lensed quasar-elliptical galaxy systems from the Large Synoptic Survey Telescope and supernova observations from the Dark Energy Survey. The test only relies on geometric optics. It is independent of the energy contents of the universe and the validity of the Einstein equation on cosmological scales. The study comprises two levels: testing the FLRW metric through the distance sum rule (DSR) and determining/constraining cosmic curvature. We propose an effective and efficient (redshift) evolution model for performing the former test, which allows us to concretely specify the violation criterion for the FLRW DSR. If the FLRW metric is consistent with the observations, then on the second level the cosmic curvature parameter will be constrained to ∼0.057 or ∼0.041 (1σ), depending on the availability of high-redshift supernovae, which is much more stringent than current model-independent techniques. We also show that the bias in the time-delay method might be well controlled, leading to robust results. The proposed method is a new independent tool for both testing the fundamental assumptions of homogeneity and isotropy in cosmology and for determining cosmic curvature. It is complementary to cosmic microwave background plus baryon acoustic oscillation analyses, which normally assume a cosmological model with dark energy domination in the late-time universe.

  20. How the Mind of a Censor Works: The Psychology of Censorship.

    ERIC Educational Resources Information Center

    Fine, Sara

    1996-01-01

    Explores censorship and examines it as a human dynamic. Discusses the authoritarian personality, the need to control, traditionalism and the need to belong to a group, the influence of family, denial, and authoritarian women. Describes the importance of listening to "the Censor" in order to encourage dialogue and how to use effective…

  1. Teachers Making Decisions When We Know the Censors Are Watching.

    ERIC Educational Resources Information Center

    Napier, Minta

    Attempts to suppress and even censor various texts used by English teachers often are led by members of fundamentalist Christian groups. These activists charge educators with depreciating Christian moral values and instigating a religion of "secular humanism" in the schools. Various examples of recent legal cases show how prominent the…

  2. "Tropic of Cancer" and the Censors: A Case Study and Bibliographic Guide to the Literature.

    ERIC Educational Resources Information Center

    Kincaid, Larry; Koger, Grove

    1997-01-01

    Traces the history of Henry Miller's novel "Tropic of Cancer"--censored in England and America for being too obscene--from its inception in 1932 to its vindication by the United States judicial system 30 years later. Also includes an annotated bibliography of related literature. (AEF)

  3. NHEXAS PHASE I ARIZONA STUDY--STANDARD OPERATING PROCEDURE FOR TREATMENT OF CENSORED DATA (IIT-A-4.0)

    EPA Science Inventory

    The purpose of this SOP is to describe the procedures undertaken to treat censored data which are below detection limits. This SOP uses data that have been properly coded and certified with appropriate QA/QC procedures by the University of Arizona NHEXAS and Battelle Laboratorie...

  4. White Racism/Black Signs: Censorship and Images of Race Relations.

    ERIC Educational Resources Information Center

    Patton, Cindy

    1995-01-01

    Discusses the simultaneous establishment of legal rights to censor film and proscriptions on particular racial representations. Describes several changes in the Hays Code that demonstrate a change in the censor's theory of the image. Suggests that these changes substituted the censorship of race-related images with a new prohibition on racial…

  5. Obscenity, Profanity and the High School Press.

    ERIC Educational Resources Information Center

    Hansen, Kent A.

    1979-01-01

    School officials cannot censor or punish profanity and vulgarity in student publications without a showing that such action is essential for the maintenance of order and discipline or protects the rights of others or that the censored material satisfies the legal tests of obscenity. Available from Willamette University College of Law, Salem, OR…

  6. A Monte Carlo study of Weibull reliability analysis for space shuttle main engine components

    NASA Technical Reports Server (NTRS)

    Abernethy, K.

    1986-01-01

    The incorporation of a number of additional capabilities into an existing Weibull analysis computer program and the results of a Monte Carlo computer simulation study to evaluate the usefulness of the Weibull methods, using samples with a very small number of failures and extensive censoring, are discussed. Since the censoring mechanism inherent in the Space Shuttle Main Engine (SSME) data is hard to analyze, it was decided to use a random censoring model, generating censoring times from a uniform probability distribution. Some of the statistical techniques and computer programs that are used in the SSME Weibull analysis are described. The previously documented methods were supplemented by adding computer calculations of approximate confidence intervals (using iterative methods) for several parameters of interest. These calculations are based on a likelihood ratio statistic which is asymptotically a chi-squared statistic with one degree of freedom. The assumptions built into the computer simulations, the simulation program, and the techniques used in it are also described. Simulation results are tabulated for various combinations of Weibull shape parameters and the numbers of failures in the samples.
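
    The random-censoring model described is straightforward to reproduce. The sketch below draws Weibull failure times, censors them with uniform censoring times, and fits a Weibull model with lifelines; the parameter values are illustrative, not SSME values.

```python
# Monte Carlo draw under a uniform random-censoring model, followed by
# a Weibull fit. Shape/scale/sample-size values are illustrative only.
import numpy as np
from lifelines import WeibullFitter

rng = np.random.default_rng(42)
shape, scale, n = 1.8, 100.0, 25
failure = scale * rng.weibull(shape, size=n)   # latent failure times
censor = rng.uniform(0, 150.0, size=n)         # uniform censoring times

observed = failure <= censor
time = np.minimum(failure, censor)

wf = WeibullFitter().fit(time, event_observed=observed)
print(wf.lambda_, wf.rho_)                     # fitted scale and shape
```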

  7. A flexible cure rate model with dependent censoring and a known cure threshold.

    PubMed

    Bernhardt, Paul W

    2016-11-10

    We propose a flexible cure rate model that accommodates different censoring distributions for the cured and uncured groups and also allows for some individuals to be observed as cured when their survival time exceeds a known threshold. We model the survival times for the uncured group using an accelerated failure time model with errors distributed according to the seminonparametric distribution, potentially truncated at a known threshold. We suggest a straightforward extension of the usual expectation-maximization algorithm approach for obtaining estimates in cure rate models to accommodate the cure threshold and dependent censoring. We additionally suggest a likelihood ratio test for testing for the presence of dependent censoring in the proposed cure rate model. We show through numerical studies that our model has desirable properties and leads to approximately unbiased parameter estimates in a variety of scenarios. To demonstrate how our method performs in practice, we analyze data from a bone marrow transplantation study and a liver transplant study. Copyright © 2016 John Wiley & Sons, Ltd.

  8. MODELING LEFT-TRUNCATED AND RIGHT-CENSORED SURVIVAL DATA WITH LONGITUDINAL COVARIATES

    PubMed Central

    Su, Yu-Ru; Wang, Jane-Ling

    2018-01-01

    There is a surge in medical follow-up studies that include longitudinal covariates in the modeling of survival data. So far, the focus has been largely on right-censored survival data. We consider survival data that are subject to both left truncation and right censoring. Left truncation is well known to produce a biased sample. The sampling bias issue has been resolved in the literature for the case which involves baseline or time-varying covariates that are observable. The problem remains open, however, for the important case where longitudinal covariates are present in survival models. A joint likelihood approach has been shown in the literature to provide an effective way to overcome those difficulties for right-censored data, but this approach faces substantial additional challenges in the presence of left truncation. Here we thus propose an alternative likelihood to overcome these difficulties and show that the regression coefficient in the survival component can be estimated unbiasedly and efficiently. Issues about the bias for the longitudinal component are discussed. The new approach is illustrated numerically through simulations and data from a multi-center AIDS cohort study. PMID:29479122

  9. A simple linear regression method for quantitative trait loci linkage analysis with censored observations.

    PubMed

    Anderson, Carl A; McRae, Allan F; Visscher, Peter M

    2006-07-01

    Standard quantitative trait loci (QTL) mapping techniques commonly assume that the trait is both fully observed and normally distributed. When considering survival or age-at-onset traits these assumptions are often incorrect. Methods have been developed to map QTL for survival traits; however, they are both computationally intensive and not available in standard genome analysis software packages. We propose a grouped linear regression method for the analysis of continuous survival data. Using simulation we compare this method to both the Cox and Weibull proportional hazards models and a standard linear regression method that ignores censoring. The grouped linear regression method is of equivalent power to both the Cox and Weibull proportional hazards methods and is significantly better than the standard linear regression method when censored observations are present. The method is also robust to the proportion of censored individuals and the underlying distribution of the trait. On the basis of linear regression methodology, the grouped linear regression model is computationally simple and fast and can be implemented readily in freely available statistical software.

  10. Cosmic rays and the electric field of thunderclouds: Evidence for acceleration of particles (runaway electrons)

    NASA Astrophysics Data System (ADS)

    Khaerdinov, N. S.; Lidvansky, A. S.; Petkov, V. B.

    2005-07-01

    We present the data on correlations of the intensity of the soft component of cosmic rays with the local electric field of the near-earth atmosphere during thunderstorm periods at the Baksan Valley (North Caucasus, 1700 m a.s.l.). The large-area array for studying the extensive air showers of cosmic rays is used as a particle detector. An electric field meter of the 'electric mill' type (rain-protected) is mounted on the roof of the building in the center of this array. The data were obtained in the summer seasons of 2000-2002. We observe strong enhancements of the soft component intensity before some lightning strokes. At the same time, the analysis of the regression curve 'intensity versus field' discovers a bump at the field sign that is opposite to the field sign corresponding to acceleration of electrons. It is interpreted as a signature of runaway electrons from the region of the strong field (with opposite sign) overhead.

  11. Generation of mesoscale magnetic fields and the dynamics of Cosmic Ray acceleration

    NASA Astrophysics Data System (ADS)

    Diamond, P. H.; Malkov, M. A.

    The problem of the cosmic ray origin is discussed in connection with their acceleration in supernova remnant shocks. The diffusive shock acceleration mechanism is reviewed and its potential to accelerate particles to the maximum energy of (presumably) galactic cosmic rays (10^18 eV) is considered. It is argued that to reach such energies, a strong magnetic field at scales larger than the particle gyroradius must be created as a result of the acceleration process itself. One specific mechanism suggested here is based on the generation of Alfven waves at the gyroradius scale with a subsequent transfer to longer scales via interaction with strong acoustic turbulence in the shock precursor. The acoustic turbulence, in turn, may be generated by Drury instability or by parametric instability of the Alfven waves. The generation mechanism is modulational instability of CR-generated Alfven wave packets induced, in turn, by scattering off acoustic fluctuations in the shock precursor which are generated by Drury instability.

  12. Cox model with interval-censored covariate in cohort studies.

    PubMed

    Ahn, Soohyun; Lim, Johan; Paik, Myunghee Cho; Sacco, Ralph L; Elkind, Mitchell S

    2018-05-18

    In cohort studies the outcome is often time to a particular event, and subjects are followed at regular intervals. Periodic visits may also monitor a secondary irreversible event influencing the event of primary interest, and a significant proportion of subjects develop the secondary event over the period of follow-up. The status of the secondary event serves as a time-varying covariate, but is recorded only at the times of the scheduled visits, generating incomplete time-varying covariates. While information on a typical time-varying covariate is missing for the entire follow-up period except the visit times, the status of the secondary event is unavailable only between visits where the status has changed, and is thus interval-censored. One may view the interval-censored covariate of the secondary event status as a missing time-varying covariate, yet the missingness is partial since partial information is provided throughout the follow-up period. The current practice of using the latest observed status produces biased estimators, and the existing missing-covariate techniques cannot accommodate the special feature of missingness due to interval censoring. To handle interval-censored covariates in the Cox proportional hazards model, we propose an available-data estimator and a doubly robust-type estimator, as well as the maximum likelihood estimator via the EM algorithm, and present their asymptotic properties. We also present practical approaches that are valid. We demonstrate the proposed methods using our motivating example from the Northern Manhattan Study. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Fast genomic predictions via Bayesian G-BLUP and multilocus models of threshold traits including censored Gaussian data.

    PubMed

    Kärkkäinen, Hanni P; Sillanpää, Mikko J

    2013-09-04

    Because of the increased availability of genome-wide sets of molecular markers along with reduced cost of genotyping large samples of individuals, genomic estimated breeding values have become an essential resource in plant and animal breeding. Bayesian methods for breeding value estimation have proven to be accurate and efficient; however, the ever-increasing data sets are placing heavy demands on the parameter estimation algorithms. Although a commendable number of fast estimation algorithms are available for Bayesian models of continuous Gaussian traits, there is a shortage of corresponding models for discrete or censored phenotypes. In this work, we consider a threshold approach for binary, ordinal, and censored Gaussian observations in Bayesian multilocus association models and Bayesian genomic best linear unbiased prediction and present a high-speed generalized expectation maximization algorithm for parameter estimation under these models. We demonstrate our method with simulated and real data. Our example analyses suggest that the use of the extra information present in an ordered categorical or censored Gaussian data set, instead of dichotomizing the data into case-control observations, increases the accuracy of genomic breeding values predicted by Bayesian multilocus association models or by Bayesian genomic best linear unbiased prediction. Furthermore, the example analyses indicate that the correct threshold model is more accurate than the directly used Gaussian model with censored Gaussian data, while with binary or ordinal data the superiority of the threshold model could not be confirmed.

  14. Fast Genomic Predictions via Bayesian G-BLUP and Multilocus Models of Threshold Traits Including Censored Gaussian Data

    PubMed Central

    Kärkkäinen, Hanni P.; Sillanpää, Mikko J.

    2013-01-01

    Because of the increased availability of genome-wide sets of molecular markers along with reduced cost of genotyping large samples of individuals, genomic estimated breeding values have become an essential resource in plant and animal breeding. Bayesian methods for breeding value estimation have proven to be accurate and efficient; however, the ever-increasing data sets are placing heavy demands on the parameter estimation algorithms. Although a commendable number of fast estimation algorithms are available for Bayesian models of continuous Gaussian traits, there is a shortage of corresponding models for discrete or censored phenotypes. In this work, we consider a threshold approach for binary, ordinal, and censored Gaussian observations in Bayesian multilocus association models and Bayesian genomic best linear unbiased prediction and present a high-speed generalized expectation maximization algorithm for parameter estimation under these models. We demonstrate our method with simulated and real data. Our example analyses suggest that the use of the extra information present in an ordered categorical or censored Gaussian data set, instead of dichotomizing the data into case-control observations, increases the accuracy of genomic breeding values predicted by Bayesian multilocus association models or by Bayesian genomic best linear unbiased prediction. Furthermore, the example analyses indicate that the correct threshold model is more accurate than the directly used Gaussian model with censored Gaussian data, while with binary or ordinal data the superiority of the threshold model could not be confirmed. PMID:23821618

  15. Techniques for estimating health care costs with censored data: an overview for the health services researcher

    PubMed Central

    Wijeysundera, Harindra C; Wang, Xuesong; Tomlinson, George; Ko, Dennis T; Krahn, Murray D

    2012-01-01

    Objective The aim of this study was to review statistical techniques for estimating the mean population cost using health care cost data that, because of the inability to achieve complete follow-up until death, are right censored. The target audience is health service researchers without an advanced statistical background. Methods Data were sourced from longitudinal heart failure costs from Ontario, Canada, and administrative databases were used for estimating costs. The dataset consisted of 43,888 patients, with follow-up periods ranging from 1 to 1538 days (mean 576 days). The study was designed so that mean health care costs over 1080 days of follow-up were calculated using naïve estimators such as full-sample and uncensored case estimators. Reweighted estimators – specifically, the inverse probability weighted estimator – were calculated, as was phase-based costing. Costs were adjusted to 2008 Canadian dollars using the Bank of Canada consumer price index (http://www.bankofcanada.ca/en/cpi.html). Results Over the restricted follow-up of 1080 days, 32% of patients were censored. The full-sample estimator was found to underestimate mean cost ($30,420) compared with the reweighted estimators ($36,490). The phase-based costing estimate of $37,237 was similar to that of the simple reweighted estimator. Conclusion The authors recommend against the use of full-sample or uncensored case estimators when censored data are present. In the presence of heavy censoring, phase-based costing is an attractive alternative approach. PMID:22719214
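
    The simple reweighted estimator referred to above weights each complete case by the inverse of the estimated probability of remaining uncensored at the death time, with that probability taken from a Kaplan-Meier fit to the censoring distribution. A minimal sketch with illustrative argument names:

```python
# Inverse probability weighted mean cost: complete cases are weighted
# by 1/K(T_i), where K is the Kaplan-Meier censoring survival curve.
# Argument names are illustrative; inputs are equal-length arrays.
import numpy as np
from lifelines import KaplanMeierFitter

def ipw_mean_cost(time, died, cost):
    kmf = KaplanMeierFitter()
    kmf.fit(time, event_observed=1 - died)     # KM of censoring times
    K = kmf.survival_function_at_times(time).to_numpy()
    K = np.clip(K, 1e-8, None)                 # guard against zeros
    # Censored subjects get weight 0; complete cases are reweighted.
    return np.mean(died * cost / K)
```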

  16. Bivariate Left-Censored Bayesian Model for Predicting Exposure: Preliminary Analysis of Worker Exposure during the Deepwater Horizon Oil Spill.

    PubMed

    Groth, Caroline; Banerjee, Sudipto; Ramachandran, Gurumurthy; Stenzel, Mark R; Sandler, Dale P; Blair, Aaron; Engel, Lawrence S; Kwok, Richard K; Stewart, Patricia A

    2017-01-01

    In April 2010, the Deepwater Horizon oil rig caught fire and exploded, releasing almost 5 million barrels of oil into the Gulf of Mexico over the ensuing 3 months. Thousands of oil spill workers participated in the spill response and clean-up efforts. The GuLF STUDY being conducted by the National Institute of Environmental Health Sciences is an epidemiological study to investigate potential adverse health effects among these oil spill clean-up workers. Many volatile chemicals were released from the oil into the air, including total hydrocarbons (THC), which is a composite of the volatile components of oil including benzene, toluene, ethylbenzene, xylene, and hexane (BTEXH). Our goal is to estimate exposure levels to these toxic chemicals for groups of oil spill workers in the study (hereafter called exposure groups, EGs) with likely comparable exposure distributions. A large number of air measurements were collected, but many EGs are characterized by datasets with a large percentage of censored measurements (below the analytic methods' limits of detection) and/or a limited number of measurements. We use THC, for which there was less censoring, to develop predictive linear models for specific BTEXH air exposures with higher degrees of censoring. We present a novel Bayesian hierarchical linear model that allows us to predict, for different EGs simultaneously, exposure levels of a second chemical while accounting for censoring in both THC and the chemical of interest. We illustrate the methodology by estimating exposure levels for several EGs on the Development Driller III, a rig vessel charged with drilling one of the relief wells. The model provided credible estimates in this example for geometric means, arithmetic means, variances, correlations, and regression coefficients for each group. This approach should be considered when estimating exposures in situations when multiple chemicals are correlated and have varying degrees of censoring. © The Author 2017. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
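
    A heavily simplified, non-Bayesian sketch of the core censoring device (the likelihood contribution of a nondetect is the normal probability of falling below the limit of detection) for a single exposure group with censoring in the response only; the paper's model is hierarchical and handles censoring in both THC and the predicted chemical. Names are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def fit_censored_regression(x, y, lod, y_censored):
        """MLE for log(BTEX) = b0 + b1*log(THC) + N(0, sigma^2), with the
        response left-censored at the limit of detection (lod).

        y holds the log-LOD value wherever y_censored is True.
        """
        def nll(params):
            b0, b1, log_sigma = params
            sigma = np.exp(log_sigma)
            mu = b0 + b1 * x
            # observed points contribute the normal density,
            # censored points the probability of falling below the LOD
            ll_obs = norm.logpdf(y, mu, sigma)
            ll_cens = norm.logcdf((lod - mu) / sigma)
            return -np.sum(np.where(y_censored, ll_cens, ll_obs))

        res = minimize(nll, x0=np.array([0.0, 1.0, 0.0]), method="Nelder-Mead")
        b0, b1, log_sigma = res.x
        return b0, b1, np.exp(log_sigma)
    ```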

  17. Review and evaluation of performance measures for survival prediction models in external validation settings.

    PubMed

    Rahman, M Shafiqur; Ambler, Gareth; Choodari-Oskooei, Babak; Omar, Rumana Z

    2017-04-18

    When developing a prediction model for survival data it is essential to validate its performance in external validation settings using appropriate performance measures. Although a number of such measures have been proposed, there is only limited guidance regarding their use in the context of model validation. This paper reviewed and evaluated a wide range of performance measures to provide some guidelines for their use in practice. An extensive simulation study based on two clinical datasets was conducted to investigate the performance of the measures in external validation settings. Measures were selected from categories that assess the overall performance, discrimination and calibration of a survival prediction model. Some of these have been modified to allow their use with validation data, and a case study is provided to describe how these measures can be estimated in practice. The measures were evaluated with respect to their robustness to censoring and ease of interpretation. All measures are implemented, or are straightforward to implement, in statistical software. Most of the performance measures were reasonably robust to moderate levels of censoring. One exception was Harrell's concordance measure, which tended to increase as censoring increased. We recommend using Uno's concordance measure to quantify concordance when there are moderate levels of censoring. Alternatively, Gönen and Heller's measure could be considered, especially if censoring is very high, but we suggest that the prediction model be re-calibrated first. We also recommend that Royston's D be routinely reported to assess discrimination, since it has an appealing interpretation. The calibration slope is useful for both internal and external validation settings, and we recommend reporting it routinely. Our recommendation is to use any of the predictive accuracy measures and to provide the corresponding predictive accuracy curves. In addition, we recommend investigating the characteristics of the validation data, such as the level of censoring and the distribution of the prognostic index derived in the validation setting, before choosing the performance measures.
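
    For instance, Harrell's concordance measure discussed above can be computed with the lifelines package (an assumption that it is available); Uno's IPCW-weighted version is provided elsewhere, e.g. as concordance_index_ipcw in scikit-survival. The toy data below are illustrative.

    ```python
    import numpy as np
    from lifelines.utils import concordance_index

    # toy validation data: times, event indicators, and a model's prognostic index
    times = np.array([5.0, 8.0, 3.0, 12.0, 7.0])
    events = np.array([1, 0, 1, 1, 0])          # 0 = censored
    risk_scores = np.array([2.1, 0.4, 3.0, 0.2, 1.0])

    # Harrell's C counts concordant usable pairs; higher predicted values must
    # mean longer survival, so the risk score is negated.
    c_harrell = concordance_index(times, -risk_scores, event_observed=events)
    print(f"Harrell's C = {c_harrell:.2f}")
    ```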

  18. Censoring: a new approach for detection limits in total-reflection X-ray fluorescence

    NASA Astrophysics Data System (ADS)

    Pajek, M.; Kubala-Kukuś, A.; Braziewicz, J.

    2004-08-01

    It is shown that the detection limits in total-reflection X-ray fluorescence (TXRF), which restrict quantification of very low concentrations of trace elements in samples, can be accounted for using the statistical concept of censoring. We demonstrate that incomplete TXRF measurements containing so-called "nondetects", i.e. non-measured concentrations falling below the detection limits and represented by the estimated detection limit values, can be viewed as left random-censored data, which can be further analyzed using the Kaplan-Meier (KM) method correcting for nondetects. Within this approach, which uses the Kaplan-Meier product-limit estimator to obtain the cumulative distribution function corrected for the nondetects, the mean value and median of the detection-limit-censored concentrations can be estimated in a non-parametric way. Monte Carlo simulations show that the Kaplan-Meier approach yields highly accurate estimates for the mean and median concentrations, agreeing to within a few percent with the simulated, uncensored data. This means that the uncertainties of the KM-estimated mean and median are in fact limited only by the number of samples studied, not by the correction procedure for nondetects itself. On the other hand, it is observed that, in cases where the concentration of a given element is not measured in all samples, simple approaches to estimating a mean concentration from the data yield erroneous, systematically biased results. The discussed left random-censoring approach was applied to TXRF detection-limit-censored concentration measurements of trace elements in biomedical samples. We emphasize that the Kaplan-Meier approach allows one to estimate mean concentrations lying substantially below the mean level of the detection limits. Consequently, this approach offers a way to lower the effective detection limits of the TXRF method, which is of prime interest for the investigation of metallic impurities on silicon wafers.
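
    A minimal sketch of the flipping trick that turns left-censored nondetects into right-censored survival data so that a standard Kaplan-Meier fit applies; the lifelines package is assumed available, and the helper is illustrative, not the authors' code.

    ```python
    import numpy as np
    from lifelines import KaplanMeierFitter
    from lifelines.utils import restricted_mean_survival_time

    def km_mean_left_censored(conc, detected):
        """Kaplan-Meier mean of left-censored concentrations ("nondetects").

        conc     : measured concentration, or the detection limit for nondetects
        detected : boolean, False where the value is a nondetect (< detection limit)
        """
        conc = np.asarray(conc, dtype=float)
        flip = conc.max() + 1.0   # any constant larger than all values
        t = flip - conc           # left-censoring becomes right-censoring
        kmf = KaplanMeierFitter()
        kmf.fit(t, event_observed=detected)
        # KM mean on the flipped scale (area under the survival curve),
        # then transformed back to the concentration scale
        mean_flipped = restricted_mean_survival_time(kmf, t=t.max())
        return flip - mean_flipped
    ```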

  19. Feature selection through validation and un-censoring of endovascular repair survival data for predicting the risk of re-intervention.

    PubMed

    Attallah, Omneya; Karthikesalingam, Alan; Holt, Peter J E; Thompson, Matthew M; Sayers, Rob; Bown, Matthew J; Choke, Eddie C; Ma, Xianghong

    2017-08-03

    The feature selection (FS) process is essential in medicine, as it reduces the effort and time physicians spend measuring unnecessary features. Choosing useful variables is difficult in the presence of censoring, the characteristic unique to survival analysis. Most survival FS methods depend on Cox's proportional hazards model; machine learning techniques (MLT) are preferred but not commonly used because of censoring. Techniques that have been proposed to adopt MLT for FS with survival data cannot cope with high levels of censoring. The authors' previous publications proposed a technique to deal with highly censored data and used existing FS techniques to reduce the dataset dimension. In this paper, however, a new FS technique is proposed and combined with feature transformation and the proposed uncensoring approaches to select a reduced set of features and produce a stable predictive model. Specifically, an FS technique based on an artificial neural network (ANN) is proposed to deal with highly censored Endovascular Aortic Repair (EVAR) survival data. EVAR datasets were collected from 2004 to 2010 at two vascular centers in order to produce a final stable model; they contain almost 91% censored patients. The proposed approach used a wrapper FS method with an ANN to select a reduced subset of features that predict the risk of EVAR re-intervention after 5 years for patients from two different centers located in the United Kingdom, allowing it to be potentially applied to cross-center predictions. The proposed model is compared with two popular FS techniques, the Akaike and Bayesian information criteria (AIC, BIC), used with Cox's model. The final model outperforms the other methods in distinguishing the high- and low-risk groups; both its concordance index and estimated AUC are better than those of Cox's model based on the AIC, BIC, Lasso, and SCAD approaches. These models have p-values lower than 0.05, meaning that patients in different risk groups can be separated significantly and those who would need re-intervention can be correctly predicted. The proposed approach will save the time and effort physicians spend collecting unnecessary variables. The final reduced model was able to predict the long-term risk of aortic complications after EVAR. This predictive model can help clinicians decide patients' future observation plans.
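
    The generic wrapper idea (score candidate feature subsets by the cross-validated performance of the ANN itself) can be sketched with scikit-learn as below; this omits the paper's feature transformation and uncensoring steps and uses toy data, so all names and settings are illustrative.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # toy stand-in for (uncensored) re-intervention labels and candidate features
    X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                               random_state=0)

    ann = make_pipeline(StandardScaler(),
                        MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                                      random_state=0))
    # greedy forward wrapper: add the feature that most improves cross-validated AUC
    selector = SequentialFeatureSelector(ann, n_features_to_select=5,
                                         direction="forward", scoring="roc_auc",
                                         cv=3)
    selector.fit(X, y)
    print("selected feature indices:", selector.get_support(indices=True))
    ```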

  20. Accounting for Cosmic Variance in Studies of Gravitationally Lensed High-redshift Galaxies in the Hubble Frontier Field Clusters

    NASA Astrophysics Data System (ADS)

    Robertson, Brant E.; Ellis, Richard S.; Dunlop, James S.; McLure, Ross J.; Stark, Dan P.; McLeod, Derek

    2014-12-01

    Strong gravitational lensing provides a powerful means for studying faint galaxies in the distant universe. By magnifying the apparent brightness of background sources, massive clusters enable the detection of galaxies fainter than the usual sensitivity limit for blank fields. However, this gain in effective sensitivity comes at the cost of a reduced survey volume and, in this Letter, we demonstrate that there is an associated increase in the cosmic variance uncertainty. As an example, we show that the cosmic variance uncertainty of the high-redshift population viewed through the Hubble Space Telescope Frontier Field cluster Abell 2744 increases from ~35% at redshift z ~ 7 to >~ 65% at z ~ 10. Previous studies of high-redshift galaxies identified in the Frontier Fields have underestimated the cosmic variance uncertainty that will affect the ultimate constraints on both the faint-end slope of the high-redshift luminosity function and the cosmic star formation rate density, key goals of the Frontier Field program.

  1. The Structure of the Local Universe and the Coldness of the Cosmic Flow

    NASA Astrophysics Data System (ADS)

    van de Weygaert, R.; Hoffman, Y.

    Unlike the substantial coherent bulk motion in which our local patch of the Cosmos is participating, the amplitude of the random motions around this large scale flow seems to be surprisingly low. Attempts to invoke global explanations to account for this coldness of the local cosmic velocity field have not yet been successful. Here we propose a different view on this cosmic dilemma, stressing the repercussions of our cosmic neighbourhood embodying a rather uncharacteristic region of the Cosmos. Suspended between two huge mass concentrations, the Great Attractor region and the Perseus-Pisces chain, we find ourselves in a region of relatively low density yet with a very strong tidal shear. By means of constrained realizations of our local Universe, based on Wiener-filtered reconstructions inferred from the Mark III catalogue of galaxy peculiar velocities, we show that indeed this configuration may induce locally cold regions. Hence, the coldness of the local flow may be a cosmic variance effect.

  2. Enhancing the Spectral Hardening of Cosmic TeV Photons by Mixing with Axionlike Particles in the Magnetized Cosmic Web.

    PubMed

    Montanino, Daniele; Vazza, Franco; Mirizzi, Alessandro; Viel, Matteo

    2017-09-08

    Large-scale extragalactic magnetic fields may induce conversions between very-high-energy photons and axionlike particles (ALPs), thereby shielding the photons from absorption on the extragalactic background light. However, in the simplified "cell" models used so far to represent extragalactic magnetic fields, this mechanism would be strongly suppressed by current astrophysical bounds. Here we consider a recent model of extragalactic magnetic fields obtained from large-scale cosmological simulations. Such simulated magnetic fields would be strongly enhanced in the filaments of matter. As a result, photon-ALP conversions would produce a significant spectral hardening for cosmic TeV photons. This effect could be probed with the upcoming Cherenkov Telescope Array. Such a detection would give a unique chance to perform tomography of the magnetized cosmic web with ALPs.

  3. A cosmic book. [of physics of early universe

    NASA Technical Reports Server (NTRS)

    Peebles, P. J. E.; Silk, Joseph

    1988-01-01

    A system of assigning odds to the basic elements of cosmological theories is proposed in order to evaluate the strengths and weaknesses of the theories. A figure of merit for the theories is obtained by counting and weighing the plausibility of each of the basic elements that is not substantially supported by observation or mature fundamental theory. The magnetized string model is found to be the most probable. In order of decreasing probability, the ranking for the rest of the models is: (1) the magnetized string model with no exotic matter and the baryon adiabatic model; (2) the hot dark matter model and the model of cosmic string loops; (3) the canonical cold dark matter model, the cosmic string loops model with hot dark matter, and the baryonic isocurvature model; and (4) the cosmic string loops model with no exotic matter.

  4. Constraining heavy dark matter with cosmic-ray antiprotons

    NASA Astrophysics Data System (ADS)

    Cuoco, Alessandro; Heisig, Jan; Korsmeier, Michael; Krämer, Michael

    2018-04-01

    Cosmic-ray observations provide a powerful probe of dark matter annihilation in the Galaxy. In this paper we derive constraints on heavy dark matter from the recent precise AMS-02 antiproton data. We consider all possible annihilation channels into pairs of standard model particles. Furthermore, we interpret our results in the context of minimal dark matter, including higgsino, wino and quintuplet dark matter. We compare the cosmic-ray antiproton limits to limits from γ-ray observations of dwarf spheroidal galaxies and to limits from γ-ray and γ-line observations towards the Galactic center. While the latter limits are highly dependent on the dark matter density distribution and only exclude a thermal wino for cuspy profiles, the cosmic-ray limits are more robust, strongly disfavoring the thermal wino dark matter scenario even for a conservative estimate of systematic uncertainties.

  5. Responding Intelligently when Would-Be Censors Charge: "That Book Can Make Them...!"

    ERIC Educational Resources Information Center

    Martinson, David L.

    2007-01-01

    School administrators and teachers need to recognize that most persons--including would-be censors of school-related media communications--simply do not understand the complexities germane to measuring the impact of the mass media and the specific messages transmitted to broader audiences via a variety of media channels. In particular, what most…

  6. Statistical inferences with jointly type-II censored samples from two Pareto distributions

    NASA Astrophysics Data System (ADS)

    Abu-Zinadah, Hanaa H.

    2017-08-01

    In several industries the product comes from more than one production line, which calls for comparative life tests. Such tests require sampling from the different production lines, giving rise to a joint censoring scheme. In this article we consider the lifetime Pareto distribution under a joint type-II censoring scheme. The maximum likelihood estimators (MLEs) and the corresponding approximate confidence intervals, as well as bootstrap confidence intervals, of the model parameters are obtained. Bayesian point estimates and credible intervals of the model parameters are also presented. A lifetime data set is analyzed for illustrative purposes, and Monte Carlo simulation studies are presented to assess the performance of the proposed method.
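
    For a single sample, the type-II censored Pareto likelihood has a closed form: with the r smallest failures x_(1) <= ... <= x_(r) out of n items on test, L is proportional to the product of f(x_(i)) times S(x_(r))^(n-r), which gives alpha_hat = r / [sum_i log(x_(i)/x_m) + (n-r) log(x_(r)/x_m)]. The sketch below implements this single-sample case with a known scale x_m; the paper's joint two-sample scheme is more involved, and all names are illustrative.

    ```python
    import numpy as np

    def pareto_mle_type2(x, n, xm):
        """Closed-form MLE of the Pareto shape alpha under type-II censoring.

        x  : the r smallest observed lifetimes, out of n items on test
        n  : total number of items placed on test
        xm : known Pareto scale (minimum possible lifetime)
        """
        x = np.sort(np.asarray(x, dtype=float))
        r = len(x)
        # observed failures plus the contribution of the n-r survivors,
        # all still running at the r-th failure time x[-1]
        denom = np.sum(np.log(x / xm)) + (n - r) * np.log(x[-1] / xm)
        return r / denom

    # example: 8 failures observed out of 12 items, scale xm = 1.0
    alpha_hat = pareto_mle_type2([1.1, 1.3, 1.4, 1.9, 2.2, 2.8, 3.5, 4.1],
                                 n=12, xm=1.0)
    print(f"alpha_hat = {alpha_hat:.3f}")
    ```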

  7. The AD775 cosmic event revisited: the Sun is to blame

    NASA Astrophysics Data System (ADS)

    Usoskin, I. G.; Kromer, B.; Ludlow, F.; Beer, J.; Friedrich, M.; Kovaltsov, G. A.; Solanki, S. K.; Wacker, L.

    2013-04-01

    Aims: Miyake et al. (2012, Nature, 486, 240, henceforth M12) recently reported, based on 14C data, an extreme cosmic event in about AD775. Using a simple model, M12 claimed that the event was too strong to be caused by a solar flare within the standard theory. This implied a new paradigm of either an impossibly strong solar flare or a very strong cosmic ray event of unknown origin that occurred around AD775. However, as we show, the strength of the event was significantly overestimated by M12. Several subsequent works have attempted to find a possible exotic source for such an event, including a giant cometary impact upon the Sun or a gamma-ray burst, but they are all based on incorrect estimates by M12. We revisit this event with analysis of new datasets and consistent theoretical modelling. Methods: We verified the experimental result for the AD775 cosmic ray event using independent datasets including 10Be series and newly measured 14C annual data. We surveyed available historical chronicles for astronomical observations for the period around the AD770s to identify potential sightings of aurorae borealis and supernovae. We interpreted the 14C measurements using an appropriate carbon cycle model. Results: We show that: (1) The reality of the AD775 event is confirmed by new measurements of 14C in German oak; (2) by using an inappropriate carbon cycle model, M12 strongly overestimated the event's strength; (3) the revised magnitude of the event (the global 14C production Q = (1.1-1.5) × 10^8 atoms/cm^2) is consistent with different independent datasets (14C, 10Be, 36Cl) and can be associated with a strong, but not inexplicably strong, solar energetic particle event (or a sequence of events), and provides the first definite evidence for an event of this magnitude (the fluence above 30 MeV was about 4.5 × 10^10 cm^-2) in multiple datasets; (4) this interpretation is in agreement with increased auroral activity identified in historical chronicles. Conclusions: The results point to the likely solar origin of the event, which is now identified as the greatest solar event on a multi-millennial time scale, placing a strong observational constraint on the theory of explosive energy releases on the Sun and cool stars.

  8. Space-based Cosmic Ray Astrophysics

    NASA Astrophysics Data System (ADS)

    Seo, Eun-Suk

    2016-03-01

    The bulk of cosmic ray data has been obtained with great success by balloon-borne instruments, particularly with NASA's long duration flights over Antarctica. More recently, PAMELA on a Russian satellite and AMS-02 on the International Space Station (ISS) started providing exciting measurements of particles and anti-particles with unprecedented precision up to TeV energies. In order to address open questions in cosmic ray astrophysics, future missions require spaceflight exposures for rare species, such as isotopes, ultra-heavy elements, and high (the "knee" and above) energies. Isotopic composition measurements up to about 10 GeV/nucleon, which are critical for understanding interstellar propagation and the origin of the elements, are still to be accomplished. The cosmic ray composition in the knee (PeV) region holds a key to understanding the origin of cosmic rays. Just last year, the JAXA-led CALET ISS mission and the Chinese DAMPE satellite were launched. NASA's ISS-CREAM completed its final verification at GSFC and was delivered to KSC to await launch on SpaceX. In addition, a EUSO-like mission for ultrahigh energy cosmic rays and an HNX-like mission for ultraheavy nuclei could accomplish a vision for a cosmic ray observatory in space. Strong support of NASA's Explorer Program category of payloads would be needed for completion of these missions over the next decade.

  9. Analysis of competition performance in dressage and show jumping of Dutch Warmblood horses.

    PubMed

    Rovere, G; Ducro, B J; van Arendonk, J A M; Norberg, E; Madsen, P

    2016-12-01

    Most Warmblood horse studbooks aim to improve performance in dressage and show jumping. The Dutch Royal Warmblood Studbook (KWPN) includes the highest score achieved in competition by a horse to evaluate its genetic ability for performance. However, the records collected during competition are associated with some aspects that might affect the quality of the genetic evaluation based on these records. These aspects include the influence of the rider, censoring, and preselection of the data. The aim of this study was to quantify the impact of the rider effect, censoring and preselection on the genetic analysis of competition data for dressage and show jumping in the KWPN. Different models including a rider effect were evaluated. To assess the impact of censoring, genetic parameters were estimated in data sets that differed in the degree of censoring. The effect of preselection on variance components was analysed by defining a binary trait (sport-status) depending on whether the horse has a competition record or not. This trait was included in a bivariate model with the competition trait, using all horses registered by KWPN since 1984. Results showed that performance in competition for dressage and show jumping is a heritable trait (h² ~ 0.11-0.13) and that it is important to account for the effect of the rider in the genetic analysis. Censoring had a small effect on the genetic parameters for the highest performance achieved by the horse. A moderate heritability obtained for sport-status indicates that preselection has a genetic basis, but the effect on genetic parameters was relatively small. © 2016 Blackwell Verlag GmbH.

  10. Analysis of elemental concentration censored distributions in breast malignant and breast benign neoplasm tissues

    NASA Astrophysics Data System (ADS)

    Kubala-Kukuś, A.; Banaś, D.; Braziewicz, J.; Góźdź, S.; Majewska, U.; Pajek, M.

    2007-07-01

    The total reflection X-ray fluorescence method was applied to study trace element concentrations in human breast malignant and breast benign neoplasm tissues taken from women who were patients of the Holycross Cancer Centre in Kielce (Poland). These investigations were mainly focused on the development of new possibilities for cancer diagnosis and therapy monitoring. This systematic comparative study was based on a relatively large population (~100 samples), namely 26 samples of breast malignant and 68 samples of breast benign neoplasm tissue. Concentrations, ranging from a few ppb to 0.1%, were determined for thirteen elements (from P to Pb). The results were carefully analysed to investigate the concentration distributions of trace elements in the studied samples. The measurements of trace element concentrations by total reflection X-ray fluorescence were limited, however, by the detection limit of the method. It was observed that for more than 50% of the elements determined, the concentrations were not measured in all samples. These incomplete measurements were treated within the statistical concept of left random censoring, and the Kaplan-Meier estimator was used to estimate the mean and median of the censored concentration distributions. For comparison of concentrations in two populations, the log-rank test was applied, which allows comparison of censored total reflection X-ray fluorescence data. The statistically significant differences found are discussed in more detail. It is noted that the described data analysis procedures should become a standard tool for analyzing censored trace element concentrations measured by X-ray fluorescence methods.
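
    A minimal sketch of the log-rank comparison on left-censored concentrations via the usual flipping construction (subtracting from a constant turns nondetects into right-censored observations); the lifelines package is assumed available, and names are illustrative.

    ```python
    import numpy as np
    from lifelines.statistics import logrank_test

    def logrank_left_censored(conc_a, det_a, conc_b, det_b):
        """Log-rank comparison of two left-censored concentration samples.

        conc_* : concentrations (detection limit substituted for nondetects)
        det_*  : boolean, False for nondetects
        """
        conc_a, conc_b = np.asarray(conc_a, float), np.asarray(conc_b, float)
        flip = max(conc_a.max(), conc_b.max()) + 1.0
        # flipping turns left-censored concentrations into right-censored "times"
        res = logrank_test(flip - conc_a, flip - conc_b,
                           event_observed_A=det_a, event_observed_B=det_b)
        return res.p_value
    ```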

  11. DAMPE prototype and its beam test results at CERN

    NASA Astrophysics Data System (ADS)

    Wu, Jian; Hu, Yiming; Chang, Jin

    The first Chinese high-energy cosmic particle detector (DAMPE) aims to detect electrons and gamma rays in the range between 5 GeV and 10 TeV in space. A prototype of this detector was made and tested using both cosmic muons and a test beam at CERN. The results show good energy and spatial resolution as well as strong electron/proton separation power. The detector structure is illustrated as well.

  12. Thunderstorm observations by air-shower radio antenna arrays

    NASA Astrophysics Data System (ADS)

    Apel, W. D.; Arteaga, J. C.; Bähren, L.; Bekk, K.; Bertaina, M.; Biermann, P. L.; Blümer, J.; Bozdog, H.; Brancus, I. M.; Buchholz, P.; Buitink, S.; Cantoni, E.; Chiavassa, A.; Daumiller, K.; de Souza, V.; di Pierro, F.; Doll, P.; Ender, M.; Engel, R.; Falcke, H.; Finger, M.; Fuhrmann, D.; Gemmeke, H.; Grupen, C.; Haungs, A.; Heck, D.; Hörandel, J. R.; Horneffer, A.; Huber, D.; Huege, T.; Isar, P. G.; Kampert, K.-H.; Kang, D.; Krömer, O.; Kuijpers, J.; Link, K.; Łuczak, P.; Ludwig, M.; Mathes, H. J.; Melissas, M.; Morello, C.; Nehls, S.; Oehlschläger, J.; Palmieri, N.; Pierog, T.; Rautenberg, J.; Rebel, H.; Roth, M.; Rühle, C.; Saftoiu, A.; Schieler, H.; Schmidt, A.; Schröder, F. G.; Sima, O.; Toma, G.; Trinchero, G. C.; Weindl, A.; Wochele, J.; Wommer, M.; Zabierowski, J.; Zensus, J. A.

    2011-10-01

    Relativistic charged particles present in extensive air showers (EAS) lead to a coherent emission of radio pulses, which are measured to identify the shower-initiating high-energy cosmic rays. Especially during thunderstorms, there are additional strong electric fields in the atmosphere, which can lead to further multiplication and acceleration of the charged particles and thus influence the form and strength of the radio emission. For a reliable energy reconstruction of the primary cosmic ray by means of the measured radio signal, it is very important to understand how electric fields affect the radio emission. In addition, lightning strikes are a prominent source of broadband radio emissions that are visible over very long distances. On the one hand, this causes difficulties in the detection of the much lower signal of the air shower. On the other hand, the recorded signals can be used to study features of lightning development. The detection of cosmic rays via the radio emission and the influence of strong electric fields on this detection technique are investigated with the LOPES experiment in Karlsruhe, Germany. The important question of whether lightning is initiated by the high electron density present at the maximum of a high-energy cosmic-ray air shower was also investigated, but could not be answered by LOPES. Nevertheless, these investigations demonstrate the capabilities of EAS radio antenna arrays for lightning studies. We report on studies of LOPES-measured radio signals of air showers recorded during thunderstorms and give a short outlook on new measurements dedicated to searching for correlations between lightning and cosmic rays.

  13. International scientific cooperation during the 1930s. Bruno Rossi and the development of the status of cosmic rays into a branch of physics.

    PubMed

    Bonolis, Luisa

    2014-07-01

    During the 1920s and 1930s, Italian physicists established strong relationships with scientists from other European countries and the United States. The career of Bruno Rossi, a leading personality in the study of cosmic rays and an Italian pioneer of this field of research, provides a prominent example of this kind of international cooperation. Physics underwent major changes during these turbulent years, and the traditional internationalism of physics assumed a more institutionalized character. Against this backdrop, Rossi's early work was crucial in transforming the study of cosmic rays into a branch of modern physics. His friendly relationships with eminent scientists--notably Enrico Fermi, Walther Bothe, Werner Heisenberg, Hans Bethe, and Homi Bhabha--were instrumental both for the exchange of knowledge about experimental practices and theoretical discussions, and for attracting the attention of physicists such as Arthur Compton, Louis Leprince-Ringuet, Pierre Auger and Patrick Blackett to the problem of cosmic rays. Relying on material from different archives in Europe and the United States, this case study aims to provide a glimpse of the intersection between national and international dimensions during the 1930s, at a time when the study of cosmic rays was still very much in its infancy, strongly interlaced with nuclear physics, and full of uncertain, contradictory, and puzzling results. Nevertheless, as a source of high-energy particles it became a proving ground for testing the validity of the laws of quantum electrodynamics, and made a fundamental contribution to the origins of particle physics.

  14. Censoring Data for Resistance Factor Calculations in Load and Resistance Factor Design: A Preliminary Study

    Treesearch

    James W. Evans; David W. Green

    2007-01-01

    Reliability estimates for the resistance distribution of wood product properties may be made from test data where all specimens are broken (full data sets) or by using data sets where information is obtained only from the weaker pieces in the distribution (censored data). Whereas considerable information exists on property estimation from full data sets, much less...

  15. Estimating length of avian incubation and nestling stages in afrotropical forest birds from interval-censored nest records

    USGS Publications Warehouse

    Stanley, T.R.; Newmark, W.D.

    2010-01-01

    In the East Usambara Mountains in northeast Tanzania, research on the effects of forest fragmentation and disturbance on nest survival in understory birds resulted in the accumulation of 1,002 nest records between 2003 and 2008 for 8 poorly studied species. Because information on the length of the incubation and nestling stages in these species is nonexistent or sparse, our objectives in this study were (1) to estimate the length of the incubation and nestling stage and (2) to compute nest survival using these estimates in combination with calculated daily survival probability. Because our data were interval censored, we developed and applied two new statistical methods to estimate stage length. In the 8 species studied, the incubation stage lasted 9.6-21.8 days and the nestling stage 13.9-21.2 days. Combining these results with estimates of daily survival probability, we found that nest survival ranged from 6.0% to 12.5%. We conclude that our methodology for estimating stage lengths from interval-censored nest records is a reasonable and practical approach in the presence of interval-censored data. © 2010 The American Ornithologists' Union.
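
    The authors' two estimators are not reproduced here, but the general interval-censoring idea can be sketched as a normal-model MLE in which each nest contributes the probability that its stage length falls inside the interval bracketed by the visit dates; all names and the normal assumption are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def fit_stage_length(lo, hi):
        """MLE of a Normal(mu, sigma) stage length from interval bounds.

        lo, hi : per-nest lower/upper bounds on the stage length (days),
                 derived from visit dates that bracket each transition
        """
        lo, hi = np.asarray(lo, float), np.asarray(hi, float)

        def nll(params):
            mu, log_sigma = params
            sigma = np.exp(log_sigma)
            # each nest contributes P(lo < stage length < hi)
            p = norm.cdf(hi, mu, sigma) - norm.cdf(lo, mu, sigma)
            return -np.sum(np.log(np.clip(p, 1e-300, None)))

        start = np.array([np.mean((lo + hi) / 2.0), 0.0])
        res = minimize(nll, start, method="Nelder-Mead")
        mu, log_sigma = res.x
        return mu, np.exp(log_sigma)
    ```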

  16. Model Checking Techniques for Assessing Functional Form Specifications in Censored Linear Regression Models.

    PubMed

    León, Larry F; Cai, Tianxi

    2012-04-01

    In this paper we develop model checking techniques for assessing functional form specifications of covariates in censored linear regression models. These procedures are based on a censored data analog to taking cumulative sums of "robust" residuals over the space of the covariate under investigation. These cumulative sums are formed by integrating certain Kaplan-Meier estimators and may be viewed as "robust" censored data analogs to the processes considered by Lin, Wei & Ying (2002). The null distributions of these stochastic processes can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be generated by computer simulation. Each observed process can then be graphically compared with a few realizations from the Gaussian process. We also develop formal test statistics for numerical comparison. Such comparisons enable one to assess objectively whether an apparent trend seen in a residual plot reflects model misspecification or natural variation. We illustrate the methods with a well-known dataset. In addition, we examine the finite sample performance of the proposed test statistics in simulation experiments. In our simulation experiments, the proposed test statistics have good power for detecting misspecification while at the same time controlling the size of the test.

  17. Connecting blazars with ultrahigh-energy cosmic rays and astrophysical neutrinos

    NASA Astrophysics Data System (ADS)

    Resconi, E.; Coenders, S.; Padovani, P.; Giommi, P.; Caccianiga, L.

    2017-06-01

    We present a strong hint of a connection between high-energy γ-ray emitting blazars, very high energy neutrinos, and ultrahigh-energy cosmic rays. We first identify potential hadronic sources by filtering γ-ray emitters in spatial coincidence with the high-energy neutrinos detected by IceCube. The neutrino-filtered γ-ray emitters are then correlated with the ultrahigh-energy cosmic rays from the Pierre Auger Observatory and the Telescope Array by scanning in γ-ray flux (Fγ) and angular separation (θ) between sources and cosmic rays. A maximal excess of 80 cosmic rays (42.5 expected) is found at θ ≤ 10° from the neutrino-filtered γ-ray emitters selected from the second hard Fermi-LAT catalogue (2FHL) and for Fγ(>50 GeV) ≥ 1.8 × 10^-11 ph cm^-2 s^-1. The probability for this to happen is 2.4 × 10^-5, which translates to ~2.4 × 10^-3 after compensation for all the considered trials. No excess of cosmic rays is instead observed for the complement sample of γ-ray emitters (i.e., not in spatial connection with IceCube neutrinos). A likelihood ratio test comparing the connection between the neutrino-filtered and the complement source samples with the cosmic rays favours a connection between neutrino-filtered emitters and cosmic rays with a probability of ~1.8 × 10^-3 (2.9σ) after compensation for all the considered trials. The neutrino-filtered γ-ray sources that make up the cosmic ray excess are blazars of the high synchrotron peak type. More statistics is needed to further investigate these sources as candidate cosmic ray and neutrino emitters.

  18. Cosmic ray impact on extrasolar earth-like planets in close-in habitable zones.

    PubMed

    Griessmeier, J-M; Stadelmann, A; Motschmann, U; Belisheva, N K; Lammer, H; Biernat, H K

    2005-10-01

    Because of their different origins, cosmic rays can be subdivided into galactic cosmic rays and solar/stellar cosmic rays. The flux of cosmic rays to planetary surfaces is mainly determined by two planetary parameters: the atmospheric density and the strength of the internal magnetic moment. If a planet exhibits an extended magnetosphere, its surface will be protected from high-energy cosmic ray particles. We show that close-in extrasolar planets in the habitable zone of M stars are synchronously rotating with their host star because of the tidal interaction. For gravitationally locked planets the rotation period is equal to the orbital period, which is much longer than the rotation period expected for planets not subject to tidal locking. This results in a relatively small magnetic moment. We found that an Earth-like extrasolar planet, tidally locked in an orbit of 0.2 AU around an M star of 0.5 solar masses, has a rotation rate of 2% of that of the Earth. This results in a magnetic moment of less than 15% of the Earth's current magnetic moment. Therefore, close-in extrasolar planets seem not to be protected by extended Earth-like magnetospheres, and cosmic rays can reach almost the whole surface area of the upper atmosphere. Primary cosmic ray particles that interact with the atmosphere generate secondary energetic particles, a so-called cosmic ray shower. Some of the secondary particles can reach the surface of terrestrial planets when the surface pressure of the atmosphere is on the order of 1 bar or less. We propose that, depending on atmospheric pressure, biological systems on the surface of Earth-like extrasolar planets at close-in orbital distances can be strongly influenced by secondary cosmic rays.

  19. Steady state and dynamical structure of a cosmic-ray-modified termination shock

    NASA Technical Reports Server (NTRS)

    Donohue, D. J.; Zank, G. P.

    1993-01-01

    A hydrodynamic model is developed for the structure of a cosmic-ray-modified termination shock. The model is based on the two-fluid equations of diffusive shock acceleration (Drury and Volk, 1981). Both the steady state structure of the shock and its interaction with outer heliospheric disturbances are considered. Under the assumption that the solar wind is decelerated by diffusing interstellar cosmic rays, it is shown that the natural state of the termination shock is a gradual deceleration and compression, followed by a discontinuous jump to a downstream state which is dominated by the pressure contribution of the cosmic rays. A representative model is calculated for the steady state which incorporates both interstellar cosmic ray mediation and diffusively accelerated anomalous ions through a proposed thermal leakage mechanism. The interaction of large-scale disturbances with the equilibrium termination shock model is shown to result in some unusual downstream structure, including transmitted shocks and cosmic-ray-modified contact discontinuities. The structure observed may be connected to the 2-kHz outer heliospheric radio emission (Cairns et al., 1992a, b). The time-dependent simulations also demonstrate that interaction with solar wind compressible turbulence (e.g., traveling interplanetary shocks) could induce the termination shock to continually fluctuate between cosmic-ray-dominated and gas-dynamic states. This fluctuation may represent a partial explanation of the galactic cosmic ray modulation effect and illustrates that the Pioneer and Voyager satellites will encounter an evolving shock whose structure and dynamic properties are strongly influenced by the mediation of interstellar and anomalous cosmic rays.

  20. Steady state and dynamical structure of a cosmic-ray-modified termination shock

    NASA Astrophysics Data System (ADS)

    Donohue, D. J.; Zank, G. P.

    1993-11-01

    A hydrodynamic model is developed for the structure of a cosmic-ray-modified termination shock. The model is based on the two-fluid equations of diffusive shock acceleration (Drury and Volk, 1981). Both the steady state structure of the shock and its interaction with outer heliospheric disturbances are considered. Under the assumption that the solar wind is decelerated by diffusing interstellar cosmic rays, it is shown that the natural state of the termination shock is a gradual deceleration and compression, followed by a discontinuous jump to a downstream state which is dominated by the pressure contribution of the cosmic rays. A representative model is calculated for the steady state which incorporates both interstellar cosmic ray mediation and diffusively accelerated anomalous ions through a proposed thermal leakage mechanism. The interaction of large-scale disturbances with the equilibrium termination shock model is shown to result in some unusual downstream structure, including transmitted shocks and cosmic-ray-modified contact discontinuities. The structure observed may be connected to the 2-kHz outer heliospheric radio emission (Cairns et al., 1992a, b). The time-dependent simulations also demonstrate that interaction with solar wind compressible turbulence (e.g., traveling interplanetary shocks) could induce the termination shock to continually fluctuate between cosmic-ray-dominated and gas-dynamic states. This fluctuation may represent a partial explanation of the galactic cosmic ray modulation effect and illustrates that the Pioneer and Voyager satellites will encounter an evolving shock whose structure and dynamic properties are strongly influenced by the mediation of interstellar and anomalous cosmic rays.

  1. Cosmology with Strong-lensing Systems

    NASA Astrophysics Data System (ADS)

    Cao, Shuo; Biesiada, Marek; Gavazzi, Raphaël; Piórkowska, Aleksandra; Zhu, Zong-Hong

    2015-06-01

    In this paper, we assemble a catalog of 118 strong gravitational lensing systems from the Sloan Lens ACS Survey, BOSS emission-line lens survey, Lens Structure and Dynamics, and Strong Lensing Legacy Survey and use them to constrain the cosmic equation of state. In particular, we consider two cases of dark energy phenomenology: the XCDM model, where dark energy is modeled by a fluid with a constant equation-of-state parameter w, and the Chevallier-Polarski-Linder (CPL) parameterization, where w is allowed to evolve with redshift, w(z) = w_0 + w_1 z/(1 + z). We assume a spherically symmetric mass distribution in the lensing galaxies, but we relax the rigid assumption of the SIS model in favor of a more general power-law index γ, also allowing it to evolve with redshift, γ(z). Our results for the XCDM cosmology show agreement with values (concerning both the w and γ parameters) obtained by other authors. We go further and constrain the CPL parameters jointly with γ(z). The resulting confidence regions for the parameters are much better than those obtained with a similar method in the past. They also show a trend of being complementary to the Type Ia supernova data. Our analysis demonstrates that strong gravitational lensing systems can be used to probe cosmological parameters like the cosmic equation of state for dark energy. Moreover, they have the potential to judge whether the cosmic equation of state evolves with time or not.
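
    A small sketch of the quantities involved: the CPL equation of state, the corresponding dimensionless expansion rate for a flat universe, and the H0-independent distance ratio D_ls/D_s that the lensing observable constrains. Parameter values and names are illustrative.

    ```python
    import numpy as np
    from scipy.integrate import quad

    def w_cpl(z, w0, w1):
        """CPL dark-energy equation of state: w(z) = w0 + w1 * z / (1 + z)."""
        return w0 + w1 * z / (1.0 + z)

    def efunc(z, om, w0, w1):
        """Dimensionless H(z)/H0 for flat CPL; the dark-energy density scales as
        rho_DE(z)/rho_DE(0) = (1+z)^(3(1+w0+w1)) * exp(-3 w1 z/(1+z))."""
        de = (1 + z) ** (3 * (1 + w0 + w1)) * np.exp(-3 * w1 * z / (1 + z))
        return np.sqrt(om * (1 + z) ** 3 + (1 - om) * de)

    def comoving(z1, z2, om, w0, w1):
        """Dimensionless comoving distance between redshifts z1 < z2 (flat)."""
        return quad(lambda z: 1.0 / efunc(z, om, w0, w1), z1, z2)[0]

    # In a flat universe the lensing distance ratio D_ls/D_s reduces to a ratio
    # of comoving distances, so the Hubble constant drops out:
    ratio = (comoving(0.3, 1.0, om=0.3, w0=-1.0, w1=0.0)
             / comoving(0.0, 1.0, om=0.3, w0=-1.0, w1=0.0))
    print(f"D_ls/D_s = {ratio:.3f}")
    ```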

  2. The Pierre Auger Observatory scaler mode for the study of solar activity modulation of galactic cosmic rays

    NASA Astrophysics Data System (ADS)

    Pierre Auger Collaboration; Abreu, P.; Aglietta, M.; Ahn, E. J.; Allard, D.; Allekotte, I.; Allen, J.; Alvarez Castillo, J.; Alvarez-Muñiz, J.; Ambrosio, M.; Aminaei, A.; Anchordoqui, L.; Andringa, S.; Antičić, T.; Anzalone, A.; Aramo, C.; Arganda, E.; Arisaka, K.; Arqueros, F.; Asorey, H.; Assis, P.; Aublin, J.; Ave, M.; Avenier, M.; Avila, G.; Bäcker, T.; Badagnani, D.; Balzer, M.; Barber, K. B.; Barbosa, A. F.; Bardenet, R.; Barroso, S. L. C.; Baughman, B.; Beatty, J. J.; Becker, B. R.; Becker, K. H.; Bellétoile, A.; Bellido, J. A.; BenZvi, S.; Berat, C.; Bergmann, T.; Bertou, X.; Biermann, P. L.; Billoir, P.; Blanco, F.; Blanco, M.; Bleve, C.; Blümer, H.; Boháčová, M.; Boncioli, D.; Bonifazi, C.; Bonino, R.; Borodai, N.; Brack, J.; Brogueira, P.; Brown, W. C.; Bruijn, R.; Buchholz, P.; Bueno, A.; Burton, R. E.; Busca, N. G.; Caballero-Mora, K. S.; Caramete, L.; Caruso, R.; Castellina, A.; Catalano, O.; Cataldi, G.; Cazon, L.; Cester, R.; Chauvin, J.; Chiavassa, A.; Chinellato, J. A.; Chou, A.; Chudoba, J.; Clay, R. W.; Colombo, E.; Coluccia, M. R.; Conceição, R.; Contreras, F.; Cook, H.; Cooper, M. J.; Coppens, J.; Cordier, A.; Cotti, U.; Coutu, S.; Covault, C. E.; Creusot, A.; Criss, A.; Cronin, J.; Curutiu, A.; Dagoret-Campagne, S.; Dallier, R.; Dasso, S.; Daumiller, K.; Dawson, B. R.; de Almeida, R. M.; De Domenico, M.; De Donato, C.; de Jong, S. J.; De La Vega, G.; de Mello Junior, W. J. M.; de Mello Neto, J. R. T.; De Mitri, I.; de Souza, V.; de Vries, K. D.; Decerprit, G.; del Peral, L.; Deligny, O.; Della Selva, A.; Dembinski, H.; Denkiewicz, A.; Di Giulio, C.; Diaz, J. C.; Díaz Castro, M. L.; Diep, P. N.; Dobrigkeit, C.; D'Olivo, J. C.; Dong, P. N.; Dorofeev, A.; dos Anjos, J. C.; Dova, M. T.; D'Urso, D.; Dutan, I.; Ebr, J.; Engel, R.; Erdmann, M.; Escobar, C. O.; Etchegoyen, A.; Facal San Luis, P.; Falcke, H.; Farrar, G.; Fauth, A. C.; Fazzini, N.; Ferguson, A. P.; Ferrero, A.; Fick, B.; Filevich, A.; Filipčič, A.; Fleck, I.; Fliescher, S.; Fracchiolla, C. E.; Fraenkel, E. D.; Fröhlich, U.; Fuchs, B.; Fulgione, W.; Gamarra, R. F.; Gambetta, S.; García, B.; García Gámez, D.; Garcia-Pinto, D.; Garrido, X.; Gascon, A.; Gelmini, G.; Gemmeke, H.; Gesterling, K.; Ghia, P. L.; Giaccari, U.; Giller, M.; Glass, H.; Gold, M. S.; Golup, G.; Gomez Albarracin, F.; Gómez Berisso, M.; Gonçalves, P.; Gonzalez, D.; Gonzalez, J. G.; Gookin, B.; Góra, D.; Gorgi, A.; Gouffon, P.; Gozzini, S. R.; Grashorn, E.; Grebe, S.; Grigat, M.; Grillo, A. F.; Guardincerri, Y.; Guarino, F.; Guedes, G. P.; Hague, J. D.; Hansen, P.; Harari, D.; Harmsma, S.; Harton, J. L.; Haungs, A.; Hebbeker, T.; Heck, D.; Herve, A. E.; Hojvat, C.; Holmes, V. C.; Homola, P.; Hörandel, J. R.; Horneffer, A.; Hrabovský, M.; Huege, T.; Insolia, A.; Ionita, F.; Italiano, A.; Jiraskova, S.; Kadija, K.; Kaducak, M.; Kampert, K. H.; Karhan, P.; Karova, T.; Kasper, P.; Kégl, B.; Keilhauer, B.; Keivani, A.; Kelley, J. L.; Kemp, E.; Kieckhafer, R. M.; Klages, H. O.; Kleifges, M.; Kleinfeller, J.; Knapp, J.; Koang, D.-H.; Kotera, K.; Krohm, N.; Krömer, O.; Kruppke-Hansen, D.; Kuehn, F.; Kuempel, D.; Kulbartz, J. K.; Kunka, N.; La Rosa, G.; Lachaud, C.; Lautridou, P.; Leão, M. S. A. B.; Lebrun, D.; Lebrun, P.; Leigui de Oliveira, M. A.; Lemiere, A.; Letessier-Selvon, A.; Lhenry-Yvon, I.; Link, K.; López, R.; Lopez Agüera, A.; Louedec, K.; Lozano Bahilo, J.; Lucero, A.; Ludwig, M.; Lyberis, H.; Maccarone, M. C.; Macolino, C.; Maldera, S.; Mandat, D.; Mantsch, P.; Mariazzi, A. G.; Marin, V.; Maris, I. C.; Marquez Falcon, H. R.; Marsella, G.; Martello, D.; Martin, L.; Martínez Bravo, O.; Mathes, H. J.; Matthews, J.; Matthews, J. A. J.; Matthiae, G.; Maurizio, D.; Mazur, P. O.; Medina-Tanco, G.; Melissas, M.; Melo, D.; Menichetti, E.; Menshikov, A.; Meurer, C.; Mičanović, S.; Micheletti, M. I.; Miller, W.; Miramonti, L.; Mollerach, S.; Monasor, M.; Monnier Ragaigne, D.; Montanet, F.; Morales, B.; Morello, C.; Moreno, E.; Moreno, J. C.; Morris, C.; Mostafá, M.; Mueller, S.; Muller, M. A.; Müller, G.; Münchmeyer, M.; Mussa, R.; Navarra, G.; Navarro, J. L.; Navas, S.; Necesal, P.; Nellen, L.; Nhung, P. T.; Nierstenhoefer, N.; Nitz, D.; Nosek, D.; Nožka, L.; Nyklicek, M.; Oehlschläger, J.; Olinto, A.; Oliva, P.; Olmos-Gilbaja, V. M.; Ortiz, M.; Pacheco, N.; Pakk Selmi-Dei, D.; Palatka, M.; Pallotta, J.; Palmieri, N.; Parente, G.; Parizot, E.; Parra, A.; Parrisius, J.; Parsons, R. D.; Pastor, S.; Paul, T.; Pavlidou, V.; Payet, K.; Pech, M.; Pękala, J.; Pelayo, R.; Pepe, I. M.; Perrone, L.; Pesce, R.; Petermann, E.; Petrera, S.; Petrinca, P.; Petrolini, A.; Petrov, Y.; Petrovic, J.; Pfendner, C.; Phan, N.; Piegaia, R.; Pierog, T.; Pieroni, P.; Pimenta, M.; Pirronello, V.; Platino, M.; Ponce, V. H.; Pontz, M.; Privitera, P.; Prouza, M.; Quel, E. J.; Rautenberg, J.; Ravel, O.; Ravignani, D.; Revenu, B.; Ridky, J.; Riggi, S.; Risse, M.; Ristori, P.; Rivera, H.; Rivière, C.; Rizi, V.; Robledo, C.; Rodriguez, G.; Rodriguez Martino, J.; Rodriguez Rojo, J.; Rodriguez-Cabo, I.; Rodríguez-Frías, M. D.; Ros, G.; Rosado, J.; Rossler, T.; Roth, M.; Rouillé-d'Orfeuil, B.; Roulet, E.; Rovero, A. C.; Salamida, F.; Salazar, H.; Salina, G.; Sánchez, F.; Santander, M.; Santo, C. E.; Santos, E.; Santos, E. M.; Sarazin, F.; Sarkar, S.; Sato, R.; Scharf, N.; Scherini, V.; Schieler, H.; Schiffer, P.; Schmidt, A.; Schmidt, F.; Schmidt, T.; Scholten, O.; Schoorlemmer, H.; Schovancova, J.; Schovánek, P.; Schroeder, F.; Schulte, S.; Schüssler, F.; Schuster, D.; Sciutto, S. J.; Scuderi, M.; Segreto, A.; Semikoz, D.; Settimo, M.; Shadkam, A.; Shellard, R. C.; Sidelnik, I.; Sigl, G.; Śmiałkowski, A.; Šmída, R.; Snow, G. R.; Sommers, P.; Sorokin, J.; Spinka, H.; Squartini, R.; Stapleton, J.; Stasielak, J.; Stephan, M.; Strazzeri, E.; Stutz, A.; Suarez, F.; Suomijärvi, T.; Supanitsky, A. D.; Šuša, T.; Sutherland, M. S.; Swain, J.; Szadkowski, Z.; Tamashiro, A.; Tapia, A.; Tarutina, T.; Taşcǎu, O.; Tcaciuc, R.; Tcherniakhovski, D.; Tegolo, D.; Thao, N. T.; Thomas, D.; Tiffenberg, J.; Timmermans, C.; Tiwari, D. K.; Tkaczyk, W.; Todero Peixoto, C. J.; Tomé, B.; Tonachini, A.; Travnicek, P.; Tridapalli, D. B.; Tristram, G.; Trovato, E.; Tueros, M.; Ulrich, R.; Unger, M.; Urban, M.; Valdés Galicia, J. F.; Valiño, I.; Valore, L.; van den Berg, A. M.; Vargas Cárdenas, B.; Vázquez, J. R.; Vázquez, R. A.; Veberič, D.; Venters, T.; Verzi, V.; Videla, M.; Villaseñor, L.; Wahlberg, H.; Wahrlich, P.; Wainberg, O.; Warner, D.; Watson, A. A.; Weber, M.; Weidenhaupt, K.; Weindl, A.; Westerhoff, S.; Whelan, B. J.; Wieczorek, G.; Wiencke, L.; Wilczyńska, B.; Wilczyński, H.; Will, M.; Williams, C.; Winchen, T.; Winders, L.; Winnick, M. G.; Wommer, M.; Wundheiler, B.; Yamamoto, T.; Younk, P.; Yuan, G.; Yushkov, A.; Zamorano, B.; Zas, E.; Zavrtanik, D.; Zavrtanik, M.; Zaw, I.; Zepeda, A.; Ziolkowski, M.

    2011-01-01

    Since data-taking began in January 2004, the Pierre Auger Observatory has been recording the count rates of low energy secondary cosmic ray particles for the self-calibration of the ground detectors of its surface detector array. After correcting for atmospheric effects, modulations of galactic cosmic rays due to solar activity and transient events are observed. Temporal variations related with the activity of the heliosphere can be determined with high accuracy due to the high total count rates. In this study, the available data are presented together with an analysis focused on the observation of Forbush decreases, where a strong correlation with neutron monitor data is found.

  3. Diffuse flux of galactic neutrinos and gamma rays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carceller, J.M.; Masip, M., E-mail: jmcarcell@correo.ugr.es, E-mail: masip@ugr.es

    We calculate the fluxes of neutrinos and gamma rays from interactions of cosmic rays with interstellar matter in our galaxy. We use EPOS-LHC, SIBYLL and GHEISHA to parametrize the yield of these particles in proton, helium and iron collisions at kinetic energies between 1 and 10^8 GeV, and we correlate the cosmic ray density with the mean magnetic field strength in the disk and the halo of our galaxy. We find that at E > 1 PeV the fluxes depend very strongly on the cosmic-ray composition, whereas at 1–5 GeV the main source of uncertainty is the cosmic-ray spectrum out of the heliosphere. We show that the diffuse flux of galactic neutrinos becomes larger than the conventional atmospheric one at E > 1 PeV, but that at all IceCube energies it is 4 times smaller than the atmospheric flux from forward-charm decays.

  4. Opportunities in cosmic-ray physics and astrophysics

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The Board on Physics and Astronomy of the National Research Council established the Committee on Cosmic-Ray Physics to prepare a review of the field that addresses both experimental and theoretical aspects of the origin of cosmic radiation from outside the heliosphere. The following recommendations are made: NASA should provide the opportunity to measure cosmic-ray electrons, positrons, ultraheavy nuclei, isotopes, and antiparticles in space; NASA, the National Science Foundation (NSF), and the Department of Energy (DOE) should facilitate direct and indirect measurement of the elemental composition to as high an energy as possible, for which the support of long-duration ballooning and hybrid ground arrays will be needed; NSF and DOE should support the new Fly's Eye and provide for U.S. participation in the big projects on the horizon, which include giant arrays, ground-based gamma-ray astronomy, and neutrino telescopes; and NASA, NSF, and DOE should support a strong program of relevant theoretical investigations.

  5. Progress towards a measurement of the UHE cosmic ray electron flux using the CREST Instrument

    NASA Astrophysics Data System (ADS)

    Musser, Jim; Wakely, Scott; Coutu, Stephane; Geske, Matthew; Nutter, Scott; Tarle, Gregory; Park, Nahee; Schubnell, Michael; Gennaro, Joseph; Muller, Dietrich

    2012-07-01

    Electrons of energy beyond about 3 TeV have never been detected in the flux of cosmic rays at Earth, despite strong evidence of their presence in a number of supernova remnants (e.g., SN 1006). The detection of high energy electrons at Earth would be extremely significant, yielding information about the spatial distribution of nearby cosmic ray sources. With the Cosmic Ray Electron Synchrotron Telescope (CREST), our collaboration has adopted a novel approach to the detection of electrons of energies between 2 and 50 TeV, which results in a substantial increase in the acceptance and sensitivity of the apparatus relative to its physical size. The first LDB flight of the CREST detector took place in January 2012, with a float duration of approximately 10 days. In this paper we describe the flight performance of the instrument, and progress in the analysis of the data obtained in this flight.

  6. Weak cosmic censorship: as strong as ever.

    PubMed

    Hod, Shahar

    2008-03-28

    Spacetime singularities that arise in gravitational collapse are always hidden inside of black holes. This is the essence of the weak cosmic censorship conjecture. The hypothesis, put forward by Penrose 40 years ago, is still one of the most important open questions in general relativity. In this Letter, we reanalyze extreme situations which have been considered as counterexamples to the weak cosmic censorship conjecture. In particular, we consider the absorption of scalar particles with large angular momentum by a black hole. Ignoring back reaction effects may lead one to conclude that the incident wave may overspin the black hole, thereby exposing its inner singularity to distant observers. However, we show that when back reaction effects are properly taken into account, the stability of the black-hole event horizon is irrefutable. We therefore conclude that cosmic censorship is actually respected in this type of gedanken experiments.

  7. Assessing assay agreement estimation for multiple left-censored data: a multiple imputation approach.

    PubMed

    Lapidus, Nathanael; Chevret, Sylvie; Resche-Rigon, Matthieu

    2014-12-30

    Agreement between two assays is usually based on the concordance correlation coefficient (CCC), estimated from the means, standard deviations, and correlation coefficient of these assays. However, such data will often suffer from left-censoring because of lower limits of detection of these assays. To handle such data, we propose to extend a multiple imputation approach by chained equations (MICE) developed in a close setting of one left-censored assay. The performance of this two-step approach is compared with that of a previously published maximum likelihood estimation through a simulation study. Results show close estimates of the CCC by both methods, although the coverage is improved by our MICE proposal. An application to cytomegalovirus quantification data is provided. Copyright © 2014 John Wiley & Sons, Ltd.
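
    A deliberately simplified, single-equation imputation sketch of the idea (not the chained-equations MICE procedure described above): censored values are drawn from a normal distribution truncated above at the detection limit, the CCC is computed per completed dataset, and the estimates are averaged. The names and the crude use of observed-data moments are assumptions.

    ```python
    import numpy as np
    from scipy.stats import truncnorm

    def ccc(x, y):
        """Lin's concordance correlation coefficient."""
        sx, sy = x.std(), y.std()
        r = np.corrcoef(x, y)[0, 1]
        return 2 * r * sx * sy / (sx**2 + sy**2 + (x.mean() - y.mean()) ** 2)

    def ccc_multiple_imputation(x, y, lod, m=20, seed=0):
        """CCC between two assays when y is left-censored at its LOD.

        Censored y values (y < lod) are imputed from a normal distribution
        truncated above at the LOD, using the observed-data mean and SD as a
        crude stand-in for a fitted censored model.
        """
        rng = np.random.default_rng(seed)
        cens = y < lod
        mu, sd = y[~cens].mean(), y[~cens].std()
        a, b = -np.inf, (lod - mu) / sd   # standardized truncation bounds
        draws = []
        for _ in range(m):
            y_imp = y.copy()
            y_imp[cens] = truncnorm.rvs(a, b, loc=mu, scale=sd,
                                        size=cens.sum(), random_state=rng)
            draws.append(ccc(x, y_imp))
        return np.mean(draws)  # Rubin's rules would also pool the variances
    ```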

  8. Dental age assessment of adolescents and emerging adults in United Kingdom Caucasians using censored data for stage H of third molar roots.

    PubMed

    Boonpitaksathit, Teelana; Hunt, Nigel; Roberts, Graham J; Petrie, Aviva; Lucas, Victoria S

    2011-10-01

    The root of the third permanent molar is the only dental structure that continues development after completion of growth of the second permanent molar. It is claimed that the lack of a clearly defined end point for completion of growth of the third permanent molar means that this tooth cannot be used for dental age assessment. The aim of this study was to estimate the mean age of attainment of the four stages (E, F, G, and H) of root development of the third molar. The way in which the end point of completion of stage H can be identified is described. A total of 1223 dental panoramic tomographs (DPTs) available in the archives of the Eastman Dental Hospital, London, were used for this study. The ages of the subjects ranged from 12.6 to 24.9 years, with 63 per cent of the sample being female. Demirjian's tooth development stages (TDSs) for the first and second molars were applied to the third molars by a single examiner. For each of stages E, F, and G and for stage H censored data, the mean ages of the males and females were compared separately within each tooth morphology type using the two-sample t-test (P < 0.01). The same test was used to compare the mean ages of the upper and lower third molars on each side, separately for each gender. The mean age of attainment and the 99 per cent confidence interval (CI) for each TDS were calculated for each third molar. The final stage H data were appropriately censored to exclude data above the age of completion of root growth. The results showed that, for each gender, the age in years at which individuals attained each of the four TDSs was approximately normally distributed. The mean age for appropriately censored data was always lower than the corresponding mean age of the inappropriately censored data for stage H (male UR8 19.57, UL8 19.53, LL8 19.91, and LR8 20.02 and female UR8 20.08, UL8 20.13, LL8 20.78, and LR8 20.70). These inappropriately censored data overestimated the mean age for stage H. The appropriately censored data for the TDSs of the third molar may be used to estimate the age of adolescents and emerging adults assuming average growth and development and recent attainment of stage H.

  9. Sino-Japanese Relations: Cooperation, Competition, or Status Quo?

    DTIC Science & Technology

    2008-03-01

    prostitute and censors were concerned the film might reignite anti-Japanese sentiment.69 Regarding Prime Minister Abe’s potentially nationalistic visit...central government censored the movie “Memoirs of a Geisha” because the lead character, portrayed by a Chinese actress, could be construed as a...Thailand, Malaysia and Indonesia. Realizing the importance of the larger relationship, on September 1-3, 2007, Defense Minister Masahiko Komura met

  10. Stochastic Acceleration of Galactic Cosmic Rays by Compressible Plasma Fluctuations in Supernova Shells

    NASA Astrophysics Data System (ADS)

    Zhang, Ming

    2015-10-01

    A theory of two-stage acceleration of Galactic cosmic rays in supernova remnants is proposed. The first stage is accomplished by the supernova shock front, where a power-law spectrum is established up to a certain cutoff energy. It is followed by stochastic acceleration with compressible waves/turbulence in the downstream medium. With a broad ∝ k⁻² spectrum for the compressible plasma fluctuations, the rate of stochastic acceleration is constant over a wide range of particle momentum. In this case, the stochastic acceleration process extends the power-law spectrum cutoff energy of Galactic cosmic rays to the knee without changing the spectral slope. This situation happens as long as the rate of stochastic acceleration is faster than 1/5 of the adiabatic cooling rate. A steeper spectrum of compressible plasma fluctuations that concentrate their power in long wavelengths will accelerate cosmic rays to the knee with a small bump before its cutoff in the cosmic-ray energy spectrum. This theory does not require a strong amplification of the magnetic field in the upstream interstellar medium in order to accelerate cosmic rays to the knee energy.

  11. ACCOUNTING FOR COSMIC VARIANCE IN STUDIES OF GRAVITATIONALLY LENSED HIGH-REDSHIFT GALAXIES IN THE HUBBLE FRONTIER FIELD CLUSTERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, Brant E.; Stark, Dan P.; Ellis, Richard S.

    Strong gravitational lensing provides a powerful means for studying faint galaxies in the distant universe. By magnifying the apparent brightness of background sources, massive clusters enable the detection of galaxies fainter than the usual sensitivity limit for blank fields. However, this gain in effective sensitivity comes at the cost of a reduced survey volume and, in this Letter, we demonstrate that there is an associated increase in the cosmic variance uncertainty. As an example, we show that the cosmic variance uncertainty of the high-redshift population viewed through the Hubble Space Telescope Frontier Field cluster Abell 2744 increases from ∼35% at redshift z ∼ 7 to ≳65% at z ∼ 10. Previous studies of high-redshift galaxies identified in the Frontier Fields have underestimated the cosmic variance uncertainty that will affect the ultimate constraints on both the faint-end slope of the high-redshift luminosity function and the cosmic star formation rate density, key goals of the Frontier Field program.

  12. A search for X-ray polarization in cosmic X-ray sources. [binary X-ray sources and supernova remnants]

    NASA Technical Reports Server (NTRS)

    Hughes, J. P.; Long, K. S.; Novick, R.

    1983-01-01

    Fifteen strong X-ray sources were observed by the X-ray polarimeters on board the OSO-8 satellite from 1975 to 1978. The final results of this search for X-ray polarization in cosmic sources are presented in the form of upper limits for the ten sources which are discussed elsewhere. These limits in all cases are consistent with a thermal origin for the X-ray emission.

  13. The Complexities of Interstellar Dust and the Implications for the Small-scale Structure in the Cosmic Microwave Background

    NASA Astrophysics Data System (ADS)

    Verschuur, G. L.; Schmelz, J. T.

    2018-02-01

    A detailed comparison of the full range of PLANCK and Wilkinson Microwave Anisotropy Probe data for small (2° × 2°) areas of sky and the Cosmic Microwave Background Internal Linear Combination (ILC) maps reveals that the structure of foreground dust may be more complex than previously thought. If 857 and 353 GHz emission is dominated by galactic dust at a distance < few hundred light years, then it should not resemble the cosmological ILC structure originating at a distance ∼13 billion light years. In some areas of sky, however, we find strong morphological correlations, forcing us to consider the possibility that the foreground subtraction is not complete. Our data also show that there is no single answer for the question: “to what extent does dust contaminate the cosmologically important 143 GHz data?” In some directions, the contamination appears to be quite strong, but in others, it is less of an issue. This complexity needs to be taken into account in order to derive an accurate foreground mask in the quest to understand the Cosmic Microwave Background small-scale structure. We hope that a continued investigation of these data will lead to a definitive answer to the question above and, possibly, to new scientific insights on interstellar matter, the Cosmic Microwave Background, or both.

  14. AURORAL X-RAYS, COSMIC RAYS, AND RELATED PHENOMENA DURING THE STORM OF FEBRUARY 10-11, 1958

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winckler, J.R.; Peterson, L.; Hoffman, R.

    1959-06-01

    Balloon observations were made during the auroral storm of February 10-11, 1958, at Minneapolis. Strong x-ray bursts in two groups were detected. The groups appeared coincident with two large magnetic bays, with strong radio noise absorption, and with the passage across the zenith of a very large amount of auroral luminosity. From the x-ray intensity and measured energies, an electron current of 0.6 × 10⁶ electrons/cm²/sec was present. These electrons ionizing the upper D layer accounted for the increased cosmic noise absorption. The x-rays themselves carried 1000 times less energy than the electrons and could not provide sufficient ionization for the observed radio absorption. Visual auroral forms during this storm are reported to have lower borders at the 200 to 300 km level. There is thus a difficulty in bringing the electrons to the D layer without an accompanying visible aurora. A cosmic-ray decrease accompanied the storm and was observed to be from 4 to 6% at sea level, 21% in the balloon altitude ionization, and 15% in total energy influx at 55 deg geomagnetic latitude. Compared with the great intensity of the magnetic and auroral phenomena in this storm, the cosmic-ray modulation was not exceptionally large. (auth)

  15. Censored Glauber Dynamics for the Mean Field Ising Model

    NASA Astrophysics Data System (ADS)

    Ding, Jian; Lubetzky, Eyal; Peres, Yuval

    2009-11-01

    We study Glauber dynamics for the Ising model on the complete graph on n vertices, known as the Curie-Weiss Model. It is well known that at high temperature (β < 1) the mixing time is Θ(n log n), whereas at low temperature (β > 1) it is exp(Θ(n)). Recently, Levin, Luczak and Peres considered a censored version of this dynamics, which is restricted to non-negative magnetization. They proved that for fixed β > 1, the mixing-time of this model is Θ(n log n), analogous to the high-temperature regime of the original dynamics. Furthermore, they showed cutoff for the original dynamics for fixed β < 1. The question whether the censored dynamics also exhibits cutoff remained unsettled. In a companion paper, we extended the results of Levin et al. into a complete characterization of the mixing-time for the Curie-Weiss model. Namely, we found a scaling window of order 1/√n around the critical temperature β_c = 1, beyond which there is cutoff at high temperature. However, determining the behavior of the censored dynamics outside this critical window seemed significantly more challenging. In this work we answer the above question in the affirmative, and establish the cutoff point and its window for the censored dynamics beyond the critical window, thus completing its analogy to the original dynamics at high temperature. Namely, if β = 1 + δ for some δ > 0 with δ²n → ∞, then the mixing-time has order (n/δ) log(δ²n). The cutoff constant is (1/2 + [2(ζ²β/δ − 1)]⁻¹), where ζ is the unique positive root of g(x) = tanh(βx) − x, and the cutoff window has order n/δ.

  16. Accelerated failure time models for semi-competing risks data in the presence of complex censoring.

    PubMed

    Lee, Kyu Ha; Rondeau, Virginie; Haneuse, Sebastien

    2017-12-01

    Statistical analyses that investigate risk factors for Alzheimer's disease (AD) are often subject to a number of challenges. Some of these challenges arise due to practical considerations regarding data collection such that the observation of AD events is subject to complex censoring including left-truncation and either interval or right-censoring. Additional challenges arise due to the fact that study participants under investigation are often subject to competing forces, most notably death, that may not be independent of AD. Towards resolving the latter, researchers may choose to embed the study of AD within the "semi-competing risks" framework for which the recent statistical literature has seen a number of advances including for the so-called illness-death model. To the best of our knowledge, however, the semi-competing risks literature has not fully considered analyses in contexts with complex censoring, as in studies of AD. This is particularly the case when interest lies with the accelerated failure time (AFT) model, an alternative to the traditional multiplicative Cox model that places emphasis away from the hazard function. In this article, we outline a new Bayesian framework for estimation/inference of an AFT illness-death model for semi-competing risks data subject to complex censoring. An efficient computational algorithm that gives researchers the flexibility to adopt either a fully parametric or a semi-parametric model specification is developed and implemented. The proposed methods are motivated by and illustrated with an analysis of data from the Adult Changes in Thought study, an on-going community-based prospective study of incident AD in western Washington State. © 2017, The International Biometric Society.
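
    The AFT ingredient can be illustrated with an off-the-shelf parametric fit; the sketch below uses the lifelines package and handles right-censoring only, whereas the article's Bayesian illness-death machinery additionally accommodates left-truncation and interval-censoring (the data-generating numbers are invented):

    ```python
    import numpy as np
    import pandas as pd
    from lifelines import WeibullAFTFitter

    rng = np.random.default_rng(1)
    n = 500
    age = rng.normal(75, 6, n)
    # hypothetical AFT data-generating process: older age accelerates failure
    t = rng.weibull(1.5, n) * np.exp(5 - 0.04 * age)
    c = rng.uniform(0, 15, n)               # administrative censoring times
    df = pd.DataFrame({"time": np.minimum(t, c),
                       "event": (t <= c).astype(int),
                       "age": age})

    aft = WeibullAFTFitter()
    aft.fit(df, duration_col="time", event_col="event")
    aft.print_summary()                     # exp(coef) ~ time ratio per unit age
    ```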

  17. Methods for analysis of the occurrence of abscess in patients with pancreatitis.

    PubMed

    Roca-Antonio, J; Escudero, L E; Gener, J; Oller, B; Rodríguez, N; Muñoz, A

    1997-01-01

    Standard survival analysis methods are useful for data involving censored cases when cures do not generally occur. If the object is to study, for instance, the development of a complication in the progress of an infectious disease, some people may be cured before complications develop. In this article, we provide methods for the analysis of data when cures do occur. An example is a study of prognostic factors for pancreatic abscess in patients with pancreatitis, some of whom leave the risk set because the pancreatitis clears. We present methods for estimating the survival curves and comparing hazard functions for two objectives: (1) the occurrence of an abscess, irrespective of whether the patients are cured or not, and (2) the occurrence of an abscess for patients who, at that stage, have not been cured. We illustrate the applications of the methods using a sample of 50 patients with severe pancreatitis. To study the occurrence of an abscess, regardless of whether the patients are cured or not, we show that the appropriate strategy is to assign to the cured patients an infinite time to the appearance of an abscess. If the cured were considered censored at the moment the pancreatitis cleared, this would result in an overestimation of the hazard of presenting an abscess. On the other hand, if the objective is to compare the occurrence of abscess according to an exposure for patients who have not been cured, one needs to censor the cured patients at the time they are cured. For the analysis of survival data in the context of infectious diseases when cure is possible, it is important to use a censoring strategy that is pertinent to the specific aims of the study. Considering cures as censored at the time of cure is not always appropriate.
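
    Both censoring strategies can be mimicked directly with Kaplan-Meier fits; a minimal sketch, assuming the lifelines package and toy data:

    ```python
    import numpy as np
    import pandas as pd
    from lifelines import KaplanMeierFitter

    # toy data: time to abscess (event=1), cure (event=2), or plain censoring (0)
    df = pd.DataFrame({"time":  [3, 5, 6, 8, 10, 12, 15, 20],
                       "event": [1, 2, 1, 2,  0,  1,  2,  0]})

    # Objective (1): abscess regardless of cure -- give cured patients an
    # effectively infinite abscess time so they stay in the risk set forever.
    BIG = 1e6
    t1 = np.where(df["event"] == 2, BIG, df["time"])
    e1 = (df["event"] == 1).astype(int)

    # Objective (2): abscess among the not-yet-cured -- censor at time of cure.
    t2, e2 = df["time"], e1

    kmf1 = KaplanMeierFitter().fit(t1, e1, label="cures get infinite time")
    kmf2 = KaplanMeierFitter().fit(t2, e2, label="cures censored at cure time")
    ```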

  18. Statistical analysis of water-quality data containing multiple detection limits II: S-language software for nonparametric distribution modeling and hypothesis testing

    USGS Publications Warehouse

    Lee, L.; Helsel, D.

    2007-01-01

    Analysis of low concentrations of trace contaminants in environmental media often results in left-censored data that are below some limit of analytical precision. Interpretation of values becomes complicated when there are multiple detection limits in the data, perhaps as a result of changing analytical precision over time. Parametric and semi-parametric methods, such as maximum likelihood estimation and robust regression on order statistics, can be employed to model distributions of multiply censored data and provide estimates of summary statistics. However, these methods are based on assumptions about the underlying distribution of data. Nonparametric methods provide an alternative that does not require such assumptions. A standard nonparametric method for estimating summary statistics of multiply-censored data is the Kaplan-Meier (K-M) method. This method has seen widespread usage in the medical sciences within a general framework termed "survival analysis" where it is employed with right-censored time-to-failure data. However, K-M methods are equally valid for the left-censored data common in the geosciences. Our S-language software provides an analytical framework based on K-M methods that is tailored to the needs of the earth and environmental sciences community. This includes routines for the generation of empirical cumulative distribution functions, prediction or exceedance probabilities, and related confidence limits computation. Additionally, our software contains K-M-based routines for nonparametric hypothesis testing among an unlimited number of grouping variables. A primary characteristic of K-M methods is that they do not perform extrapolation and interpolation. Thus, these routines cannot be used to model statistics beyond the observed data range or when linear interpolation is desired. For such applications, the aforementioned parametric and semi-parametric methods must be used.
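
    A standard way to run right-censoring K-M machinery on left-censored concentrations is to flip the data about a constant exceeding the maximum; a minimal sketch in Python (the software described above is in S, and the data here are invented):

    ```python
    import numpy as np
    from lifelines import KaplanMeierFitter

    # toy concentrations; detected=False rows hold the detection limit instead
    conc = np.array([0.5, 0.8, 1.2, 0.1, 2.4, 0.3, 3.1, 0.1])
    detected = np.array([1, 1, 1, 0, 1, 0, 1, 0], dtype=bool)

    # flip about M > max(conc): left-censored values become right-censored
    M = conc.max() + 1.0
    kmf = KaplanMeierFitter().fit(M - conc, event_observed=detected)

    # flip back: P(C <= c) = S'(M - c); e.g. the median concentration is
    print("median concentration:", M - kmf.median_survival_time_)
    ```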

  19. Measuring agreement of multivariate discrete survival times using a modified weighted kappa coefficient.

    PubMed

    Guo, Ying; Manatunga, Amita K

    2009-03-01

    Assessing agreement is often of interest in clinical studies to evaluate the similarity of measurements produced by different raters or methods on the same subjects. We present a modified weighted kappa coefficient to measure agreement between bivariate discrete survival times. The proposed kappa coefficient accommodates censoring by redistributing the mass of censored observations within the grid where the unobserved events may potentially happen. A generalized modified weighted kappa is proposed for multivariate discrete survival times. We estimate the modified kappa coefficients nonparametrically through a multivariate survival function estimator. The asymptotic properties of the kappa estimators are established and the performance of the estimators are examined through simulation studies of bivariate and trivariate survival times. We illustrate the application of the modified kappa coefficient in the presence of censored observations with data from a prostate cancer study.

  20. An estimator of the survival function based on the semi-Markov model under dependent censorship.

    PubMed

    Lee, Seung-Yeoun; Tsai, Wei-Yann

    2005-06-01

    Lee and Wolfe (Biometrics vol. 54 pp. 1176-1178, 1998) proposed the two-stage sampling design for testing the assumption of independent censoring, which involves further follow-up of a subset of lost-to-follow-up censored subjects. They also proposed an adjusted estimator for the survivor function for a proportional hazards model under the dependent censoring model. In this paper, a new estimator for the survivor function is proposed for the semi-Markov model under the dependent censorship on the basis of the two-stage sampling data. The consistency and the asymptotic distribution of the proposed estimator are derived. The estimation procedure is illustrated with an example of a lung cancer clinical trial, and simulation results on the mean squared errors of the estimators under a proportional hazards model and two different nonproportional hazards models are reported.

  1. Political science. Reverse-engineering censorship in China: randomized experimentation and participant observation.

    PubMed

    King, Gary; Pan, Jennifer; Roberts, Margaret E

    2014-08-22

    Existing research on the extensive Chinese censorship organization uses observational methods with well-known limitations. We conducted the first large-scale experimental study of censorship by creating accounts on numerous social media sites, randomly submitting different texts, and observing from a worldwide network of computers which texts were censored and which were not. We also supplemented interviews with confidential sources by creating our own social media site, contracting with Chinese firms to install the same censoring technologies as existing sites, and--with their software, documentation, and even customer support--reverse-engineering how it all works. Our results offer rigorous support for the recent hypothesis that criticisms of the state, its leaders, and their policies are published, whereas posts about real-world events with collective action potential are censored. Copyright © 2014, American Association for the Advancement of Science.

  2. Cosmic Rays in Thunderstorms

    NASA Astrophysics Data System (ADS)

    Buitink, Stijn; Scholten, Olaf; van den Berg, Ad; Ebert, Ute

    2013-04-01

    Cosmic rays are protons and heavier nuclei that constantly bombard the Earth's atmosphere with energies spanning a vast range from 10⁹ to 10²¹ eV. At typical altitudes up to 10-20 km they initiate large particle cascades, called extensive air showers, that contain millions to billions of secondary particles depending on their initial energy. These particles include electrons, positrons, hadrons and muons, and are concentrated in a compact particle front that propagates at relativistic speed. In addition, the shower leaves behind a trail of lower energy electrons from ionization of air molecules. Under thunderstorm conditions these electrons contribute to the electrical and ionization processes in the cloud. When the local electric field is strong enough the secondary electrons can create relativistic electron run-away avalanches [1] or even non-relativistic avalanches. Cosmic rays could even trigger lightning inception. Conversely, strong electric fields also influence the development of the air shower [2]. Extensive air showers emit a short (tens of nanoseconds) radio pulse due to deflection of the shower particles in the Earth's magnetic field [3]. Antenna arrays, such as AERA, LOFAR and LOPES detect these pulses in a frequency window of roughly 10-100 MHz. These systems are also sensitive to the radiation from discharges associated to thunderstorms, and provide a means to study the interaction of cosmic ray air showers and the electrical processes in thunderstorms [4]. In this presentation we discuss the involved radiation mechanisms and present analyses of thunderstorm data from air shower arrays. [1] A. Gurevich et al., Phys. Lett. A 165, 463 (1992) [2] S. Buitink et al., Astropart. Phys. 33, 1 (2010) [3] H. Falcke et al., Nature 435, 313 (2005) [4] S. Buitink et al., Astron. & Astrophys. 467, 385 (2007)

  3. USSR Report, International Affairs

    DTIC Science & Technology

    1986-05-28

    examined on the material of four countries in Southeast Asia: Indonesia, Malaysia, Thailand and the Philippines. In his study, the author proceeded...television [as published] are guaranteed. There is no censorship." In other words, in the FRG there are no official censors, and in West German..."The Federal Republic does not need an official censor, for self-censorship—above all among the bosses of the mass media, i.e., television, radio

  4. Causal inference in survival analysis using pseudo-observations.

    PubMed

    Andersen, Per K; Syriopoulou, Elisavet; Parner, Erik T

    2017-07-30

    Causal inference for non-censored response variables, such as binary or quantitative outcomes, is often based on either (1) direct standardization ('G-formula') or (2) inverse probability of treatment assignment weights ('propensity score'). To do causal inference in survival analysis, one needs to address right-censoring, and often, special techniques are required for that purpose. We will show how censoring can be dealt with 'once and for all' by means of so-called pseudo-observations when doing causal inference in survival analysis. The pseudo-observations can be used as a replacement of the outcomes without censoring when applying 'standard' causal inference methods, such as (1) or (2) earlier. We study this idea for estimating the average causal effect of a binary treatment on the survival probability, the restricted mean lifetime, and the cumulative incidence in a competing risks situation. The methods will be illustrated in a small simulation study and via a study of patients with acute myeloid leukemia who received either myeloablative or non-myeloablative conditioning before allogeneic hematopoietic cell transplantation. We will estimate the average causal effect of the conditioning regime on outcomes such as the 3-year overall survival probability and the 3-year risk of chronic graft-versus-host disease. Copyright © 2017 John Wiley & Sons, Ltd.
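
    A minimal sketch of the jackknife construction behind pseudo-observations for the survival probability S(t0), assuming the lifelines package; the leave-one-out loop below is the O(n²) textbook version, not an optimized implementation:

    ```python
    import numpy as np
    from lifelines import KaplanMeierFitter

    def pseudo_observations(time, event, t0):
        """Jackknife pseudo-observations for S(t0): n*S - (n-1)*S_(-i)."""
        n = len(time)
        s_full = KaplanMeierFitter().fit(time, event).predict(t0)
        idx = np.arange(n)
        pseudo = np.empty(n)
        for i in range(n):
            m = idx != i
            s_loo = KaplanMeierFitter().fit(time[m], event[m]).predict(t0)
            pseudo[i] = n * s_full - (n - 1) * s_loo
        return pseudo

    # the pseudo-observations then replace the censored outcome in standard
    # regression (e.g. GEE with an identity or cloglog link)
    rng = np.random.default_rng(2)
    t, c = rng.exponential(5, 100), rng.exponential(8, 100)
    po = pseudo_observations(np.minimum(t, c), (t <= c).astype(int), t0=3.0)
    print(po.mean())   # roughly the K-M estimate of S(3)
    ```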

  5. Statistical Methods for Generalized Linear Models with Covariates Subject to Detection Limits.

    PubMed

    Bernhardt, Paul W; Wang, Huixia J; Zhang, Daowen

    2015-05-01

    Censored observations are a common occurrence in biomedical data sets. Although a large amount of research has been devoted to estimation and inference for data with censored responses, very little research has focused on proper statistical procedures when predictors are censored. In this paper, we consider statistical methods for dealing with multiple predictors subject to detection limits within the context of generalized linear models. We investigate and adapt several conventional methods and develop a new multiple imputation approach for analyzing data sets with predictors censored due to detection limits. We establish the consistency and asymptotic normality of the proposed multiple imputation estimator and suggest a computationally simple and consistent variance estimator. We also demonstrate that the conditional mean imputation method often leads to inconsistent estimates in generalized linear models, while several other methods are either computationally intensive or lead to parameter estimates that are biased or more variable compared to the proposed multiple imputation estimator. In an extensive simulation study, we assess the bias and variability of different approaches within the context of a logistic regression model and compare variance estimation methods for the proposed multiple imputation estimator. Lastly, we apply several methods to analyze the data set from a recently-conducted GenIMS study.
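
    A simplified sketch of the multiple-imputation idea for a left-censored covariate in a logistic regression, pooled by Rubin's rules; the paper's estimator is more careful (a proper implementation would also draw the imputation-model parameters from their distribution rather than fixing crude plug-in values):

    ```python
    import numpy as np
    import statsmodels.api as sm
    from scipy.stats import truncnorm

    rng = np.random.default_rng(3)
    n, lod = 400, -0.5
    x = rng.normal(0, 1, n)                     # true covariate
    y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + x))))
    observed = x >= lod                         # below-LOD values are censored
    x_obs = np.where(observed, x, np.nan)

    m, betas, variances = 25, [], []
    mu, sd = x_obs[observed].mean(), x_obs[observed].std()  # crude plug-in fit
    for _ in range(m):
        xi = x_obs.copy()
        # draw censored values from a normal truncated above at the LOD
        xi[~observed] = truncnorm.rvs(-np.inf, (lod - mu) / sd, loc=mu, scale=sd,
                                      size=(~observed).sum(), random_state=rng)
        res = sm.Logit(y, sm.add_constant(xi)).fit(disp=0)
        betas.append(res.params[1]); variances.append(res.bse[1] ** 2)

    beta = np.mean(betas)                       # Rubin's rules pooling
    var = np.mean(variances) + (1 + 1 / m) * np.var(betas, ddof=1)
    print(beta, np.sqrt(var))
    ```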

  6. Cosmic rays from primordial black holes

    NASA Technical Reports Server (NTRS)

    Macgibbon, Jane H.; Carr, B. J.

    1991-01-01

    The quark and gluon emission from primordial black holes (PBHs), which may have formed from initial density perturbations or phase transitions in the early universe, is investigated. If the PBHs formed from scale-invariant initial density perturbations in the radiation dominated era, it is found that the emission can explain or contribute significantly to the extragalactic photon and interstellar cosmic-ray electron, positron, and antiproton spectra around 0.1-1 GeV. In particular, the PBH emission strongly resembles the cosmic-ray gamma-ray spectrum between 50 and 170 MeV. The upper limits on the PBH density today from the gamma-ray, e(+), e(-), and antiproton data are comparable, provided that the PBHs cluster to the same degree as the other matter in the Galactic halo.

  7. CMB temperature trispectrum of cosmic strings

    NASA Astrophysics Data System (ADS)

    Hindmarsh, Mark; Ringeval, Christophe; Suyama, Teruaki

    2010-03-01

    We provide an analytical expression for the trispectrum of the cosmic microwave background (CMB) temperature anisotropies induced by cosmic strings. Our result is derived for the small angular scales under the assumption that the temperature anisotropy is induced by the Gott-Kaiser-Stebbins effect. The trispectrum is predicted to decay with a noninteger power-law exponent ℓ^{-ρ} with 6 < ρ < 7, depending on the string microstructure, and thus on the string model. For Nambu-Goto strings, this exponent is related to the string mean square velocity and the loop distribution function. We then explore two classes of wave number configuration in Fourier space, the kite and trapezium quadrilaterals. The trispectrum can be of any sign and appears to be strongly enhanced for all squeezed quadrilaterals.

  8. The Pierre Auger Observatory scaler mode for the study of solar activity modulation of galactic cosmic rays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abreu, P.; /Lisbon, LIFEP /Lisbon, IST; Aglietta, M.

    2011-01-01

    Since data-taking began in January 2004, the Pierre Auger Observatory has been recording the count rates of low energy secondary cosmic ray particles for the self-calibration of the ground detectors of its surface detector array. After correcting for atmospheric effects, modulations of galactic cosmic rays due to solar activity and transient events are observed. Temporal variations related with the activity of the heliosphere can be determined with high accuracy due to the high total count rates. In this study, the available data are presented together with an analysis focused on the observation of Forbush decreases, where a strong correlation with neutron monitor data is found.

  9. The Outer Heliosphere: Solar Wind, Cosmic Ray and VLF Radio Emission Variations

    NASA Technical Reports Server (NTRS)

    McNutt, Ralph L., Jr.

    1995-01-01

    The Voyager 1 and 2 spacecraft, now 45 astronomical units (AU) from Earth, continue to monitor the outer heliosphere field and particles environment on a daily basis during their journey to the termination shock of the solar wind. Strong transient shocks continue to be detected in the solar wind plasma. The largest of these are associated with Global Merged Interaction Regions (GMIR's) which, in turn, block cosmic ray entry into the inner heliosphere and are apparently responsible for triggering the two major episodes of VLF radio emissions now thought to come from the heliopause. Distance estimates to the termination shock are consistent with those determined from observations of anomalous cosmic rays. Current observations and implications for heliospheric structure are discussed.

  10. The spectrum of high-energy cosmic rays measured with KASCADE-Grande

    NASA Astrophysics Data System (ADS)

    Apel, W. D.; Arteaga-Velázquez, J. C.; Bekk, K.; Bertaina, M.; Blümer, J.; Bozdog, H.; Brancus, I. M.; Buchholz, P.; Cantoni, E.; Chiavassa, A.; Cossavella, F.; Daumiller, K.; de Souza, V.; Di Pierro, F.; Doll, P.; Engel, R.; Engler, J.; Finger, M.; Fuhrmann, D.; Ghia, P. L.; Gils, H. J.; Glasstetter, R.; Grupen, C.; Haungs, A.; Heck, D.; Hörandel, J. R.; Huber, D.; Huege, T.; Kampert, K.-H.; Kang, D.; Kickelbick, D.; Klages, H. O.; Link, K.; Łuczak, P.; Ludwig, M.; Mathes, H. J.; Mayer, H. J.; Melissas, M.; Milke, J.; Mitrica, B.; Morello, C.; Navarra, G.; Oehlschläger, J.; Ostapchenko, S.; Over, S.; Palmieri, N.; Petcu, M.; Pierog, T.; Rebel, H.; Roth, M.; Schieler, H.; Schröder, F. G.; Sima, O.; Toma, G.; Trinchero, G. C.; Ulrich, H.; Weindl, A.; Wochele, J.; Wommer, M.; Zabierowski, J.

    2012-08-01

    The energy spectrum of cosmic rays between 10¹⁶ eV and 10¹⁸ eV, derived from measurements of the shower size (total number of charged particles) and the total muon number of extensive air showers by the KASCADE-Grande experiment, is described. The resulting all-particle energy spectrum exhibits strong hints of a hardening of the spectrum at approximately 2·10¹⁶ eV and a significant steepening at ≈8·10¹⁶ eV. These observations challenge the view that the spectrum is a single power law between knee and ankle. Possible scenarios generating such features are discussed in terms of astrophysical processes that may explain the transition region from galactic to extragalactic origin of cosmic rays.

  11. Large-scale structure from cosmic-string loops in a baryon-dominated universe

    NASA Technical Reports Server (NTRS)

    Melott, Adrian L.; Scherrer, Robert J.

    1988-01-01

    The results are presented of a numerical simulation of the formation of large-scale structure in a universe with Ω₀ = 0.2 and h = 0.5 dominated by baryons in which cosmic strings provide the initial density perturbations. The numerical model yields a power spectrum. Nonlinear evolution confirms that the model can account for 700 km/s bulk flows and a strong cluster-cluster correlation, but does rather poorly on smaller scales. There is no visual 'filamentary' structure, and the two-point correlation has too steep a logarithmic slope. The value of Gμ = 4 × 10⁻⁶ is significantly lower than previous estimates for the value of Gμ in baryon-dominated cosmic string models.

  12. Galaxy properties and the cosmic web in simulations

    NASA Astrophysics Data System (ADS)

    Metuki, Ofer; Libeskind, Noam I.; Hoffman, Yehuda; Crain, Robert A.; Theuns, Tom

    2015-01-01

    We seek to understand the relationship between galaxy properties and their local environment, which calls for a proper formulation of the notion of environment. We analyse the Galaxies-Intergalactic Medium Interaction Calculation suite of cosmological hydrodynamical simulations within the framework of the cosmic web as formulated by Hoffman et al., focusing on properties of simulated dark matter haloes and luminous galaxies with respect to voids, sheets, filaments, and knots - the four elements of the cosmic web. We find that the mass functions of haloes depend on environment, which drives the other environmental dependences of galaxy formation. The web shapes the halo mass function, and through the strong dependence of the galaxy properties on the mass of their host haloes, it also shapes the galaxy-(web) environment dependence.

  13. Quasinormal modes and strong cosmic censorship in near-extremal Kerr-Newman-de Sitter black-hole spacetimes

    NASA Astrophysics Data System (ADS)

    Hod, Shahar

    2018-05-01

    The quasinormal resonant modes of massless neutral fields in near-extremal Kerr-Newman-de Sitter black-hole spacetimes are calculated in the eikonal regime. It is explicitly proved that, in the angular momentum regime ā > √(1 − 2Λ̄/4 + Λ̄/3), the black-hole spacetimes are characterized by slowly decaying resonant modes which are described by the compact formula ℑω^(n) = κ₊ · (n + 1/2) [here the physical parameters {ā, κ₊, Λ̄, n} are respectively the dimensionless angular momentum of the black hole, its characteristic surface gravity, the dimensionless cosmological constant of the spacetime, and the integer resonance parameter]. Our results support the validity of the Penrose strong cosmic censorship conjecture in these black-hole spacetimes.

  14. An application of a zero-inflated lifetime distribution with multiple and incomplete data sources

    DOE PAGES

    Hamada, M. S.; Margevicius, K. J.

    2016-02-11

    In this study, we analyze data sampled from a population of parts in which an associated anomaly can occur at assembly or after assembly. Using a zero-inflated lifetime distribution to fit left-censored and right-censored data as well as data from a supplementary sample, we make predictions about the proportion of the population with anomalies today and in the future. Goodness-of-fit is also addressed.

  15. Another Velvet Revolution Implications of the 1989 Czech Velvet Revolution on Iran

    DTIC Science & Technology

    2011-06-01

    countries; “even censoring news from the Soviet Union, whose own period of glasnost precipitated all these gyrations.”1 Furthermore, the failure of the... America for having maliciously presented the report. For his action of passing along information to Western journalists on the reports of Smid's...their coverage of the demonstrations was censored. Video coverage of the demonstrations was often televised as a deterrence mechanism, meanwhile news

  16. Counterinsurgency in Brazil: Lessons of the Fighting from 1968 to 1974

    DTIC Science & Technology

    2010-04-12

    system over almost all information disseminated in the press, theaters, movies and music. Government agents worked as censors inside press agencies...articles, lyrics of songs and scenes from movies that were judged as being subversive were suppressed by censors. Under the military instrument of national...the maintenance of its influence in Latin America. Previous to the military coup d’etat on 31 March 1964, U.S. President Lyndon Johnson had already

  17. Repression, Civil Conflict and Leadership Tenure: The Thai Case Study: 2006-2014

    DTIC Science & Technology

    2015-05-30

    peaceful protestors. The Army argues that it intervenes to prevent more violence and instability. The armed forces also censor the Internet, making it...protestors. The Thai public responded negatively to violent repression, as did many of Thailand’s allies in Europe, Asia and North America. In the wake...of expression, blocking and shutting down websites and radio stations, and censoring the Internet. In addition, the new government banned gatherings

  18. An Expectation-Maximization Algorithm for Amplitude Estimation of Saturated Optical Transient Signals.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kagie, Matthew J.; Lanterman, Aaron D.

    2017-12-01

    This paper addresses parameter estimation for an optical transient signal when the received data has been right-censored. We develop an expectation-maximization (EM) algorithm to estimate the amplitude of a Poisson intensity with a known shape in the presence of additive background counts, where the measurements are subject to saturation effects. We compare the results of our algorithm with those of an EM algorithm that is unaware of the censoring.
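
    The E- and M-steps take a particularly clean form if the background term is dropped; a minimal sketch under that simplifying assumption (Poisson counts with a known shape s_i, right-censored at the saturation level c), using the identity E[Y·1{Y ≥ c}] = λ·P(Y ≥ c−1):

    ```python
    import numpy as np
    from scipy.stats import poisson

    def em_amplitude(z, saturated, s, c, iters=200, tol=1e-8):
        """EM estimate of the amplitude A in y_i ~ Poisson(A * s_i) when the
        counts are right-censored at the saturation level c (z_i = min(y_i, c)).
        Background counts are omitted here for simplicity."""
        a = z.sum() / s.sum()                 # init from the clipped counts
        for _ in range(iters):
            y_hat = z.astype(float)
            lam = a * s[saturated]
            # E-step: E[Y | Y >= c] = lam * P(Y >= c-1) / P(Y >= c)
            y_hat[saturated] = lam * poisson.sf(c - 2, lam) / poisson.sf(c - 1, lam)
            a_new = y_hat.sum() / s.sum()     # M-step: closed form for Poisson
            if abs(a_new - a) < tol * max(a, 1.0):
                return a_new
            a = a_new
        return a

    # demo with a hypothetical Gaussian pulse shape, true amplitude A = 30
    rng = np.random.default_rng(4)
    t = np.linspace(-3, 3, 120)
    s = np.exp(-t**2 / 2)                     # known, fixed signal shape
    y = rng.poisson(30.0 * s)
    c = 25                                    # detector saturates at 25 counts
    z = np.minimum(y, c)
    print(em_amplitude(z, z >= c, s, c))      # should recover roughly 30
    ```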

  19. Censored rainfall modelling for estimation of fine-scale extremes

    NASA Astrophysics Data System (ADS)

    Cross, David; Onof, Christian; Winter, Hugo; Bernardara, Pietro

    2018-01-01

    Reliable estimation of rainfall extremes is essential for drainage system design, flood mitigation, and risk quantification. However, traditional techniques lack physical realism and extrapolation can be highly uncertain. In this study, we improve the physical basis for short-duration extreme rainfall estimation by simulating the heavy portion of the rainfall record mechanistically using the Bartlett-Lewis rectangular pulse (BLRP) model. Mechanistic rainfall models have had a tendency to underestimate rainfall extremes at fine temporal scales. Despite this, the simple process representation of rectangular pulse models is appealing in the context of extreme rainfall estimation because it emulates the known phenomenology of rainfall generation. A censored approach to Bartlett-Lewis model calibration is proposed and performed for single-site rainfall from two gauges in the UK and Germany. Extreme rainfall estimation is performed for each gauge at the 5, 15, and 60 min resolutions, and considerations for censor selection are discussed.

  20. Maximum likelihood estimation for semiparametric transformation models with interval-censored data

    PubMed Central

    Mao, Lu; Lin, D. Y.

    2016-01-01

    Interval censoring arises frequently in clinical, epidemiological, financial and sociological studies, where the event or failure of interest is known only to occur within an interval induced by periodic monitoring. We formulate the effects of potentially time-dependent covariates on the interval-censored failure time through a broad class of semiparametric transformation models that encompasses proportional hazards and proportional odds models. We consider nonparametric maximum likelihood estimation for this class of models with an arbitrary number of monitoring times for each subject. We devise an EM-type algorithm that converges stably, even in the presence of time-dependent covariates, and show that the estimators for the regression parameters are consistent, asymptotically normal, and asymptotically efficient with an easily estimated covariance matrix. Finally, we demonstrate the performance of our procedures through simulation studies and application to an HIV/AIDS study conducted in Thailand. PMID:27279656
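
    For a feel of interval-censored likelihoods in practice, a parametric stand-in (Weibull, no covariates) can be fitted with a recent version of the lifelines package; the paper's semiparametric transformation models and NPMLE are substantially more general than this sketch:

    ```python
    import numpy as np
    from lifelines import WeibullFitter

    # hypothetical periodic-monitoring data: the event is only known to lie
    # in (lower, upper]; np.inf marks right-censoring at the last visit
    lower = np.array([0.0, 2.0, 4.0, 1.0, 3.0])
    upper = np.array([2.0, 4.0, np.inf, 3.0, np.inf])

    wf = WeibullFitter()
    wf.fit_interval_censoring(lower, upper)
    wf.print_summary()
    ```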

  1. A robust semi-parametric warping estimator of the survivor function with an application to two-group comparisons

    PubMed Central

    Hutson, Alan D

    2018-01-01

    In this note, we develop a novel semi-parametric estimator of the survival curve that is comparable to the product-limit estimator under very relaxed assumptions. The estimator is based on a beta parametrization that warps the empirical distribution of the observed censored and uncensored data. The parameters are obtained using a pseudo-maximum likelihood approach adjusting the survival curve accounting for the censored observations. In the univariate setting, the new estimator tends to better extend the range of the survival estimation given a high degree of censoring. However, the key feature of this paper is that we develop a new two-group semi-parametric exact permutation test for comparing survival curves that is generally superior to the classic log-rank and Wilcoxon tests and provides the best global power across a variety of alternatives. The new test is readily extended to the k-group setting. PMID:26988931

  2. Regression analysis of informative current status data with the additive hazards model.

    PubMed

    Zhao, Shishun; Hu, Tao; Ma, Ling; Wang, Peijie; Sun, Jianguo

    2015-04-01

    This paper discusses regression analysis of current status failure time data arising from the additive hazards model in the presence of informative censoring. Many methods have been developed for regression analysis of current status data under various regression models if the censoring is noninformative, and also there exists a large literature on parametric analysis of informative current status data in the context of tumorgenicity experiments. In this paper, a semiparametric maximum likelihood estimation procedure is presented and in the method, the copula model is employed to describe the relationship between the failure time of interest and the censoring time. Furthermore, I-splines are used to approximate the nonparametric functions involved and the asymptotic consistency and normality of the proposed estimators are established. A simulation study is conducted and indicates that the proposed approach works well for practical situations. An illustrative example is also provided.

  3. Indications of negative evolution for the sources of the highest energy cosmic rays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, Andrew M.; Ahlers, Markus; Hooper, Dan

    2015-09-14

    Using recent measurements of the spectrum and chemical composition of the highest energy cosmic rays, we consider the sources of these particles. We find that these data strongly prefer models in which the sources of the ultra-high-energy cosmic rays inject predominantly intermediate mass nuclei, with comparatively few protons or heavy nuclei, such as iron or silicon. If the number density of sources per comoving volume does not evolve with redshift, the injected spectrum must be very hard (α≃1) in order to fit the spectrum observed from Earth. Such a hard spectral index would be surprising and difficult to accommodate theoretically. In contrast, much softer spectral indices, consistent with the predictions of Fermi acceleration (α≃2), are favored in models with negative source evolution. With this theoretical bias, these observations favor models in which the sources of the highest energy cosmic rays are preferentially located within the low-redshift universe.

  4. Comparison of Methods for Analyzing Left-Censored Occupational Exposure Data

    PubMed Central

    Huynh, Tran; Ramachandran, Gurumurthy; Banerjee, Sudipto; Monteiro, Joao; Stenzel, Mark; Sandler, Dale P.; Engel, Lawrence S.; Kwok, Richard K.; Blair, Aaron; Stewart, Patricia A.

    2014-01-01

    The National Institute for Environmental Health Sciences (NIEHS) is conducting an epidemiologic study (GuLF STUDY) to investigate the health of the workers and volunteers who participated from April to December of 2010 in the response and cleanup of the oil release after the Deepwater Horizon explosion in the Gulf of Mexico. The exposure assessment component of the study involves analyzing thousands of personal monitoring measurements that were collected during this effort. A substantial portion of these data has values reported by the analytic laboratories to be below the limits of detection (LOD). A simulation study was conducted to evaluate three established methods for analyzing data with censored observations to estimate the arithmetic mean (AM), geometric mean (GM), geometric standard deviation (GSD), and the 95th percentile (X0.95) of the exposure distribution: the maximum likelihood (ML) estimation, the β-substitution, and the Kaplan–Meier (K-M) methods. Each method was challenged with computer-generated exposure datasets drawn from lognormal and mixed lognormal distributions with sample sizes (N) varying from 5 to 100, GSDs ranging from 2 to 5, and censoring levels ranging from 10 to 90%, with single and multiple LODs. Using relative bias and relative root mean squared error (rMSE) as the evaluation metrics, the β-substitution method generally performed as well or better than the ML and K-M methods in most simulated lognormal and mixed lognormal distribution conditions. The ML method was suitable for large sample sizes (N ≥ 30) up to 80% censoring for lognormal distributions with small variability (GSD = 2–3). The K-M method generally provided accurate estimates of the AM when the censoring was <50% for lognormal and mixed distributions. The accuracy and precision of all methods decreased under high variability (GSD = 4 and 5) and small to moderate sample sizes (N < 20) but the β-substitution was still the best of the three methods. When using the ML method, practitioners are cautioned to be aware of different ways of estimating the AM as they could lead to biased interpretation. A limitation of the β-substitution method is the absence of a confidence interval for the estimate. More research is needed to develop methods that could improve the estimation accuracy for small sample sizes and high percent censored data and also provide uncertainty intervals. PMID:25261453
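
    The ML method evaluated above amounts to maximizing a lognormal likelihood in which each nondetect contributes the CDF at its LOD; a minimal single-LOD sketch (β-substitution and K-M are not shown; 1.645 is the normal quantile behind X0.95):

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def censored_lognormal_mle(x, detected):
        """ML fit of a lognormal to left-censored data; x holds the measured
        value when detected, otherwise the limit of detection (LOD)."""
        logx = np.log(x)

        def nll(theta):
            mu, log_sigma = theta
            sigma = np.exp(log_sigma)
            # detects contribute the lognormal density, nondetects the CDF
            ll_det = norm.logpdf(logx[detected], mu, sigma) - logx[detected]
            ll_cen = norm.logcdf(logx[~detected], mu, sigma)
            return -(ll_det.sum() + ll_cen.sum())

        res = minimize(nll, [logx.mean(), np.log(logx.std() + 1e-6)],
                       method="Nelder-Mead")
        mu, sigma = res.x[0], np.exp(res.x[1])
        return {"GM": np.exp(mu), "GSD": np.exp(sigma),
                "AM": np.exp(mu + sigma**2 / 2),
                "X95": np.exp(mu + 1.645 * sigma)}

    rng = np.random.default_rng(5)
    true = rng.lognormal(0.0, 0.7, 60)
    lod = 0.8                                  # single hypothetical LOD
    det = true >= lod
    print(censored_lognormal_mle(np.where(det, true, lod), det))
    ```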

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Attallah, R., E-mail: reda.attallah@univ-annaba.dz

    High-energy cosmic-ray electrons reveal some remarkable spectral features, the most noteworthy of which is the rise in the positron fraction above 10 GeV. Due to strong energy loss during propagation, these particles can reach Earth only from nearby sources. Yet, the exact nature of these sources, which most likely manifest themselves in the observed anomalies, remains elusive. The many explanations put forward to resolve this case range from standard astrophysics to exotic physics. In this paper, we discuss the possible astrophysical origin of high-energy cosmic-ray electrons through a fully three-dimensional time-dependent Monte Carlo simulation. This approach, which takes advantage of the intrinsic random nature of cosmic-ray diffusive propagation, provides valuable information on the electron-by-electron fluctuations, making it particularly suitable for analyzing in depth the single-source scenario.

  6. CMB temperature trispectrum of cosmic strings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hindmarsh, Mark; Ringeval, Christophe; Suyama, Teruaki

    2010-03-15

    We provide an analytical expression for the trispectrum of the cosmic microwave background (CMB) temperature anisotropies induced by cosmic strings. Our result is derived for the small angular scales under the assumption that the temperature anisotropy is induced by the Gott-Kaiser-Stebbins effect. The trispectrum is predicted to decay with a noninteger power-law exponent ℓ^{-ρ} with 6 < ρ < 7, depending on the string microstructure, and thus on the string model. For Nambu-Goto strings, this exponent is related to the string mean square velocity and the loop distribution function. We then explore two classes of wave number configuration in Fourier space, the kite and trapezium quadrilaterals. The trispectrum can be of any sign and appears to be strongly enhanced for all squeezed quadrilaterals.

  7. Cosmic microwave background power asymmetry from non-Gaussian modulation.

    PubMed

    Schmidt, Fabian; Hui, Lam

    2013-01-04

    Non-Gaussianity in the inflationary perturbations can couple observable scales to modes of much longer wavelength (even superhorizon), leaving as a signature a large-angle modulation of the observed cosmic microwave background power spectrum. This provides an alternative origin for a power asymmetry that is otherwise often ascribed to a breaking of statistical isotropy. The non-Gaussian modulation effect can be significant even for typical ~10⁻⁵ perturbations while respecting current constraints on non-Gaussianity if the squeezed limit of the bispectrum is sufficiently infrared divergent. Just such a strongly infrared-divergent bispectrum has been claimed for inflation models with a non-Bunch-Davies initial state, for instance. Upper limits on the observed cosmic microwave background power asymmetry place stringent constraints on the duration of inflation in such models.

  8. Diagnosis of an intense atmospheric river impacting the pacific northwest: Storm summary and offshore vertical structure observed with COSMIC satellite retrievals

    USGS Publications Warehouse

    Neiman, P.J.; Ralph, F.M.; Wick, G.A.; Kuo, Y.-H.; Wee, T.-K.; Ma, Z.; Taylor, G.H.; Dettinger, M.D.

    2008-01-01

    This study uses the new satellite-based Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) mission to retrieve tropospheric profiles of temperature and moisture over the data-sparse eastern Pacific Ocean. The COSMIC retrievals, which employ a global positioning system radio occultation technique combined with "first-guess" information from numerical weather prediction model analyses, are evaluated through the diagnosis of an intense atmospheric river (AR; i.e., a narrow plume of strong water vapor flux) that devastated the Pacific Northwest with flooding rains in early November 2006. A detailed analysis of this AR is presented first using conventional datasets and highlights the fact that ARs are critical contributors to West Coast extreme precipitation and flooding events. Then, the COSMIC evaluation is provided. Offshore composite COSMIC soundings north of, within, and south of this AR exhibited vertical structures that are meteorologically consistent with satellite imagery and global reanalysis fields of this case and with earlier composite dropsonde results from other landfalling ARs. Also, a curtain of 12 offshore COSMIC soundings through the AR yielded cross-sectional thermodynamic and moisture structures that were similarly consistent, including details comparable to earlier aircraft-based dropsonde analyses. The results show that the new COSMIC retrievals, which are global (currently yielding ∼2000 soundings per day), provide high-resolution vertical-profile information beyond that found in the numerical model first-guess fields and can help monitor key lower-tropospheric mesoscale phenomena in data-sparse regions. Hence, COSMIC will likely support a wide array of applications, from physical process studies to data assimilation, numerical weather prediction, and climate research. © 2008 American Meteorological Society.

  9. Should we use standard survival models or the illness-death model for interval-censored data to investigate risk factors of chronic kidney disease progression?

    PubMed

    Boucquemont, Julie; Metzger, Marie; Combe, Christian; Stengel, Bénédicte; Leffondre, Karen

    2014-01-01

    In studies investigating risk factors of chronic kidney disease (CKD) progression, one may be interested in estimating factors' effects on both a fall of glomerular filtration rate (GFR) below a specific level (i.e., a CKD stage) and death. Such studies have to account for the fact that GFR is measured at intermittent visits only, which implies that progression to the stage of interest is unknown for patients who die before being observed at that stage. Our objective was to compare the results of an illness-death model that handles this uncertainty, with frequently used survival models. This study included 1,519 patients from the NephroTest cohort with CKD stages 1-4 at baseline (69% males, 59±15 years, median protein/creatinine ratio [PCR] 27.4 mg/mmol) and subsequent annual measures of GFR (follow-up time 4.3±2.7 years). Each model was used to estimate the effects of sex, age, PCR, and GFR at baseline on the hazards of progression to CKD stage 5 (GFR < 15 mL/min/1.73 m², n = 282 observed) and death (n = 168). For progression to stage 5, there were only minor differences between results from the different models. The differences between results were higher for the hazard of death before or after progression. Our results also suggest that previous findings on the effect of age on end-stage renal disease are more likely due to a strong impact of age on death than to an effect on progression. The probabilities of progression were systematically under-estimated with the survival model as compared with the illness-death model. This study illustrates the advantages of the illness-death model for accurately estimating the effects of risk factors on the hazard of progression and death, and probabilities of progression. It avoids the need to choose arbitrary time-to-event and time-to-censoring, while accounting for both interval censoring and competition by death, using a single analytical model.

  10. Risk of Febrile Neutropenia Associated With Select Myelosuppressive Chemotherapy Regimens in a Large Community-Based Oncology Practice.

    PubMed

    Li, Yanli; Family, Leila; Yang, Su-Jau; Klippel, Zandra; Page, John H; Chao, Chun

    2017-09-01

    Background: NCCN has classified commonly used chemotherapy regimens into high (>20%), intermediate (10%-20%), or low (<10%) febrile neutropenia (FN) risk categories based primarily on clinical trial evidence. Many chemotherapy regimens, however, remain unclassified by NCCN or lack FN incidence data in real-world clinical practice. Patients and Methods: We evaluated incidence proportions of FN and grade 4 and 3/4 neutropenia during the first chemotherapy course among patients from Kaiser Permanente Southern California who received selected chemotherapy regimens without well-established FN risk. Patients given granulocyte colony-stimulating factor (G-CSF) prophylaxis were excluded. Sensitivity analyses were performed to account for FN misclassification and censoring. Results: From 2008 to 2013, 1,312 patients with breast cancer who received docetaxel and cyclophosphamide (TC; n=853) or docetaxel, carboplatin, and trastuzumab (TCH; n=459); 1,321 patients with colorectal cancer who received capecitabine and oxaliplatin (XELOX; n=401) or leucovorin, 5-fluorouracil, and oxaliplatin (FOLFOX6; n=920); 307 patients with non-Hodgkin's lymphoma who received bendamustine with or without rituximab; and 181 patients with multiple myeloma who received lenalidomide with or without dexamethasone were included. Crude FN risk was >20% for both breast cancer regimens (TC and TCH). Crude FN risks for XELOX, FOLFOX6, bendamustine, and lenalidomide were <10%; however, when potential FN misclassification and censoring were considered, FN risks were >10%. Conclusions: Our results support published literature highlighting the real-world, "high" FN risk of the TC and TCH regimens for breast cancer. There is strong suggestive evidence that FN risks for XELOX, FOLFOX6, bendamustine, and lenalidomide are >10%. Calculation of chemotherapy course-level FN incidence without controlling for differential censoring for patients who discontinued regimens early, or possible FN misclassification, might have resulted in bias toward an underestimation of the true FN risk. These findings help define FN risk of the selected regimens in the real-world setting and inform prophylactic G-CSF use. Copyright © 2017 by the National Comprehensive Cancer Network.

  11. Methodological issues underlying multiple decrement life table analysis.

    PubMed

    Mode, C J; Avery, R C; Littman, G S; Potter, R G

    1977-02-01

    In this paper, the actuarial method of multiple decrement life table analysis of censored, longitudinal data is examined. The discussion is organized in terms of the first segment of usage of an intrauterine device. Weaknesses of the actuarial approach are pointed out, and an alternative approach, based on the classical model of competing risks, is proposed. Finally, the actuarial and the alternative method of analyzing censored data are compared, using data from the Taichung Medical Study on Intrauterine Devices.

  12. Strategic Studies Quarterly. Volume 5, Number 1, Spring 2011

    DTIC Science & Technology

    2011-01-01

    2010). 17. The White House, National Space Policy of the United States of America (Washington: White House, 28 June 2010), 3. 18. John Oneal and Bruce... censors and vigilantes) model operating on many levels at once. In this model, China is expressing a long-standing concern for the stability and...Ansfield, “China’s Censors Tackle and Trip Over the Internet,” New York Times, 8 April 2010. 32. Ching Cheong, “Fighting the Digital War with the

  13. Median nitrate concentrations in groundwater in the New Jersey Highlands Region estimated using regression models and land-surface characteristics

    USGS Publications Warehouse

    Baker, Ronald J.; Chepiga, Mary M.; Cauller, Stephen J.

    2015-01-01

    The Kaplan-Meier method of estimating summary statistics from left-censored data was applied in order to include nondetects (left-censored data) in median nitrate-concentration calculations. Median concentrations also were determined using three alternative methods of handling nondetects. Treatment of the 23 percent of samples that were nondetects had little effect on estimated median nitrate concentrations because method detection limits were mostly less than median values.

  14. Outcome-Dependent Sampling with Interval-Censored Failure Time Data

    PubMed Central

    Zhou, Qingning; Cai, Jianwen; Zhou, Haibo

    2017-01-01

    Epidemiologic studies and disease prevention trials often seek to relate an exposure variable to a failure time that suffers from interval-censoring. When the failure rate is low and the time intervals are wide, a large cohort is often required so as to yield reliable precision on the exposure-failure-time relationship. However, large cohort studies with simple random sampling could be prohibitive for investigators with a limited budget, especially when the exposure variables are expensive to obtain. Alternative cost-effective sampling designs and inference procedures are therefore desirable. We propose an outcome-dependent sampling (ODS) design with interval-censored failure time data, where we enrich the observed sample by selectively including certain more informative failure subjects. We develop a novel sieve semiparametric maximum empirical likelihood approach for fitting the proportional hazards model to data from the proposed interval-censoring ODS design. This approach employs the empirical likelihood and sieve methods to deal with the infinite-dimensional nuisance parameters, which greatly reduces the dimensionality of the estimation problem and eases the computation difficulty. The consistency and asymptotic normality of the resulting regression parameter estimator are established. The results from our extensive simulation study show that the proposed design and method works well for practical situations and is more efficient than the alternative designs and competing approaches. An example from the Atherosclerosis Risk in Communities (ARIC) study is provided for illustration. PMID:28771664

  15. A Bayesian model for time-to-event data with informative censoring

    PubMed Central

    Kaciroti, Niko A.; Raghunathan, Trivellore E.; Taylor, Jeremy M. G.; Julius, Stevo

    2012-01-01

    Randomized trials with dropouts or censored data and discrete time-to-event type outcomes are frequently analyzed using the Kaplan–Meier or product limit (PL) estimation method. However, the PL method assumes that the censoring mechanism is noninformative, and when this assumption is violated, the inferences may not be valid. We propose an expanded PL method using a Bayesian framework to incorporate an informative censoring mechanism and perform sensitivity analysis on estimates of the cumulative incidence curves. The expanded method uses a model, which can be viewed as a pattern mixture model, where the odds of having an event during the follow-up interval (t_{k-1}, t_k], conditional on being at risk at t_{k-1}, differ across the patterns of missing data. The sensitivity parameters relate the odds of an event between subjects from a missing-data pattern and the observed subjects for each interval. The large number of sensitivity parameters is reduced by treating them as random, assumed to follow a log-normal distribution with prespecified mean and variance. We then vary the mean and variance to explore the sensitivity of inferences. The missing at random (MAR) mechanism is a special case of the expanded model, thus allowing exploration of how inferences depart from those made under the MAR assumption. The proposed approach is applied to data from the TRial Of Preventing HYpertension. PMID:22223746

  16. A new semi-supervised learning model combined with Cox and SP-AFT models in cancer survival analysis.

    PubMed

    Chai, Hua; Li, Zi-Na; Meng, De-Yu; Xia, Liang-Yong; Liang, Yong

    2017-10-12

    Gene selection is an attractive and important task in cancer survival analysis. Most existing supervised learning methods can use only the labeled biological data, while the censored data (weakly labeled data), which far outnumber the labeled data, are ignored in model building. To utilize the information in the censored data, a semi-supervised learning framework (the Cox-AFT model), combining the Cox proportional hazards (Cox) and accelerated failure time (AFT) models, has been used in cancer research and performs better than either model alone. This method, however, is easily affected by noise. To alleviate this problem, in this paper we combine the Cox-AFT model with the self-paced learning (SPL) method to employ the information in the censored data more effectively, in a self-learning way. SPL is a reliable and stable learning mechanism, recently proposed to simulate the human learning process, which helps the AFT model automatically identify and include high-confidence samples in training, minimizing interference from high noise. Utilizing the SPL method produces two direct advantages: (1) the utilization of censored data is further promoted; (2) the noise delivered to the model is greatly decreased. The experimental results demonstrate the effectiveness of the proposed model compared to the traditional Cox-AFT model.
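
    Self-paced learning itself is generic enough to sketch: fit a model, keep only samples whose current loss falls below a threshold, refit, and raise the threshold so harder samples are gradually admitted. The toy below uses ordinary least squares rather than the Cox/AFT machinery, with invented data and an invented threshold schedule.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 200, 3
    X = rng.normal(size=(n, p))
    beta_true = np.array([1.0, -2.0, 0.5])
    y = X @ beta_true + rng.normal(scale=0.3, size=n)
    y[:20] += rng.normal(scale=5.0, size=20)   # a block of high-noise samples

    def fit_wls(X, y, w):
        """Weighted least squares via a rescaled least-squares solve."""
        sw = np.sqrt(w)
        return np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]

    beta = fit_wls(X, y, np.ones(n))           # initial fit on all samples
    for lam in [1.0, 2.0, 4.0, 8.0]:           # gradually admit harder samples
        loss = (y - X @ beta) ** 2
        v = (loss < lam).astype(float)         # hard SPL weights: keep "easy" samples
        beta = fit_wls(X, y, v + 1e-8)         # tiny floor keeps the solve well posed
    print("estimated beta:", np.round(beta, 2))
    ```

    The noisy block is excluded at the early, small thresholds, so the final coefficients track beta_true instead of being dragged by the outliers; that is the "minimizing interference from high noise" behavior the abstract describes.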

  17. Probability distribution and statistical properties of spherically compensated cosmic regions in ΛCDM cosmology

    NASA Astrophysics Data System (ADS)

    Alimi, Jean-Michel; de Fromont, Paul

    2018-04-01

    The statistical properties of cosmic structures are well known to be strong probes for cosmology. In particular, several studies have tried to use the cosmic void counting number to obtain tight constraints on dark energy. In this paper, we model the statistical properties of these regions using the CoSphere formalism (de Fromont & Alimi) in both the primordial and the non-linearly evolved Universe in the standard Λ cold dark matter model. This formalism applies similarly to minima (voids) and maxima (such as DM haloes), which are here considered symmetrically. We first derive the full joint Gaussian distribution of CoSphere's parameters in the Gaussian random field. We recover the results of Bardeen et al. only in the limit where the compensation radius becomes very large, i.e. when the central extremum decouples from its cosmic environment. We compute the probability distribution of the compensation size in this primordial field. We show that this distribution is redshift independent and can be used to model the cosmic void size distribution. We also derive the statistical distribution of the peak parameters introduced by Bardeen et al. and discuss their correlation with the cosmic environment. We show that small central extrema with low density are associated with narrow compensation regions with deep compensation density, while higher central extrema are preferentially located in larger but smoother over/under massive regions.

  18. THE EFFECT OF INTERMITTENT GYRO-SCALE SLAB TURBULENCE ON PARALLEL AND PERPENDICULAR COSMIC-RAY TRANSPORT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Le Roux, J. A.

    Earlier work based on nonlinear guiding center (NLGC) theory suggested that perpendicular cosmic-ray transport is diffusive when cosmic rays encounter random three-dimensional magnetohydrodynamic turbulence dominated by uniform two-dimensional (2D) turbulence with a minor uniform slab turbulence component. In this approach large-scale perpendicular cosmic-ray transport is due to cosmic rays microscopically diffusing along the meandering magnetic field dominated by 2D turbulence because of gyroresonant interactions with slab turbulence. However, turbulence in the solar wind is intermittent and it has been suggested that intermittent turbulence might be responsible for the observation of 'dropout' events in solar energetic particle fluxes on small scales. In a previous paper, le Roux et al. suggested, using NLGC theory as a basis, that if gyro-scale slab turbulence is intermittent, large-scale perpendicular cosmic-ray transport in weak uniform 2D turbulence will be superdiffusive or subdiffusive depending on the statistical characteristics of the intermittent slab turbulence. In this paper we expand and refine our previous work further by investigating how both parallel and perpendicular transport are affected by intermittent slab turbulence for weak as well as strong uniform 2D turbulence. The main new finding is that both parallel and perpendicular transport are the net effect of an interplay between diffusive and nondiffusive (superdiffusive or subdiffusive) transport effects as a consequence of this intermittency.

  19. Dark energy and the cosmic microwave background radiation

    NASA Technical Reports Server (NTRS)

    Dodelson, S.; Knox, L.

    2000-01-01

    We find that current cosmic microwave background anisotropy data strongly constrain the mean spatial curvature of the Universe to be near zero, or, equivalently, the total energy density to be near critical-as predicted by inflation. This result is robust to editing of data sets, and variation of other cosmological parameters (totaling seven, including a cosmological constant). Other lines of argument indicate that the energy density of nonrelativistic matter is much less than critical. Together, these results are evidence, independent of supernovae data, for dark energy in the Universe.

  20. Dark energy and the cosmic microwave background radiation.

    PubMed

    Dodelson, S; Knox, L

    2000-04-17

    We find that current cosmic microwave background anisotropy data strongly constrain the mean spatial curvature of the Universe to be near zero, or, equivalently, the total energy density to be near critical-as predicted by inflation. This result is robust to editing of data sets, and variation of other cosmological parameters (totaling seven, including a cosmological constant). Other lines of argument indicate that the energy density of nonrelativistic matter is much less than critical. Together, these results are evidence, independent of supernovae data, for dark energy in the Universe.

  1. Bounds on isocurvature perturbations from cosmic microwave background and large scale structure data.

    PubMed

    Crotty, Patrick; García-Bellido, Juan; Lesgourgues, Julien; Riazuelo, Alain

    2003-10-24

    We obtain very stringent bounds on the possible cold dark matter, baryon, and neutrino isocurvature contributions to the primordial fluctuations in the Universe, using recent cosmic microwave background and large scale structure data. Neglecting the possible effects of spatial curvature, tensor perturbations, and reionization, we perform a Bayesian likelihood analysis with nine free parameters, and find that the amplitude of the isocurvature component cannot be larger than about 31% for the cold dark matter mode, 91% for the baryon mode, 76% for the neutrino density mode, and 60% for the neutrino velocity mode, at 2σ, for uncorrelated models. For correlated adiabatic and isocurvature components, the fraction could be slightly larger. However, the cross-correlation coefficient is strongly constrained, and maximally correlated/anticorrelated models are disfavored. This puts strong bounds on the curvaton model.

  2. Deciphering the Dipole Anisotropy of Galactic Cosmic Rays.

    PubMed

    Ahlers, Markus

    2016-10-07

    Recent measurements of the dipole anisotropy in the arrival directions of Galactic cosmic rays (CRs) indicate a strong energy dependence of the dipole amplitude and phase in the TeV-PeV range. We argue here that these observations can be well understood within standard diffusion theory as a combined effect of (i) one or more local sources at Galactic longitude 120°≲l≲300° dominating the CR gradient below 0.1-0.3 PeV, (ii) the presence of a strong ordered magnetic field in our local environment, (iii) the relative motion of the solar system, and (iv) the limited reconstruction capabilities of ground-based observatories. We show that an excellent candidate for the local CR source responsible for the dipole anisotropy at 1-100 TeV is the Vela supernova remnant.

  3. The effect of hospital care on early survival after penetrating trauma.

    PubMed

    Clark, David E; Doolittle, Peter C; Winchell, Robert J; Betensky, Rebecca A

    2014-12-01

    The effectiveness of emergency medical interventions can be best evaluated using time-to-event statistical methods with time-varying covariates (TVC), but this approach is complicated by uncertainty about the actual times of death. We therefore sought to evaluate the effect of hospital intervention on mortality after penetrating trauma using a method that allowed for interval censoring of the precise times of death. Data on persons with penetrating trauma due to interpersonal assault were combined from the 2008 to 2010 National Trauma Data Bank (NTDB) and the 2004 to 2010 National Violent Death Reporting System (NVDRS). Cox and Weibull proportional hazards models for survival time (t_SURV) were estimated, with TVC assumed to have constant effects for specified time intervals following hospital arrival. The Weibull model was repeated with t_SURV interval-censored to reflect uncertainty about the precise times of death, using an imputation method to accommodate interval censoring along with TVC. All models showed that mortality was increased by older age, female sex, firearm mechanism, and injuries involving the head/neck or trunk. Uncensored models showed a paradoxical increase in mortality associated with the first hour in a hospital. The interval-censored model showed that mortality was markedly reduced after admission to a hospital, with a hazard ratio (HR) of 0.68 (95% CI 0.63, 0.73) during the first 30 min, declining to an HR of 0.01 after 120 min. Admission to a verified level I trauma center (compared to other hospitals in the NTDB) was associated with a further reduction in mortality, with an HR of 0.93 (95% CI 0.82, 0.97). Time-to-event models with TVC and interval censoring can be used to estimate the effect of hospital care on early mortality after penetrating trauma or other acute medical conditions and could potentially be used for interhospital comparisons.

  4. Adapting machine learning techniques to censored time-to-event health record data: A general-purpose approach using inverse probability of censoring weighting.

    PubMed

    Vock, David M; Wolfson, Julian; Bandyopadhyay, Sunayan; Adomavicius, Gediminas; Johnson, Paul E; Vazquez-Benitez, Gabriela; O'Connor, Patrick J

    2016-06-01

    Models for predicting the probability of experiencing various health outcomes or adverse events over a certain time frame (e.g., having a heart attack in the next 5 years) based on individual patient characteristics are important tools for managing patient care. Electronic health data (EHD) are appealing sources of training data because they provide access to large amounts of rich individual-level data from present-day patient populations. However, because EHD are derived by extracting information from administrative and clinical databases, some fraction of subjects will not be under observation for the entire time frame over which one wants to make predictions; this loss to follow-up is often due to disenrollment from the health system. For subjects without complete follow-up, whether or not they experienced the adverse event is unknown, and in statistical terms the event time is said to be right-censored. Most machine learning approaches to the problem have been relatively ad hoc; for example, common approaches for handling observations in which the event status is unknown include (1) discarding those observations, (2) treating them as non-events, (3) splitting those observations into two observations: one where the event occurs and one where the event does not. In this paper, we present a general-purpose approach to account for right-censored outcomes using inverse probability of censoring weighting (IPCW). We illustrate how IPCW can easily be incorporated into a number of existing machine learning algorithms used to mine big health care data including Bayesian networks, k-nearest neighbors, decision trees, and generalized additive models. We then show that our approach leads to better calibrated predictions than the three ad hoc approaches when applied to predicting the 5-year risk of experiencing a cardiovascular adverse event, using EHD from a large U.S. Midwestern healthcare system. Copyright © 2016 Elsevier Inc. All rights reserved.
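
    A minimal sketch of the IPCW idea for the binary outcome "event by horizon tau", assuming independent censoring, no tied follow-up times, and none of the paper's EHD specifics: estimate the censoring survival G by Kaplan-Meier with the censoring indicator flipped, give weight zero to subjects whose status at tau is unknown, and weight the rest by 1/G. All follow-up data below are invented.

    ```python
    import numpy as np

    def km_censoring_survival(time, event):
        """Kaplan-Meier estimate of the censoring survival G(t) = P(C > t),
        treating censoring (event == 0) as the 'event' of interest."""
        order = np.argsort(time)
        t, d = time[order], 1 - event[order]        # d = 1 when censored
        n_at_risk = len(t) - np.arange(len(t))
        return t, np.cumprod(1.0 - d / n_at_risk)

    def ipcw_weights(time, event, tau):
        """Weights for 1{T <= tau}: subjects censored before tau without an
        event are unusable (weight 0); others get 1 / G(min(T, tau)-)."""
        t_grid, g = km_censoring_survival(time, event)
        def G(u):                                    # step-function lookup, G(u-)
            idx = np.searchsorted(t_grid, u, side="left") - 1
            return g[idx] if idx >= 0 else 1.0       # assumes G(u) > 0 where used
        w = np.zeros(len(time))
        for i, (ti, ei) in enumerate(zip(time, event)):
            if ei == 1 and ti <= tau:
                w[i] = 1.0 / G(ti)
            elif ti >= tau:
                w[i] = 1.0 / G(tau)
        return w

    # Hypothetical follow-up times (years) and event indicators (1 = event)
    time = np.array([1.2, 3.4, 0.8, 5.0, 2.1, 4.7, 0.5, 4.9])
    event = np.array([1,   0,   1,   0,   0,   1,   0,   0])
    w = ipcw_weights(time, event, tau=4.0)
    risk = np.sum(w * (event * (time <= 4.0))) / np.sum(w)
    print("IPCW-weighted 4-year risk estimate:", round(risk, 3))
    ```

    The same weights can be passed to any learner that accepts per-sample weights, which is how the paper plugs IPCW into Bayesian networks, k-nearest neighbors, decision trees, and generalized additive models.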

  5. The cosmic merger rate of stellar black hole binaries from the Illustris simulation

    NASA Astrophysics Data System (ADS)

    Mapelli, Michela; Giacobbo, Nicola; Ripamonti, Emanuele; Spera, Mario

    2017-12-01

    The cosmic merger rate density of black hole binaries (BHBs) can give us an essential clue to constraining the formation channels of BHBs, in light of current and forthcoming gravitational wave detections. Following a Monte Carlo approach, we couple new population-synthesis models of BHBs with the Illustris cosmological simulation, to study the cosmic history of BHB mergers. We explore six population-synthesis models, varying the prescriptions for supernovae, common envelope and natal kicks. In most considered models, the cosmic BHB merger rate follows the same trend as the cosmic star formation rate. The normalization of the cosmic BHB merger rate strongly depends on the treatment of common envelope and on the distribution of natal kicks. We find that most BHBs merging within LIGO's instrumental horizon come from relatively metal-poor progenitors (<0.2 Z⊙). The total masses of merging BHBs span a large range of values, from ∼6 to ∼82 M⊙. In our fiducial model, merging BHBs consistent with GW150914, GW151226 and GW170104 represent ∼6, 3 and 12 per cent of all BHBs merging within the LIGO horizon, respectively. The heavy systems, like GW150914, come from metal-poor progenitors (<0.15 Z⊙). Most GW150914-like systems merging in the local Universe appear to have formed at high redshift, with a long delay time. In contrast, GW151226-like systems form and merge all the way through the cosmic history, from progenitors with a broad range of metallicities. Future detections will be crucial to put constraints on common envelope, on natal kicks, and on the BHB mass function.

  6. On the Slow time Geomagnetic field Modulation of Cosmic Rays

    NASA Astrophysics Data System (ADS)

    Okpala, K. C.; Egbunu, F.

    2016-12-01

    Cosmic rays of galactic origin are modulated by both heliospheric and geomagnetic conditions. The mutual (and mutually exclusive) contribution of heliospheric and geomagnetic conditions to galactic cosmic ray (GCR) modulation is still an open question. While the rapid-time association of the galactic cosmic ray variation with different heliophysical and geophysical phenomena has been well studied, not as much attention has been paid to slow-time variations, especially with regard to local effects. In this work, we employed monthly means of cosmic ray count rates from two mid-latitude (Hermanus and Rome) and two higher latitude (Inuvik and Oulu) neutron monitors (NMs), and compared their variability with geomagnetic stations that are in close proximity to the NMs. The data span 1966 to 2008 and cover four solar cycles. The difference (CRdiff) between the mean count rate of all days and the mean of the five quietest days for each month was compared with the Dst-related disturbance (Hdiff) derived from the nearby geomagnetic stations. Zeroth- and first-order correlations between the cosmic ray and geomagnetic parameters were computed to ascertain statistical association and test for spurious association. Our results show that solar activity is generally strongly correlated (>0.75) with the mean strength of the GCR count rate and the geomagnetic field during individual solar cycles. The correlation between the mean strength of cosmic ray intensity and geomagnetic field strength is spurious and is basically moderated by solar activity. The signature of convection-driven disturbances at high-latitude geomagnetic stations was evident during the declining phase of the solar cycles close to the solar minima. The absence of this feature in the slow-time varying cosmic ray count rates at all stations, and in the mid-latitude geomagnetic stations, suggests that local geomagnetic disturbances do not play a significant role in modulating the cosmic ray flux.

  7. Cosmic ray variations of solar origin in relation to human physiological state during the December 2006 solar extreme events

    NASA Astrophysics Data System (ADS)

    Papailiou, M.; Mavromichalaki, H.; Vassilaki, A.; Kelesidis, K. M.; Mertzanos, G. A.; Petropoulos, B.

    2009-02-01

    There is an increasing amount of evidence linking biological effects to solar and geomagnetic disturbances. A series of studies has been published on changes in human physiological responses at different levels of geomagnetic activity. In this study, we consider the possible relation between the daily variations of cosmic ray intensity, measured by the Neutron Monitor at the Cosmic Ray Station of the University of Athens (http://cosray.phys.uoa.gr), and the average daily and hourly heart rate variations of persons, with no symptoms or hospital admission, monitored by Holter electrocardiogram. This work refers to a group of persons admitted to the cardiological clinic of the KAT Hospital in Athens during the period from 4th to 24th December 2006, which was characterized by extreme solar and geomagnetic activity. A series of Forbush decreases started on 6th December and lasted until the end of the month, and a great solar proton event causing a Ground Level Enhancement (GLE) of the cosmic ray intensity occurred on 13th December. A sudden decrease of the cosmic ray intensity on 15th December, when a geomagnetic storm was registered, was also recorded at the Athens Neutron Monitor station (cut-off rigidity 8.53 GV) with an amplitude of 4%. We note that during geomagnetically quiet days the heart rate and the cosmic ray intensity variations are positively correlated. When intense cosmic ray variations occur, such as Forbush decreases and relativistic proton events produced by strong solar phenomena, cosmic ray intensity and heart rate reach minimum values and their variations coincide. During these events the correlation coefficient of these two parameters changes and follows the behavior of the cosmic ray intensity variations. This is only a small part of an extended investigation, which began with data from 2002 and is still in progress.

  8. MEDIAN-BASED INCREMENTAL COST-EFFECTIVENESS RATIOS WITH CENSORED DATA

    PubMed Central

    Bang, Heejung; Zhao, Hongwei

    2016-01-01

    Cost-effectiveness is an essential part of treatment evaluation, in addition to effectiveness. In the cost-effectiveness analysis, a measure called the incremental cost-effectiveness ratio (ICER) is widely utilized, and the mean cost and the mean (quality-adjusted) life years have served as norms to summarize cost and effectiveness for a study population. Recently, the median-based ICER was proposed for complementary or sensitivity analysis purposes. In this paper, we extend this method when some data are censored. PMID:26010599
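
    The ICER arithmetic itself is one line; for orientation only, the sketch below plugs in invented censoring-adjusted medians (in practice these would come from a censored-data estimator such as Kaplan-Meier, as the paper proposes).

    ```python
    # Hypothetical censoring-adjusted medians per arm; illustrative numbers only.
    median_cost = {"treatment": 48_000.0, "control": 35_000.0}   # dollars
    median_qaly = {"treatment": 6.8, "control": 5.9}             # quality-adjusted life years

    # Incremental cost-effectiveness ratio: incremental cost per incremental QALY
    icer = (median_cost["treatment"] - median_cost["control"]) / \
           (median_qaly["treatment"] - median_qaly["control"])
    print(f"median-based ICER ~ ${icer:,.0f} per QALY gained")
    ```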

  9. Prevalence Incidence Mixture Models

    Cancer.gov

    The R package and webtool fit Prevalence Incidence Mixture models to left-censored and irregularly interval-censored time-to-event data that are commonly found in screening cohorts assembled from electronic health records. Absolute and relative risk can be estimated for simple random sampling and stratified sampling (the two approaches of superpopulation and a finite population are supported for target populations). Non-parametric (absolute risks only), semi-parametric, weakly-parametric (using B-splines), and some fully parametric (such as the logistic-Weibull) models are supported.

  10. A maximum pseudo-profile likelihood estimator for the Cox model under length-biased sampling

    PubMed Central

    Huang, Chiung-Yu; Qin, Jing; Follmann, Dean A.

    2012-01-01

    This paper considers semiparametric estimation of the Cox proportional hazards model for right-censored and length-biased data arising from prevalent sampling. To exploit the special structure of length-biased sampling, we propose a maximum pseudo-profile likelihood estimator, which can handle time-dependent covariates and is consistent under covariate-dependent censoring. Simulation studies show that the proposed estimator is more efficient than its competitors. A data analysis illustrates the methods and theory. PMID:23843659

  11. Future Experiments in Astrophysics

    NASA Technical Reports Server (NTRS)

    Krizmanic, John F.

    2002-01-01

    The measurement methodologies of astrophysics experiments reflect the enormous variation of the astrophysical radiation itself. The diverse nature of the astrophysical radiation, e.g. cosmic rays, electromagnetic radiation, and neutrinos, is further complicated by the enormous span in energy, from the 1.95 K relic neutrino background to cosmic rays with energy greater than 10^20 eV. The measurement of gravity waves and search for dark matter constituents are also of astrophysical interest. Thus, the experimental techniques employed to determine the energy of the incident particles are strongly dependent upon the specific particles and energy range to be measured. This paper summarizes some of the calorimetric methodologies and measurements planned by future astrophysics experiments. A focus will be placed on the measurement of higher energy astrophysical radiation. Specifically, future cosmic ray, gamma ray, and neutrino experiments will be discussed.

  12. An update on the correlation between the cosmic radiation intensity and the geomagnetic AA index

    NASA Technical Reports Server (NTRS)

    Shea, M. A.; Smart, D. F.

    1985-01-01

    A statistical study of the relation between the cosmic ray intensity, as observed by a neutron monitor, and the geomagnetic aa index, as representative of perturbations in the plasma and interplanetary magnetic field in the heliosphere, has been updated to specifically exclude time periods around the reversal of the solar magnetic field. The results of this study show a strong negative correlation for the period 1960 through 1968, with a correlation coefficient of approximately -0.86. However, there is essentially no correlation between the cosmic ray intensity and the aa index for the period 1972-1979 (i.e. correlation coefficient less than 0.16). These results would appear to support the theory of preferential particle propagation into the heliosphere via the ecliptic during the period 1960-1968 and via the solar polar regions during 1972-1979.

  13. The galactic gamma-ray distribution: Implications for galactic structure and the radial cosmic ray gradient

    NASA Technical Reports Server (NTRS)

    Harding, A. K.; Stecker, F. W.

    1984-01-01

    The radial distribution of gamma ray emissivity in the Galaxy was derived from flux longitude profiles, using both the final SAS-2 results and the recently corrected COS-B results and analyzing the northern and southern galactic regions separately. The recent CO surveys of the Southern Hemisphere were used in conjunction with the Northern Hemisphere data to derive the radial distribution of cosmic rays on both sides of the galactic plane. In addition to the 5 kpc ring, there is evidence from the radial asymmetry for spiral features which are consistent with those derived from the distribution of bright HII regions. Positive evidence was also found for a strong increase in the cosmic ray flux in the inner Galaxy, particularly in the 5 kpc region in both halves of the plane.

  14. Inverse probability weighting to control confounding in an illness-death model for interval-censored data.

    PubMed

    Gillaizeau, Florence; Sénage, Thomas; Le Borgne, Florent; Le Tourneau, Thierry; Roussel, Jean-Christian; Leffondrè, Karen; Porcher, Raphaël; Giraudeau, Bruno; Dantan, Etienne; Foucher, Yohann

    2018-04-15

    Multistate models with interval-censored data, such as the illness-death model, are still not used to any considerable extent in medical research, despite the significant literature demonstrating their advantages compared to usual survival models. Possible explanations are their limited availability in classical statistical software or, when they are available, the limitations of multivariable modelling to take confounding into consideration. In this paper, we propose a strategy based on propensity scores that allows population causal effects to be estimated: inverse probability weighting in the illness-death semi-Markov model with interval-censored data. Using simulated data, we validated the performance of the proposed approach. We also illustrate the usefulness of the method with an application evaluating the relationship between the inadequate size of an aortic bioprosthesis and its degeneration and/or patient death. We have updated the R package multistate to facilitate the future use of this method. Copyright © 2017 John Wiley & Sons, Ltd.

  15. How to Deal with Interval-Censored Data Practically while Assessing the Progression-Free Survival: A Step-by-Step Guide Using SAS and R Software.

    PubMed

    Dugué, Audrey Emmanuelle; Pulido, Marina; Chabaud, Sylvie; Belin, Lisa; Gal, Jocelyn

    2016-12-01

    We describe how to estimate progression-free survival while dealing with interval-censored data in the setting of clinical trials in oncology. Three procedures with SAS and R statistical software are described: first, a nonparametric maximum likelihood estimation of the survival curve using the EM-ICM (Expectation and Maximization-Iterative Convex Minorant) algorithm as described by Wellner and Zhan in 1997; second, a sensitivity analysis procedure in which the progression time is assigned (i) at the midpoint, (ii) at the upper limit (reflecting the standard analysis, where the progression time is assigned at the first radiologic exam showing progressive disease), or (iii) at the lower limit of the censoring interval; and finally, two multiple imputation approaches, using a uniform or the nonparametric maximum likelihood estimation (NPMLE) distribution. Clin Cancer Res; 22(23); 5629-35. ©2016 American Association for Cancer Research (AACR).
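
    The three single-time-point sensitivity assignments are easy to sketch; the visit intervals below are invented, and a full analysis would feed these assigned times into a Kaplan-Meier or NPMLE fit rather than a raw median.

    ```python
    import numpy as np

    # Hypothetical progression intervals (months): progression is detected
    # between the last progression-free visit L and the first visit R showing
    # progressive disease.
    L = np.array([3.0, 6.0, 2.0, 9.0])
    R = np.array([6.0, 9.0, 4.0, 12.0])

    assignments = {
        "lower (at L)":           L,
        "midpoint":               (L + R) / 2.0,
        "upper (at R, standard)": R,
    }
    for label, t in assignments.items():
        print(f"{label:>24}: median PFS = {np.median(t):.1f} months")
    ```

    Comparing the three medians gives a quick read on how much the interval width, rather than the data, is driving the reported progression-free survival.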

  16. Conditional power and predictive power based on right censored data with supplementary auxiliary information.

    PubMed

    Sun, Libo; Wan, Ying

    2018-04-22

    Conditional power and predictive power provide estimates of the probability of success at the end of the trial based on the information from the interim analysis. The observed value of the time-to-event endpoint at the interim analysis could be biased for the true treatment effect due to early censoring, leading to a biased estimate of conditional power and predictive power. In such cases, the estimates and inference for this right-censored primary endpoint are enhanced by incorporating a fully observed auxiliary variable. We assume a bivariate normal distribution of the transformed primary variable and a correlated auxiliary variable. Simulation studies not only show enhanced conditional power and predictive power but also provide a framework for a more efficient futility interim analysis, in terms of improved estimator accuracy, a smaller inflation in type II error, and an optimal timing for such an analysis. We also illustrate the new approach with a real clinical trial example. Copyright © 2018 John Wiley & Sons, Ltd.

  17. Quantifying and estimating the predictive accuracy for censored time-to-event data with competing risks.

    PubMed

    Wu, Cai; Li, Liang

    2018-05-15

    This paper focuses on quantifying and estimating the predictive accuracy of prognostic models for time-to-event outcomes with competing events. We consider the time-dependent discrimination and calibration metrics, including the receiver operating characteristic curve and the Brier score, in the context of competing risks. To address censoring, we propose a unified nonparametric estimation framework for both discrimination and calibration measures, by weighting the censored subjects with the conditional probability of the event of interest given the observed data. The proposed method can be extended to time-dependent predictive accuracy metrics constructed from a general class of loss functions. We apply the methodology to a data set from the African American Study of Kidney Disease and Hypertension to evaluate the predictive accuracy of a prognostic risk score in predicting end-stage renal disease, accounting for the competing risk of pre-end-stage renal disease death, and evaluate its numerical performance in extensive simulation studies. Copyright © 2018 John Wiley & Sons, Ltd.

  18. Semi-Markov models for interval censored transient cognitive states with back transitions and a competing risk

    PubMed Central

    Wei, Shaoceng; Kryscio, Richard J.

    2015-01-01

    Continuous-time multi-state stochastic processes are useful for modeling the flow of subjects from intact cognition to dementia, with mild cognitive impairment and global impairment as intervening transient cognitive states and death as a competing risk. Each subject's cognition is assessed periodically, resulting in interval censoring for the cognitive states, while death without dementia is not interval censored. Since back transitions among the transient states are possible, Markov chains are often applied to this type of panel data. In this manuscript we apply a semi-Markov process in which we assume that the waiting times are Weibull distributed, except for transitions from the baseline state, which are exponentially distributed, and in which we assume no additional changes in cognition occur between two assessments. We implement a quasi-Monte Carlo (QMC) method to calculate the higher order integration needed for likelihood estimation. We apply our model to a real dataset, the Nun Study, a cohort of 461 participants. PMID:24821001

  19. Semi-Markov models for interval censored transient cognitive states with back transitions and a competing risk.

    PubMed

    Wei, Shaoceng; Kryscio, Richard J

    2016-12-01

    Continuous-time multi-state stochastic processes are useful for modeling the flow of subjects from intact cognition to dementia with mild cognitive impairment and global impairment as intervening transient cognitive states and death as a competing risk. Each subject's cognition is assessed periodically resulting in interval censoring for the cognitive states while death without dementia is not interval censored. Since back transitions among the transient states are possible, Markov chains are often applied to this type of panel data. In this manuscript, we apply a semi-Markov process in which we assume that the waiting times are Weibull distributed except for transitions from the baseline state, which are exponentially distributed and in which we assume no additional changes in cognition occur between two assessments. We implement a quasi-Monte Carlo (QMC) method to calculate the higher order integration needed for likelihood estimation. We apply our model to a real dataset, the Nun Study, a cohort of 461 participants. © The Author(s) 2014.
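
    A toy simulation of this data structure (not the Nun Study model, its exponential baseline transitions, or its QMC likelihood; every transition probability and Weibull parameter below is invented) shows how periodic assessment interval-censors the transient cognitive states while death stays exactly observed.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Toy semi-Markov chain: 0 = intact, 1 = impaired, 2 = dementia, 3 = death.
    # Waiting times are Weibull(shape, scale); next states are drawn from
    # per-state transition probabilities. All numbers are invented.
    next_states = {0: [1, 3], 1: [0, 2, 3], 2: [3]}
    trans_probs = {0: [0.8, 0.2], 1: [0.3, 0.5, 0.2], 2: [1.0]}
    weib_params = {0: (1.5, 6.0), 1: (1.2, 4.0), 2: (1.0, 5.0)}  # (shape, scale), years

    def simulate_path(t_max=20.0):
        t, s = 0.0, 0
        path = [(0.0, 0)]
        while s != 3 and t < t_max:
            shape, scale = weib_params[s]
            t += scale * rng.weibull(shape)             # Weibull waiting time
            s = int(rng.choice(next_states[s], p=trans_probs[s]))
            path.append((t, s))
        return path

    def observe_at_visits(path, visit_gap=1.0, t_max=20.0):
        """Panel observation: the state is recorded only at periodic visits,
        so moves among transient states are interval-censored; death is exact."""
        times = [t for t, _ in path]
        obs = []
        for v in np.arange(0.0, t_max + 1e-9, visit_gap):
            idx = np.searchsorted(times, v, side="right") - 1
            state = path[idx][1]
            if state == 3:
                obs.append((times[idx], 3))             # exact death time
                break
            obs.append((float(v), state))
        return obs

    print(observe_at_visits(simulate_path()))
    ```

    Back transitions (state 1 returning to state 0) can occur entirely between two visits, which is exactly the information loss that motivates the interval-censored likelihood in the paper.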

  20. Multivariate longitudinal data analysis with censored and intermittent missing responses.

    PubMed

    Lin, Tsung-I; Lachos, Victor H; Wang, Wan-Lun

    2018-05-08

    The multivariate linear mixed model (MLMM) has emerged as an important analytical tool for longitudinal data with multiple outcomes. However, the analysis of multivariate longitudinal data could be complicated by the presence of censored measurements because of a detection limit of the assay in combination with unavoidable missing values arising when subjects miss some of their scheduled visits intermittently. This paper presents a generalization of the MLMM approach, called the MLMM-CM, for a joint analysis of the multivariate longitudinal data with censored and intermittent missing responses. A computationally feasible expectation maximization-based procedure is developed to carry out maximum likelihood estimation within the MLMM-CM framework. Moreover, the asymptotic standard errors of fixed effects are explicitly obtained via the information-based method. We illustrate our methodology by using simulated data and a case study from an AIDS clinical trial. Experimental results reveal that the proposed method is able to provide more satisfactory performance as compared with the traditional MLMM approach. Copyright © 2018 John Wiley & Sons, Ltd.

  1. Estimating contaminant loads in rivers: An application of adjusted maximum likelihood to type 1 censored data

    USGS Publications Warehouse

    Cohn, Timothy A.

    2005-01-01

    This paper presents an adjusted maximum likelihood estimator (AMLE) that can be used to estimate fluvial transport of contaminants, like phosphorus, that are subject to censoring because of analytical detection limits. The AMLE is a generalization of the widely accepted minimum variance unbiased estimator (MVUE), and Monte Carlo experiments confirm that it shares essentially all of the MVUE's desirable properties, including high efficiency and negligible bias. In particular, the AMLE exhibits substantially less bias than alternative censored‐data estimators such as the MLE (Tobit) or the MLE followed by a jackknife. As with the MLE and the MVUE the AMLE comes close to achieving the theoretical Fréchet–Cramér–Rao bounds on its variance. This paper also presents a statistical framework, applicable to both censored and complete data, for understanding and estimating the components of uncertainty associated with load estimates. This can serve to lower the cost and improve the efficiency of both traditional and real‐time water quality monitoring.
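
    The AMLE's bias correction is beyond a short sketch, but the type 1 censored-data (Tobit-type) likelihood it builds on is compact: detected values contribute density terms, and each nondetect contributes the probability mass below the detection limit. A minimal lognormal version with invented concentrations, not a substitute for the AMLE itself:

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    # Hypothetical log-concentrations; nondetects are known only to lie
    # below log(DL).
    log_dl = np.log(0.1)
    detected = np.log(np.array([0.25, 0.4, 0.15, 0.9, 0.3, 0.6]))
    n_nd = 4   # four samples reported as "< 0.1"

    def neg_loglik(params):
        mu, log_sigma = params
        sigma = np.exp(log_sigma)                       # keep sigma positive
        ll = norm.logpdf(detected, mu, sigma).sum()     # exact observations
        ll += n_nd * norm.logcdf(log_dl, mu, sigma)     # left-censored mass
        return -ll

    fit = minimize(neg_loglik, x0=np.array([np.log(0.3), 0.0]), method="Nelder-Mead")
    mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
    print(f"lognormal mean-log ~ {mu_hat:.2f}, sd-log ~ {sigma_hat:.2f}")
    ```

    The paper's point is that this plain MLE carries small-sample bias that the AMLE then adjusts away; the likelihood above is only the starting point.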

  2. THE LOCAL GROUP IN THE COSMIC WEB

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forero-Romero, J. E.; González, R., E-mail: je.forero@uniandes.edu.co, E-mail: regonzar@astro.puc.cl

    We explore the characteristics of the cosmic web around Local-Group (LG)-like pairs using a cosmological simulation in the ΛCDM cosmology. We use the Hessian of the gravitational potential to classify regions on scales of ∼2 Mpc as a peak, sheet, filament, or void. The sample of LG counterparts is represented by two samples of halo pairs. The first is a general sample composed of pairs with similar masses and isolation criteria as observed for the LG. The second is a subset with additional observed kinematic constraints such as relative pair velocity and separation. We find that the pairs in the LG sample with all constraints are: (1) preferentially located in filaments and sheets, (2) located in a narrow range of local overdensity 0 < δ < 2, web ellipticity 0.1 < e < 1.0, and prolateness –0.4 < p < 0.4, (3) strongly aligned with the cosmic web. The alignments are such that the pair orbital angular momentum tends to be perpendicular to the smallest tidal eigenvector, ê₃, which lies along the filament direction or the sheet plane. A stronger alignment is present for the vector linking the two halos with ê₃. Additionally, we fail to find a strong correlation between the spin of each halo in the pair and the cosmic web. All of these trends are expected to a great extent from the selection of LG total mass in the general sample. Applied to the observed LG, there is a potential conflict between the alignments of the different satellite planes and the numerical evidence for satellite accretion along filaments, the direction defined by ê₃. This highlights the relevance of achieving a precise characterization of the location of the LG in the cosmic web in the cosmological context provided by ΛCDM.

  3. Neutrino production in electromagnetic cascades: An extra component of cosmogenic neutrino at ultrahigh energies

    NASA Astrophysics Data System (ADS)

    Wang, Kai; Liu, Ruo-Yu; Li, Zhuo; Dai, Zi-Gao

    2017-03-01

    Muon pairs can be produced in the annihilation of ultrahigh energy (UHE, E ≳ 10^18 eV) photons with the low energy cosmic background radiation in intergalactic space, giving birth to neutrinos. Although the branching ratio of muon pair production is low, the products of other channels, which are mainly electron/positron pairs, will probably transfer most of their energy to a newly generated UHE photon in the subsequent interaction with the cosmic background radiation via Compton scattering in the deep Klein-Nishina regime. The regeneration of these new UHE photons then provides a second chance to produce muon pairs, enhancing the neutrino flux. We investigate neutrino production during the propagation of UHE photons in intergalactic space at different redshifts, considering various competing processes such as pair production, double pair production for UHE photons, and triplet production and synchrotron radiation for UHE electrons. Following the analytic method of Gould and Rephaeli, we first study the electromagnetic cascade initiated by a UHE photon, paying particular attention to the leading particle in the cascade process. Regarding the least energetic outgoing particles as energy loss, we obtain the effective penetration length of the leading particle, as well as the energy loss rate including the neutrino emission rate in the cascade process. Finally, we find that an extra component of UHE neutrinos will arise from the propagation of UHE cosmic rays due to the generated UHE photons and electrons/positrons. However, the flux of this component is quite small, at most 10% of that of the conventional cosmogenic neutrino at a few EeV, in the absence of a strong intergalactic magnetic field and a strong cosmic radio background. The precise contribution of the extra component depends on several factors, e.g., the cosmic radio background, the intergalactic magnetic field, and the spectrum of protons, which are discussed in this work.

  4. Snow water equivalent measured with cosmic-ray neutrons: reviving a little known but highly successful field method

    NASA Astrophysics Data System (ADS)

    Desilets, D.

    2012-12-01

    Secondary cosmic-ray neutrons are attenuated strongly by water in either solid or liquid form, suggesting a method for measuring snow water equivalent that has several advantages over alternative technologies. The cosmic-ray attenuation method is passive, portable, highly adaptable, and operates over an exceptionally large range of snow pack thicknesses. But despite promising initial observations made in the 1970s, the technique today remains practically unknown to snow hydrologists. Side-by-side measurements performed over the past several years with a snow pillow and a submerged cosmic-ray probe demonstrate that the cosmic-ray attenuation method merits consideration for a wide range of applications—especially those where alternative methods are made problematic by dense vegetation, rough terrain, deep snowpack or a lack of vehicular access. During the snow-free season, the instrumentation can be used to monitor soil moisture, thus providing another widely sought field measurement. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration. With main facilities in Albuquerque, N.M., and Livermore, C.A., Sandia has major R&D responsibilities in national security, energy and environmental technologies, and economic competitiveness.

  5. Dancing in the dark: galactic properties trace spin swings along the cosmic web

    NASA Astrophysics Data System (ADS)

    Dubois, Y.; Pichon, C.; Welker, C.; Le Borgne, D.; Devriendt, J.; Laigle, C.; Codis, S.; Pogosyan, D.; Arnouts, S.; Benabed, K.; Bertin, E.; Blaizot, J.; Bouchet, F.; Cardoso, J.-F.; Colombi, S.; de Lapparent, V.; Desjacques, V.; Gavazzi, R.; Kassin, S.; Kimm, T.; McCracken, H.; Milliard, B.; Peirani, S.; Prunet, S.; Rouberol, S.; Silk, J.; Slyz, A.; Sousbie, T.; Teyssier, R.; Tresse, L.; Treyer, M.; Vibert, D.; Volonteri, M.

    2014-10-01

    A large-scale hydrodynamical cosmological simulation, Horizon-AGN, is used to investigate the alignment between the spin of galaxies and the cosmic filaments above redshift 1.2. The analysis of more than 150 000 galaxies per time step in the redshift range 1.2 < z < 1.8 with morphological diversity shows that the spin of low-mass blue galaxies is preferentially aligned with their neighbouring filaments, while high-mass red galaxies tend to have a perpendicular spin. The reorientation of the spin of massive galaxies is provided by galaxy mergers, which are significant in their mass build-up. We find that the stellar mass transition from alignment to misalignment happens around 3 × 10^10 M⊙. Galaxies form in the vorticity-rich neighbourhood of filaments, and migrate towards the nodes of the cosmic web as they convert their orbital angular momentum into spin. The signature of this process can be traced to the properties of galaxies, as measured relative to the cosmic web. We argue that a strong source of feedback such as active galactic nuclei is mandatory to quench in situ star formation in massive galaxies and promote various morphologies. It allows mergers to play their key role by reducing post-merger gas inflows and, therefore, keeping spins misaligned with cosmic filaments.

  6. SOLAR COSMIC RAYS AND SOFT RADIATION OBSERVED AT 5,000,000 KILOMETERS FROM EARTH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnoldy, R.L.; Hoffman, R.A.; Winckler, J.R.

    1960-09-01

    During the period Mar. 27 to Apr. 6, 1960, the integrating ionization chamber and Geiger counter in Pioneer V detected solar cosmic rays and some soft-radiation effects associated with a high level of solar activity. The space probe was 5 x 10^6 km from the earth, approximately in the plane of the ecliptic, and located somewhat behind the sun-earth radius toward the sun. The solar activity was associated with McMath plage region 5615 and was characterized by numerous flares of all sizes, large loops and surge prominences, and strong emission over a wide range of frequencies. On Mar. 31 at 0800 UT, a severe geomagnetic storm began on earth, accompanied by major earth-current disturbances, a complete blackout of the North Atlantic communications channel, and auroral displays. At the same time, a large Forbush decrease occurred in the galactic cosmic radiation. An intense series of balloon flights was conducted to record the counting-rate increases at high altitudes due to solar cosmic rays and auroral x rays. Explorer VII showed substantial changes in the radiation belts and detected the solar cosmic rays. The observations of Pioneer V are summarized and compared to the findings of Explorer VII for the same period. (B.O.G.)

  7. Costs of cervical cancer treatment: population-based estimates from Ontario

    PubMed Central

    Pendrith, C.; Thind, A.; Zaric, G.S.; Sarma, S.

    2016-01-01

    Objectives The objectives of the present study were to estimate the overall and specific medical care costs associated with cervical cancer in the first 5 years after diagnosis in Ontario. Methods Incident cases of invasive cervical cancer during 2007–2010 were identified from the Ontario Cancer Registry and linked to administrative databases held at the Institute for Clinical Evaluative Sciences. Mean costs in 2010 Canadian dollars were estimated using the arithmetic mean and estimators that adjust for censored data. Results Mean age of the patients in the study cohort (779 cases) was 49.3 years. The mean overall medical care cost was $39,187 [standard error (se): $1,327] in the 1st year after diagnosis. Costs in year 1 ranged from $34,648 (se: $1,275) for those who survived at least 1 year to $69,142 (se: $4,818) for those who died from cervical cancer within 1 year. At 5 years after diagnosis, the mean overall unadjusted cost was $63,131 (se: $3,131), and the cost adjusted for censoring was $68,745 (se: $2,963). Inpatient hospitalizations and cancer-related care were the two largest components of cancer treatment costs. Conclusions We found that the estimated mean costs that did not account for censoring were consistently undervalued, highlighting the importance of estimates based on censoring-adjusted costs in cervical cancer. Our results are reliable for estimating the economic burden of cervical cancer and the cost-effectiveness of cervical cancer prevention strategies. PMID:27122978

  8. Community drinking water quality monitoring data: utility for public health research and practice.

    PubMed

    Jones, Rachael M; Graber, Judith M; Anderson, Robert; Rockne, Karl; Turyk, Mary; Stayner, Leslie T

    2014-01-01

    Environmental Public Health Tracking (EPHT) tracks the occurrence and magnitude of environmental hazards and associated adverse health effects over time. The EPHT program has formally expanded its scope to include finished drinking water quality. Our objective was to describe the features, strengths, and limitations of using finished drinking water quality data from community water systems (CWSs) for EPHT applications, focusing on atrazine and nitrogen compounds in 8 Midwestern states. Water quality data were acquired after meeting with state partners and reviewed and merged for analysis. Data and the coding of variables, particularly with respect to censored results (nondetects), were not standardized between states. Monitoring frequency varied between CWSs and between atrazine and nitrates, but this was in line with regulatory requirements. Cumulative distributions of all contaminants were not the same in all states (Peto-Prentice test P < .001). Atrazine results were highly censored in all states (76.0%-99.3%); higher concentrations were associated with increased measurement frequency and surface water as the CWS source water type. Nitrate results showed substantial state-to-state variability in censoring (20.5%-100%) and in associations between concentrations and the CWS source water type. Statistical analyses of these data are challenging due to high rates of censoring and uncertainty about the appropriateness of parametric assumptions for time-series data. Although monitoring frequency was consistent with regulations, the magnitude of time gaps coupled with uncertainty about CWS service areas may limit linkage with health outcome data.

  9. Variable selection in a flexible parametric mixture cure model with interval-censored data.

    PubMed

    Scolas, Sylvie; El Ghouch, Anouar; Legrand, Catherine; Oulhaj, Abderrahim

    2016-03-30

    In standard survival analysis, it is generally assumed that every individual will someday experience the event of interest. However, this is not always the case, as some individuals may not be susceptible to this event. Also, in medical studies, patients frequently come to scheduled interviews, so the time to the event is only known to occur between two visits. That is, the data are interval-censored with a cure fraction. Variable selection in such a setting is of outstanding interest. The covariates impacting survival are not necessarily the same as those impacting the probability of experiencing the event. The objective of this paper is to develop a parametric but flexible statistical model to analyze data that are interval-censored and include a fraction of cured individuals, when the number of potential covariates may be large. We use the parametric mixture cure model with an accelerated failure time regression model for the survival, along with the extended generalized gamma distribution for the error term. To overcome the issue of non-stable and non-continuous variable selection procedures, we extend the adaptive LASSO to our model. By means of simulation studies, we show good performance of our method and discuss the behavior of the estimates under varying cure and censoring proportions. Lastly, our proposed method is illustrated with a real dataset studying the time until conversion to mild cognitive impairment, a possible precursor of Alzheimer's disease. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  10. Mediation Analysis with Survival Outcomes: Accelerated Failure Time vs. Proportional Hazards Models.

    PubMed

    Gelfand, Lois A; MacKinnon, David P; DeRubeis, Robert J; Baraldi, Amanda N

    2016-01-01

    Survival time is an important type of outcome variable in treatment research. Currently, limited guidance is available regarding performing mediation analyses with survival outcomes, which generally do not have normally distributed errors and contain unobserved (censored) events. We present considerations for choosing an approach, using a comparison of semi-parametric proportional hazards (PH) and fully parametric accelerated failure time (AFT) approaches for illustration. We compare PH and AFT models and procedures in their integration into mediation models and review their ability to produce coefficients that estimate causal effects. Using simulation studies modeling Weibull-distributed survival times, we compare the statistical properties of mediation analyses incorporating PH and AFT approaches (employing the SAS procedures PHREG and LIFEREG, respectively) under varied data conditions, some including censoring. A simulated data set illustrates the findings. AFT models integrate more easily than PH models into mediation models. Furthermore, mediation analyses incorporating LIFEREG produce coefficients that can estimate causal effects and demonstrate superior statistical properties. Censoring introduces bias in the coefficient estimate representing the treatment effect on the outcome: underestimation in LIFEREG and overestimation in PHREG. With LIFEREG, this bias can be addressed using an alternative estimate obtained by combining other coefficients, whereas this is not possible with PHREG. When Weibull assumptions are not violated, there are compelling advantages to using LIFEREG over PHREG for mediation analyses involving survival-time outcomes. Irrespective of the procedures used, the interpretation of coefficients, the effects of censoring on coefficient estimates, and statistical properties should be taken into account when reporting results.

  11. Cosmic ray event in 994 C.E. recorded in radiocarbon from Danish oak

    NASA Astrophysics Data System (ADS)

    Fogtmann-Schulz, A.; Østbø, S. M.; Nielsen, S. G. B.; Olsen, J.; Karoff, C.; Knudsen, M. F.

    2017-08-01

    We present measurements of radiocarbon in annual tree rings from the time period 980-1006 Common Era (C.E.), thereby covering the cosmic ray event in 994 C.E. The new radiocarbon record from Danish oak is based on both earlywood and latewood fractions of the tree rings, which makes it possible to study seasonal variations in 14C production. The measurements show a rapid increase of ~10‰ from 993 to 994 C.E. in latewood, followed by a modest decline and relatively high values over the ensuing ~10 years. This rapid increase occurs from 994 to 995 C.E. in earlywood, suggesting that the cosmic ray event most likely occurred during the period between April and June 994 C.E. Our new record from Danish oak shows strong agreement with existing Δ14C records from Japan, thus supporting the hypothesis that the 994 C.E. cosmic ray event was uniform throughout the Northern Hemisphere and therefore can be used as an astrochronological tie point to anchor floating chronologies of ancient history.

  12. Emission line galaxies at high redshift and analogs of the sources of cosmic reionization

    NASA Astrophysics Data System (ADS)

    Schaerer, D.

    2017-11-01

    We present recent work on emission line galaxies at high redshift and searches for analogs of the sources of cosmic reionization at low redshift. The VIMOS Ultra-Deep Survey (VUDS) carried out at the VLT has assembled more than 7000 spectra of galaxies from z ∼ 1.5 to 6, allowing us to address a wide diversity of questions with statistically meaningful samples. From VUDS we have recently identified a sample of CIII] and CIV] emitters at z ∼ 2-4 whose properties we present and discuss here (cf. Nakajima et al. 2017; Le Fevre et al. 2017). These objects provide interesting insight into the C/O ratio at high z, the nature and hardness of their ionizing source, the ionizing photon production, and other questions. Targeting compact strong emission line galaxies with high [OIII]/[OII] ratios with the COS spectrograph on board HST, we have recently been able to find several relatively strong Lyman continuum emitters at z ∼ 0.3 (Izotov et al. 2016ab). We describe the physical properties of these unique, rare low-z sources, which are found to be comparable to those of typical z > 6 galaxies and thus currently the best analogs for the sources of cosmic reionization (cf. Schaerer et al. 2016). We also briefly discuss open questions and future steps.

  13. Non-Gaussianity of the cosmic infrared background anisotropies - II. Predictions of the bispectrum and constraints forecast

    NASA Astrophysics Data System (ADS)

    Pénin, A.; Lacasa, F.; Aghanim, N.

    2014-03-01

    Using a full analytical computation of the bispectrum based on the halo model together with the halo occupation number, we derive the bispectrum of the cosmic infrared background (CIB) anisotropies that trace the clustering of dusty star-forming galaxies. We focus our analysis on wavelengths in the far-infrared and the sub-millimeter typical of the Planck/HFI and Herschel/SPIRE instruments: 350, 550, 850 and 1380 μm. We explore the bispectrum behaviour for several models of galaxy evolution and show that it is strongly sensitive to that ingredient. Contrary to the power spectrum, the bispectrum, at the four wavelengths, seems dominated by low-redshift galaxies. Such a contribution can hardly be limited by applying low flux cuts. We also discuss the contributions of halo mass as a function of redshift and wavelength, recovering that each term is sensitive to a different mass range. Furthermore, we show that the CIB bispectrum is a strong contaminant of the cosmic microwave background bispectrum at 850 μm and higher. Finally, a Fisher analysis of the power spectrum, the bispectrum alone, and the combination of both shows that degeneracies on the halo occupation distribution parameters are broken by including the bispectrum information, leading to tight constraints even when including foreground residuals.

  14. Orientation of cosmic web filaments with respect to the underlying velocity field

    NASA Astrophysics Data System (ADS)

    Tempel, E.; Libeskind, N. I.; Hoffman, Y.; Liivamägi, L. J.; Tamm, A.

    2014-01-01

    The large-scale structure of the Universe is characterized by a web-like structure made of voids, sheets, filaments and knots. The structure of this so-called cosmic web is dictated by the local velocity shear tensor. In particular, the local direction of a filament should be strongly aligned with ê_3, the eigenvector associated with the smallest eigenvalue of the tensor. That conjecture is tested here on the basis of a cosmological simulation. The cosmic web delineated by the halo distribution is probed by a marked point process with interactions (the Bisous model), detecting filaments directly from the halo distribution (P-web). The detected P-web filaments are found to be strongly aligned with the local ê_3: the alignment is within 30° for ˜80 per cent of the elements. This indicates that large-scale filaments defined purely from the distribution of haloes carry more than just morphological information, although the Bisous model does not make any prior assumption on the underlying shear tensor. The P-web filaments are also compared to the structure revealed from the velocity shear tensor itself (V-web). In the densest regions, the P- and V-web filaments overlap well (90 per cent), whereas in lower density regions, the P-web filaments preferentially mark sheets in the V-web.

  15. Heavy ion irradiation of crystalline water ice. Cosmic ray amorphisation cross-section and sputtering yield

    NASA Astrophysics Data System (ADS)

    Dartois, E.; Augé, B.; Boduch, P.; Brunetto, R.; Chabot, M.; Domaracka, A.; Ding, J. J.; Kamalou, O.; Lv, X. Y.; Rothard, H.; da Silveira, E. F.; Thomas, J. C.

    2015-04-01

    Context. Under cosmic irradiation, the interstellar water ice mantles evolve towards a compact amorphous state. Crystalline ice amorphisation was previously monitored mainly in the keV to hundreds of keV ion energy range. Aims: We experimentally investigate heavy ion irradiation amorphisation of crystalline ice, at high energies closer to true cosmic rays, and explore the water-ice sputtering yield. Methods: We irradiated thin crystalline ice films with MeV to GeV swift ion beams, produced at the GANIL accelerator. The ice infrared spectral evolution as a function of fluence is monitored with in-situ infrared spectroscopy (induced amorphisation of the initial crystalline state into a compact amorphous phase). Results: The crystalline ice amorphisation cross-section is measured in the high electronic stopping-power range for different temperatures. At large fluence, the ice sputtering is measured on the infrared spectra, and the fitted sputtering-yield dependence, combined with previous measurements, is quadratic over three decades of electronic stopping power. Conclusions: The final state of cosmic ray irradiation for porous amorphous and crystalline ice, as monitored by infrared spectroscopy, is the same, but with a large difference in cross-section, hence in time scale, in an astrophysical context. The cosmic ray water-ice sputtering rates compete with the UV photodesorption yields reported in the literature. The prevalence of direct cosmic ray sputtering over cosmic-ray-induced photon photodesorption may be particularly true for ices strongly bonded to the ice mantle surfaces, such as hydrogen-bonded ice structures or, more generally, the so-called polar ices. Experiments were performed at the Grand Accélérateur National d'Ions Lourds (GANIL), Caen, France. Part of this work was financed by the French INSU-CNRS programme "Physique et Chimie du Milieu Interstellaire" (PCMI) and the ANR IGLIAS.

  16. Cosmic dosimetry using TLD aboard spacecrafts of the "Cosmos" series

    NASA Astrophysics Data System (ADS)

    Hübner, K.; Schmidt, P.; Fellinger, J.

    Thermoluminescent (TL) detectors were used for dosimetric investigations on the outer surface as well as inside Soviet spacecrafts of the "Cosmos" series. At the outer surface, ultrathin TL detectors, based on CaF2-PTFE and LiF, were arranged in special stacks and exposed to unshielded cosmic radiation. The strong decrease of dose within a few mg/cm2 demonstrates that weakly penetrating radiation dominates in the radiation field under investigation. On the basis of glow curve analysis of the LiF thermoluminescent detectors it could be shown that the high doses are caused by electrons.

  17. Cosmic dosimetry using TLD aboard spacecrafts of the "Cosmos" series.

    PubMed

    Hubner, K; Schmidt, P; Fellinger, J

    1994-11-01

    Thermoluminescent (TL) detectors were used for dosimetric investigations on the outer surface as well as inside Soviet spacecrafts of the "Cosmos" series. At the outer surface, ultrathin TL detectors, based on CaF2-PTFE and LiF, were arranged in special stacks and exposed to unshielded cosmic radiation. The strong decrease of dose within a few mg/cm2 demonstrates that weakly penetrating radiation dominates in the radiation field under investigation. On the basis of glow curve analysis of the LiF thermoluminescent detectors it could be shown that the high doses are caused by electrons.

  18. Anencephalus, drinking water, geomagnetism and cosmic radiation.

    PubMed

    Archer, V E

    1979-01-01

    The mortality rates from anencephalus from 1950-1969 in Canadian cities are shown to be strongly correlated with city growth rate and with horizontal geomagnetic flux, which is directly related to the intensity of cosmic radiation. They are also shown to have some association with the magnesium content of drinking water. Prior work with these data, which showed associations with magnesium in drinking water, mean income, latitude and longitude, was found to be inadequate because it dismissed the observed geographic associations as having little biological meaning, and because the important variables of geomagnetism and city growth rate were overlooked.

  19. Cosmic rays, solar activity, magnetic coupling, and lightning incidence

    NASA Technical Reports Server (NTRS)

    Ely, J. T. A.

    1984-01-01

    A theoretical model is presented and described that unifies the complex influence of several factors on spatial and temporal variation of lightning incidence. These factors include the cosmic radiation, solar activity, and coupling between geomagnetic and interplanetary (solar wind) magnetic fields. Atmospheric electrical conductivity in the 10 km region was shown to be the crucial parameter altered by these factors. The theory reconciles several large scale studies of lightning incidence previously misinterpreted or considered contradictory. The model predicts additional strong effects on variations in lightning incidence, but only small effects on the morphology and rate of thunderstorm development.

  20. Observation of the ankle and evidence for a high-energy break in the cosmic ray spectrum

    NASA Astrophysics Data System (ADS)

    Abbasi, R.; Abuzayyad, T.; Amman, J.; Archbold, G.; Atkins, R.; Bellido, J.; Belov, K.; Belz, J.; Benzvi, S.; Bergman, D.

    2005-07-01

    We have measured the cosmic ray spectrum at energies above 10^17 eV using the two air fluorescence detectors of the High Resolution Fly's Eye experiment operating in monocular mode. We describe the detector, PMT and atmospheric calibrations, and the analysis techniques for the two detectors. We fit the spectrum to models describing galactic and extragalactic sources. Our measured spectrum gives an observation of a feature known as the "ankle" near 3×10^18 eV, and strong evidence for a suppression near 6×10^19 eV.

  1. Empirical likelihood-based confidence intervals for mean medical cost with censored data.

    PubMed

    Jeyarajah, Jenny; Qin, Gengsheng

    2017-11-10

    In this paper, we propose empirical likelihood methods based on influence function and jackknife techniques for constructing confidence intervals for mean medical cost with censored data. We conduct a simulation study to compare the coverage probabilities and interval lengths of our proposed confidence intervals with those of the existing normal approximation-based confidence intervals and bootstrap confidence intervals. The proposed methods have better finite-sample performance than existing methods. Finally, we illustrate our proposed methods with a relevant example. Copyright © 2017 John Wiley & Sons, Ltd.
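
    The paper's influence-function and jackknife constructions are not reproduced here, but a minimal leave-one-out sketch around a Kaplan-Meier restricted mean conveys the mechanics. This is a plain normal-approximation jackknife interval, not an empirical likelihood one; the data and the lifelines-based estimator are illustrative assumptions:

      import numpy as np
      from lifelines import KaplanMeierFitter
      from lifelines.utils import restricted_mean_survival_time

      T = np.array([5., 8., 12., 20., 23., 30., 41., 50.])  # follow-up times
      E = np.array([1, 1, 0, 1, 0, 1, 1, 0])                # 1 = event, 0 = censored
      tau = 40.0                                            # restriction point

      def rmst(T, E):
          kmf = KaplanMeierFitter().fit(T, event_observed=E)
          return restricted_mean_survival_time(kmf, t=tau)

      theta = rmst(T, E)
      # Leave-one-out estimates for the jackknife standard error.
      loo = np.array([rmst(np.delete(T, i), np.delete(E, i)) for i in range(len(T))])
      n = len(T)
      se = np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))
      print(theta, theta - 1.96 * se, theta + 1.96 * se)  # normal-approximation CI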

  2. Field-Scale Soil Moisture Observations in Irrigated Agriculture Fields Using the Cosmic-ray Neutron Rover

    NASA Astrophysics Data System (ADS)

    Franz, T. E.; Avery, W. A.; Finkenbiner, C. E.; Wang, T.; Brocca, L.

    2014-12-01

    Approximately 40% of global food production comes from irrigated agriculture. With the increasing demand for food, even greater pressures will be placed on water resources within these systems. In this work we aimed to characterize the spatial and temporal patterns of soil moisture at the field scale (~500 m) using the newly developed cosmic-ray neutron rover near Waco, NE. Here we mapped soil moisture of 144 quarter-section fields (a mix of maize, soybean, and natural areas) each week during the 2014 growing season (May to September). The 11 × 11 km study domain also contained 3 stationary cosmic-ray neutron probes for independent validation of the rover surveys. Basic statistical analysis of the domain indicated a strong inverted parabolic relationship between the mean and variance of soil moisture. The relationships between the mean and higher-order moments were not as strong. Geostatistical analysis indicated the range of the soil moisture semi-variogram was significantly shorter during periods of heavy irrigation as compared to non-irrigated periods. Scaling analysis indicated strong power law behavior between the variance of soil moisture and averaging area, with minimal dependence of the slope of the power law function on mean soil moisture. Statistical relationships derived from the rover dataset offer a novel set of observations that will be useful in: 1) calibrating and validating land surface models, 2) calibrating and validating crop models, 3) estimating soil moisture covariance for statistical downscaling of remote sensing products such as SMOS and SMAP, and 4) providing center-pivot scale mean soil moisture data for optimal irrigation timing and volume amounts.

  3. Computing the electric field from extensive air showers using a realistic description of the atmosphere

    NASA Astrophysics Data System (ADS)

    Gaté, F.; Revenu, B.; García-Fernández, D.; Marin, V.; Dallier, R.; Escudié, A.; Martin, L.

    2018-03-01

    The composition of ultra-high energy cosmic rays is still poorly known and constitutes a very important topic in the field of high-energy astrophysics. Detection of ultra-high energy cosmic rays is carried out via the extensive air showers they create after interacting with the atmosphere's constituents. The secondary electrons and positrons within the showers emit a detectable electric field in the kHz-GHz range. It is possible to use this radio signal to estimate the atmospheric depth of maximum shower development, Xmax, with good accuracy and a duty cycle close to 100%. This value of Xmax is strongly correlated with the nature of the primary cosmic ray that initiated the shower. We show in this paper the importance of using a realistic atmospheric model in order to correct for systematic errors that can prevent a correct and unbiased estimation of Xmax.

  4. Inverse Flux versus Pressure of Muons from Cosmic Rays

    NASA Astrophysics Data System (ADS)

    Buitrago, D.; Armendariz, R.

    2017-12-01

    When an incoming cosmic ray proton or atom collides with particles in Earth's atmosphere, a shower of secondary muons is created. Cosmic ray muon flux was measured at Queensborough Community College using a QuarkNet detector consisting of three stacked scintillator muon counters and a three-fold coincidence trigger. Data were recorded over a three-day period during a severe weather storm, March 13-17, 2017. A computer program was created in Python to read the muon flux rate and atmospheric pressure sensor readings from the detector's data acquisition board. The program converts the data from hexadecimal to decimal, re-bins the data in a more suitable format, and creates overlaid plots of muon flux and atmospheric pressure. Results thus far show a strong correlation between muon flux and atmospheric pressure. More data analysis will be done to verify this conclusion.
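
    As a rough sketch of the processing chain described above, assuming a hypothetical two-column text log of hexadecimal count and pressure words (the actual QuarkNet DAQ record format is richer and differs):

      import numpy as np
      import matplotlib.pyplot as plt

      # Hypothetical log: one hex muon-count word and one hex pressure word per interval.
      lines = ["00A1 03E8", "009B 03F1", "00A7 03DD", "0098 03F5"]

      counts = np.array([int(ln.split()[0], 16) for ln in lines], dtype=float)
      pressure = np.array([int(ln.split()[1], 16) for ln in lines], dtype=float)

      # Overlay the two series on twin y-axes, as the abstract describes.
      fig, ax1 = plt.subplots()
      ax1.plot(counts, color="tab:blue", label="muon flux")
      ax2 = ax1.twinx()
      ax2.plot(pressure, color="tab:red", label="pressure")
      fig.savefig("flux_vs_pressure.png")

      # The barometric effect shows up as a (typically negative) correlation.
      print(np.corrcoef(counts, pressure)[0, 1])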

  5. Search of strangelets and “forward” physics on the collider

    NASA Astrophysics Data System (ADS)

    Kurepin, A. B.

    2016-01-01

    A new stage of collider experiments at the maximum energy of protons and nuclei at the LHC may lead to the discovery of new phenomena, as well as confirm effects previously observed only at very high energies in cosmic rays. A specific part of the experimental program is so-called "forward" physics, i.e. the study of low-angle processes. Among the most interesting phenomena is the detection in cosmic rays of events called Centauros, which could be explained as strangelet production. Centauro events have small multiplicity and a strong suppression of the electromagnetic component. Since the beam energies at the collider and the kinematic parameters of the forward detectors CASTOR (CMS), TOTEM, LHCf and the ADA and ADC (ALICE) are close to the parameters and energies of the anomalous events in cosmic rays, it is possible to reproduce and investigate these events in detail in the laboratory.

  6. COSMIC-RAY-MEDIATED FORMATION OF BENZENE ON THE SURFACE OF SATURN'S MOON TITAN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou Li; Zheng Weijun; Kaiser, Ralf I.

    2010-08-01

    The aromatic benzene molecule (C6H6), a central building block of polycyclic aromatic hydrocarbon molecules, is of crucial importance for the understanding of the organic chemistry of Saturn's largest moon, Titan. Here, we show via laboratory experiments and electronic structure calculations that the benzene molecule can be formed on Titan's surface in situ via non-equilibrium chemistry by cosmic-ray processing of low-temperature acetylene (C2H2) ices. The actual yield of benzene depends strongly on the surface coverage. We suggest that the cosmic-ray-mediated chemistry on Titan's surface could be the dominant source of benzene, i.e., a factor of at least two orders of magnitude higher compared to previously modeled precipitation rates, in those regions of the surface which have a high surface coverage of acetylene.

  7. Measurement of Cosmic-Ray TeV Electrons

    NASA Astrophysics Data System (ADS)

    Schubnell, Michael; Anderson, T.; Bower, C.; Coutu, S.; Gennaro, J.; Geske, M.; Mueller, D.; Musser, J.; Nutter, S.; Park, N.; Tarle, G.; Wakely, S.

    2011-09-01

    The Cosmic Ray Electron Synchrotron Telescope (CREST) high-altitude balloon experiment is a pathfinding effort to detect for the first time multi-TeV cosmic-ray electrons. At these energies, distant sources will not contribute to the local electron spectrum because of the strong energy losses of the electrons, so TeV observations will reflect the distribution and abundance of nearby acceleration sites. CREST will detect electrons indirectly by measuring the characteristic synchrotron photons generated in the Earth's magnetic field. The instrument consists of an array of 1024 BaF2 crystals viewed by photomultiplier tubes and surrounded by a hermetic scintillator shield. Since the primary electron itself need not traverse the payload, an effective detection area is achieved that is several times the nominal 6.4 m2 of the instrument. CREST is scheduled to fly in a long duration circumpolar orbit over Antarctica during the 2011-12 season.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anthonisen, Madeleine; Brandenberger, Robert; Laguë, Alex

    Cosmic string loops contain cusps which decay by emitting bursts of particles. A significant fraction of the released energy is in the form of photons. These photons are injected non-thermally and can hence cause spectral distortions of the Cosmic Microwave Background (CMB). Under the assumption that cusps are robust against gravitational back-reaction, we compute the fractional energy density released as photons in the redshift interval where such non-thermal photon injection causes CMB spectral distortions. Whereas current constraints on such spectral distortions are not strong enough to constrain the string tension, future missions such as the PIXIE experiment will be able to provide limits which rule out a range of string tensions between Gμ ∼ 10^-15 and Gμ ∼ 10^-12, thus ruling out particle physics models yielding such intermediate-scale cosmic strings.

  9. HAWC Observations Strongly Favor Pulsar Interpretations of the Cosmic-Ray Positron Excess

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hooper, Dan; Cholis, Ilias; Linden, Tim

    Recent measurements of the Geminga and B0656+14 pulsars by the gamma-ray telescope HAWC (along with earlier measurements by Milagro) indicate that these objects generate significant fluxes of very high-energy electrons. In this paper, we use the very high-energy gamma-ray intensity and spectrum of these pulsars to calculate and constrain their expected contributions to the local cosmic-ray positron spectrum. Among models that are capable of reproducing the observed characteristics of the gamma-ray emission, we find that pulsars invariably produce a flux of high-energy positrons that is similar in spectrum and magnitude to the positron fraction measured by PAMELA and AMS-02. In light of this result, we conclude that it is very likely that pulsars provide the dominant contribution to the long-perplexing cosmic-ray positron excess.

  10. Non-censored rib fracture data during frontal PMHS sled tests.

    PubMed

    Kemper, Andrew R; Beeman, Stephanie M; Porta, David J; Duma, Stefan M

    2016-09-01

    The purpose of this study was to obtain non-censored rib fracture data due to three-point belt loading during dynamic frontal post-mortem human surrogate (PMHS) sled tests. The PMHS responses were then compared to matched tests performed using the Hybrid-III 50th percentile male ATD. Matched dynamic frontal sled tests were performed on two male PMHSs, which were approximately 50th percentile height and weight, and the Hybrid-III 50th percentile male ATD. The sled pulse was designed to match the vehicle acceleration of a standard sedan during a FMVSS-208 40 kph test. Each subject was restrained with a 4 kN load-limiting, driver-side, three-point seatbelt. A 59-channel chestband, aligned at the nipple line, was used to quantify the chest contour, anterior-posterior sternum deflection, and maximum anterior-posterior chest deflection for all test subjects. The internal sternum deflection of the ATD was quantified with the sternum potentiometer. For the PMHS tests, a total of 23 single-axis strain gages were attached to the bony structures of the thorax, including the ribs, sternum, and clavicle. In order to create a non-censored data set, the time history of each strain gage was analyzed to determine the timing of each rib fracture and the corresponding timing of each AIS level (AIS = 1, 2, 3, etc.) with respect to chest deflection. Peak sternum deflections for PMHS 1 and PMHS 2 were 48.7 mm (19.0%) and 36.7 mm (12.2%), respectively. The peak sternum deflection for the ATD was 20.8 mm when measured by the chest potentiometer and 34.4 mm (12.0%) when measured by the chestband. Although the measured ATD sternum deflections were found to be well below the current thoracic injury criterion (63 mm) specified for the ATD in FMVSS-208, both PMHSs sustained AIS 3+ thoracic injuries. For all subjects, the maximum chest deflection measured by the chestband occurred to the right of the sternum and was found to be 83.0 mm (36.0%) for PMHS 1, 60.6 mm (23.9%) for PMHS 2, and 56.3 mm (20.0%) for the ATD. The non-censored rib fracture data in the current study (n = 2 PMHS), in conjunction with the non-censored rib fracture data from two previous table-top studies (n = 4 PMHS), show that AIS 3+ injury timing occurs prior to peak sternum compression, prior to peak maximum chest compression, and at lower compressions than might be suggested by current PMHS thoracic injury criteria developed using censored rib fracture data. In addition, the maximum chest deflection results showed a more reasonable correlation between deflection, rib fracture timing, and injury severity than sternum deflection. Overall, these data provide compelling empirical evidence that a more conservative thoracic injury criterion could potentially be developed based on non-censored rib fracture data, with additional testing performed over a wider range of subjects and loading conditions.

  11. Using the U.S. Geological Survey National Water Quality Laboratory LT-MDL to Evaluate and Analyze Data

    USGS Publications Warehouse

    Bonn, Bernadine A.

    2008-01-01

    A long-term method detection level (LT-MDL) and laboratory reporting level (LRL) are used by the U.S. Geological Survey's National Water Quality Laboratory (NWQL) when reporting results from most chemical analyses of water samples. Changing to this method provided data users with additional information about their data and often resulted in more reported values in the low concentration range. Before this method was implemented, many of these values would have been censored. The use of the LT-MDL and LRL presents some challenges for the data user. Interpreting data in the low concentration range increases the need for adequate quality assurance, because even small contamination or recovery problems can be relatively large compared to concentrations near the LT-MDL and LRL. In addition, the definition of the LT-MDL, as well as the inclusion of low values, can result in complex data sets with multiple censoring levels and reported values that are less than a censoring level. Improper interpretation or statistical manipulation of low-range results in these data sets can result in bias and incorrect conclusions. This document is designed to help data users use and interpret data reported with the LT-MDL/LRL method. The calculation and application of the LT-MDL and LRL are described. This document shows how to extract statistical information from the LT-MDL and LRL and how to use that information in USGS investigations, such as assessing the quality of field data, interpreting field data, and planning data collection for new projects. A set of 19 detailed examples is included in this document to help data users think about their data and properly interpret low-range data without introducing bias. Although this document is not meant to be a comprehensive resource of statistical methods, several useful methods of analyzing censored data are demonstrated, including Regression on Order Statistics and Kaplan-Meier Estimation. These two statistical methods handle complex censored data sets without resorting to substitution, thereby avoiding a common source of bias and inaccuracy.
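
    One concrete illustration of the substitution-free methods mentioned above: Kaplan-Meier estimation can handle left-censored (nondetect) concentrations by flipping the data about a constant so nondetects become right-censored, fitting, and flipping back. A minimal sketch with made-up data, using the lifelines package rather than the tools described in the report:

      import numpy as np
      from lifelines import KaplanMeierFitter

      conc = np.array([0.5, 0.5, 0.7, 1.2, 1.9, 2.4, 3.1])  # reported concentrations
      detected = np.array([0, 0, 1, 1, 1, 1, 1])            # 0 = nondetect ("<0.5")

      M = conc.max() + 1.0    # flipping constant
      flipped = M - conc      # left-censored values become right-censored

      kmf = KaplanMeierFitter().fit(flipped, event_observed=detected)

      # Mean of the flipped variable = area under the K-M survival curve;
      # integrate the step function, then flip back.
      sf = kmf.survival_function_["KM_estimate"]
      t = sf.index.values
      mean_flipped = np.sum(sf.values[:-1] * np.diff(t))
      print("K-M (flipped-data) estimate of the mean:", M - mean_flipped)

    As with any Kaplan-Meier mean, the integral is restricted to the largest observation, so the estimate is slightly conservative when the largest flipped value is censored.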

  12. Revealing W51C as a Cosmic-Ray source using Fermi-LAT data

    DOE PAGES

    Jogler, T.; Funk, S.

    2016-01-10

    Here, supernova remnants (SNRs) are commonly believed to be the primary sources of Galactic cosmic rays. Despite intensive study of the non-thermal emission of many SNRs, the identification of the accelerated particle type relies heavily on assumptions about ambient-medium parameters that are only loosely constrained. Compelling evidence of hadronic acceleration can be provided by detecting a strong roll-off in the secondary γ-ray spectrum below the π0 production threshold energy of about 135 MeV, the so-called "pion bump." Here we use five years of Fermi Large Area Telescope data to study the spectrum above 60 MeV of the middle-aged SNR W51C. A clear break in the power-law γ-ray spectrum at E_break = 290 ± 20 MeV is detected with 9σ significance, and we show that this break is most likely associated with the production threshold energy of π0 mesons. A high-energy break in the γ-ray spectrum at about 2.7 GeV is found with 7.5σ significance. The spectral index at energies beyond this second break is Γ2 = 2.52 (-0.07/+0.06) and closely matches the spectral index derived by the MAGIC Collaboration above 75 GeV. Therefore our analysis provides strong evidence that the γ-ray spectrum of W51C is explained by a single particle population of protons with a momentum spectrum best described by a broken power law with break momentum p_break ∼ 80 GeV/c. W51C is the third middle-aged SNR that displays compelling evidence for cosmic-ray acceleration and thus strengthens the case for SNRs as the main source of Galactic cosmic rays.

  13. Spatiotemporal characterization of soil moisture fields in agricultural areas using cosmic-ray neutron probes and data fusion

    NASA Astrophysics Data System (ADS)

    Franz, Trenton; Wang, Tiejun

    2015-04-01

    Approximately 40% of global food production comes from irrigated agriculture. With the increasing demand for food, even greater pressures will be placed on water resources within these systems. In this work we aimed to characterize the spatial and temporal patterns of soil moisture at the field scale (~500 m) using the newly developed cosmic-ray neutron rover near Waco, NE, USA. Here we mapped soil moisture of 144 quarter-section fields (a mix of maize, soybean, and natural areas) each week during the 2014 growing season (May to September). The 12 by 12 km study domain also contained three stationary cosmic-ray neutron probes for independent validation of the rover surveys. Basic statistical analysis of the domain indicated a strong relationship between the mean and variance of soil moisture at several averaging scales. The relationships between the mean and higher-order moments were not significant. Scaling analysis indicated strong power law behavior between the variance of soil moisture and averaging area, with minimal dependence of the slope of the power law function on mean soil moisture. In addition, we combined the data from the three stationary cosmic-ray neutron probes and the mobile surveys using linear regression to derive a daily soil moisture product at 1, 3, and 12 km spatial resolutions for the entire growing season. The statistical relationships derived from the rover dataset offer a novel set of observations that will be useful in: 1) calibrating and validating land surface models, 2) calibrating and validating crop models, 3) estimating soil moisture covariance for statistical downscaling of remote sensing products such as SMOS and SMAP, and 4) providing daily center-pivot scale mean soil moisture data for optimal irrigation timing and volume amounts.
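
    The power-law variance-area scaling reported above reduces to a straight-line fit in log-log space; a minimal sketch, with synthetic numbers standing in for the rover aggregates:

      import numpy as np

      # Synthetic stand-ins: averaging area (km^2) vs soil moisture variance.
      area = np.array([0.25, 1.0, 4.0, 16.0, 144.0])
      var = np.array([1.8e-3, 1.1e-3, 6.5e-4, 3.9e-4, 1.2e-4])

      # Power law var = c * area**b is linear in log-log space.
      b, logc = np.polyfit(np.log(area), np.log(var), 1)
      print("scaling exponent b =", round(b, 3), "prefactor c =", float(np.exp(logc)))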

  14. A TEST OF COSMOLOGICAL MODELS USING HIGH-z MEASUREMENTS OF H(z)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Melia, Fulvio; McClintock, Thomas M., E-mail: fmelia@email.arizona.edu, E-mail: tmcclintock89@gmail.com

    2015-10-15

    The recently constructed Hubble diagram using a combined sample of SNLS and SDSS-II SNe Ia, and an application of the Alcock–Paczyński (AP) test using model-independent Baryon Acoustic Oscillation (BAO) data, have suggested that the principal constraint underlying the cosmic expansion is the total equation-of-state of the cosmic fluid, rather than that of its dark energy. These studies have focused on the critical redshift range (0 ≲ z ≲ 2) within which the transition from decelerated to accelerated expansion is thought to have occurred, and they suggest that the cosmic fluid has zero active mass, consistent with a constant expansion rate. The evident impact of this conclusion on cosmological theory calls for an independent confirmation. In this paper, we carry out this crucial one-on-one comparison between the R_h = ct universe (a Friedmann–Robertson–Walker cosmology with zero active mass) and wCDM/ΛCDM, using the latest high-z measurements of H(z). Whereas the SNe Ia yield the integrated luminosity distance, while the AP diagnostic tests the geometry of the universe, the Hubble parameter directly samples the expansion rate itself. We find that the model-independent cosmic chronometer data prefer R_h = ct over wCDM/ΛCDM with a Bayes Information Criterion likelihood of ∼95% versus only ∼5%, in strong support of the earlier SNe Ia and AP results. This contrasts with a recent analysis of H(z) data based solely on BAO measurements which, however, strongly depend on the assumed cosmology. We discuss why the latter approach is inappropriate for model comparisons, and emphasize again the need for truly model-independent observations to be used in cosmological tests.
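
    For reference, the quoted ∼95% versus ∼5% BIC likelihoods follow from the usual conversion of BIC differences into relative model weights; a sketch with placeholder numbers (not the paper's actual BIC values):

      import numpy as np

      # Hypothetical BIC values for two models: [R_h = ct, wCDM]; a BIC
      # difference of about 5.9 reproduces a ~95% vs ~5% split.
      bic = np.array([10.0, 15.9])
      w = np.exp(-0.5 * (bic - bic.min()))   # relative model likelihoods
      print(w / w.sum())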

  15. Cure rate model with interval censored data.

    PubMed

    Kim, Yang-Jin; Jhun, Myoungshic

    2008-01-15

    In cancer trials, a significant fraction of patients can be cured, that is, the disease is completely eliminated, so that it never recurs. In general, treatments are developed both to increase the patients' chances of being cured and to prolong the survival time among non-cured patients. A cure rate model represents a combination of a cure fraction and a survival model, and can be applied to many clinical studies over several types of cancer. In this article, the cure rate model is considered for interval censored data, in which the event time of interest is only known to lie between two observation time points. Interval censored data commonly occur in studies of diseases that often progress without symptoms, requiring clinical evaluation for detection (Encyclopedia of Biostatistics. Wiley: New York, 1998; 2090-2095). In our study, an approximate likelihood approach suggested by Goetghebeur and Ryan (Biometrics 2000; 56:1139-1144) is used to derive the likelihood for interval censored data. In addition, a frailty model is introduced to characterize the association between the cure fraction and the survival model. In particular, the positive association between the cure fraction and the survival time is incorporated by imposing a common normal frailty effect. The EM algorithm is used to estimate parameters, and a multiple imputation based on the profile likelihood is adopted for variance estimation. The approach is applied to a smoking cessation study in which the event of interest is a smoking relapse and several covariates, including an intensive care treatment, are evaluated to be effective for both the occurrence of relapse and the non-smoking duration. Copyright (c) 2007 John Wiley & Sons, Ltd.

  16. Mediation Analysis with Survival Outcomes: Accelerated Failure Time vs. Proportional Hazards Models

    PubMed Central

    Gelfand, Lois A.; MacKinnon, David P.; DeRubeis, Robert J.; Baraldi, Amanda N.

    2016-01-01

    Objective: Survival time is an important type of outcome variable in treatment research. Currently, limited guidance is available regarding performing mediation analyses with survival outcomes, which generally do not have normally distributed errors and contain unobserved (censored) events. We present considerations for choosing an approach, using a comparison of semi-parametric proportional hazards (PH) and fully parametric accelerated failure time (AFT) approaches for illustration. Method: We compare PH and AFT models and procedures in their integration into mediation models and review their ability to produce coefficients that estimate causal effects. Using simulation studies modeling Weibull-distributed survival times, we compare statistical properties of mediation analyses incorporating PH and AFT approaches (employing SAS procedures PHREG and LIFEREG, respectively) under varied data conditions, some including censoring. A simulated data set illustrates the findings. Results: AFT models integrate more easily than PH models into mediation models. Furthermore, mediation analyses incorporating LIFEREG produce coefficients that can estimate causal effects and demonstrate superior statistical properties. Censoring introduces bias in the coefficient estimate representing the treatment effect on outcome: underestimation in LIFEREG and overestimation in PHREG. With LIFEREG, this bias can be addressed using an alternative estimate obtained by combining other coefficients, whereas this is not possible with PHREG. Conclusions: When Weibull assumptions are not violated, there are compelling advantages to using LIFEREG over PHREG for mediation analyses involving survival-time outcomes. Irrespective of the procedures used, the interpretation of coefficients, effects of censoring on coefficient estimates, and statistical properties should be taken into account when reporting results. PMID:27065906
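
    Outside of SAS, the same PH-versus-AFT contrast can be sketched with the lifelines package, whose CoxPHFitter and WeibullAFTFitter play roles loosely analogous to PHREG and LIFEREG; the data and column names below are invented for illustration:

      import pandas as pd
      from lifelines import CoxPHFitter, WeibullAFTFitter

      df = pd.DataFrame({
          "T": [5., 8., 12., 20., 23., 30., 41., 50., 11., 27.],
          "E": [1, 1, 0, 1, 0, 1, 1, 0, 1, 1],        # 1 = event observed
          "treatment": [0, 1, 0, 1, 0, 1, 0, 1, 1, 0],
          "mediator": [0.2, 1.1, 0.4, 1.5, 0.3, 1.8, 0.5, 2.0, 1.2, 0.6],
      })

      cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")       # PH, cf. PHREG
      aft = WeibullAFTFitter().fit(df, duration_col="T", event_col="E")  # AFT, cf. LIFEREG

      # AFT coefficients act on log survival time, which is what lets
      # mediation path coefficients combine on a common scale; Cox
      # coefficients act on the hazard instead.
      print(cph.params_)
      print(aft.params_)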

  17. Semiparametric regression analysis of failure time data with dependent interval censoring.

    PubMed

    Chen, Chyong-Mei; Shen, Pao-Sheng

    2017-09-20

    Interval-censored failure-time data arise when subjects are examined or observed periodically, such that the failure time of interest is not observed exactly but only known to be bracketed between two adjacent observation times. The commonly used approaches assume that the examination times and the failure time are independent or conditionally independent given covariates. In many practical applications, patients who are already in poor health or have a weak immune system before treatment usually tend to visit physicians more often after treatment than those with better health or immune systems. In this situation, the visiting rate is positively correlated with the risk of failure due to the health status, which results in dependent interval-censored data. While some measurable factors affecting health status, such as age, gender, and physical symptoms, can be included in the covariates, some health-related latent variables cannot be observed or measured. To deal with dependent interval censoring involving an unobserved latent variable, we characterize the visiting/examination process as a recurrent event process and propose a joint frailty model to account for the association between the failure time and the visiting process. A shared gamma frailty is incorporated into the Cox model and the proportional intensity model for the failure time and visiting process, respectively, in a multiplicative way. We propose a semiparametric maximum likelihood approach for estimating model parameters and show the asymptotic properties, including consistency and weak convergence. Extensive simulation studies are conducted and a data set of bladder cancer is analyzed for illustrative purposes. Copyright © 2017 John Wiley & Sons, Ltd.

  18. Analysis of survival in breast cancer patients by using different parametric models

    NASA Astrophysics Data System (ADS)

    Enera Amran, Syahila; Asrul Afendi Abdullah, M.; Kek, Sie Long; Afiqah Muhamad Jamil, Siti

    2017-09-01

    In biomedical applications and clinical trials, right censoring often arises when studying time-to-event data: some individuals are still alive at the end of the study or are lost to follow-up at a certain time. It is important to handle censored data properly in order to prevent biased information in the analysis. Therefore, this study was carried out to analyze right-censored data with three different parametric models: the exponential model, the Weibull model and the log-logistic model. Data on breast cancer patients from Hospital Sultan Ismail, Johor Bahru, from 30 December 2008 until 15 February 2017 were used in this study to illustrate right-censored data. The variables included in this study are the survival time t of each breast cancer patient, the patient's age X1 and the treatment given to the patient X2. In order to determine the best parametric model for analysing the survival of breast cancer patients, the performance of each model was compared based on the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC) and the log-likelihood value, using the statistical software R. When analysing the breast cancer data, all three distributions were consistent with the data, with the line graph of the cumulative hazard function resembling a straight line through the origin. As a result, the log-logistic model was the best-fitting parametric model compared with the exponential and Weibull models, since it had the smallest AIC and BIC values and the largest log-likelihood.
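
    The AIC/BIC model comparison described above (done in R in the study) can be outlined with the lifelines parametric fitters; the data here are invented, and the information criteria are computed by hand from the fitted log-likelihoods:

      import numpy as np
      from lifelines import ExponentialFitter, WeibullFitter, LogLogisticFitter

      T = np.array([3., 6., 6., 9., 12., 18., 24., 30., 36., 48.])  # illustrative times
      E = np.array([1, 1, 0, 1, 1, 0, 1, 0, 1, 0])                  # 0 = right-censored
      n = len(T)

      for fitter, k in [(ExponentialFitter(), 1),
                        (WeibullFitter(), 2),
                        (LogLogisticFitter(), 2)]:
          fitter.fit(T, event_observed=E)
          ll = fitter.log_likelihood_
          aic = 2 * k - 2 * ll          # smaller is better
          bic = k * np.log(n) - 2 * ll  # smaller is better
          print(type(fitter).__name__, round(aic, 2), round(bic, 2), round(ll, 2))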

  19. Advances and Limitations of Atmospheric Boundary Layer Observations with GPS Occultation over Southeast Pacific Ocean

    NASA Technical Reports Server (NTRS)

    Xie, F.; Wu, D. L.; Ao, C. O.; Mannucci, A. J.; Kursinski, E. R.

    2012-01-01

    The typical atmospheric boundary layer (ABL) over the southeast (SE) Pacific Ocean features a strong temperature inversion and a sharp moisture gradient across the ABL top. The strong moisture and temperature gradients result in a sharp refractivity gradient that can be precisely detected by Global Positioning System (GPS) radio occultation (RO) measurements. In this paper, the Constellation Observing System for Meteorology, Ionosphere & Climate (COSMIC) GPS RO soundings, radiosondes, and the high-resolution ECMWF analysis over the SE Pacific are analyzed. COSMIC RO is able to detect the wide range of ABL height variations (1-2 km) observed by the radiosondes. However, the ECMWF analysis systematically underestimates the ABL heights. The sharp refractivity gradient at the ABL top frequently exceeds the critical refraction (e.g., -157 N-units per kilometer) and becomes the so-called ducting condition, which results in a systematic RO refractivity bias (or N-bias) inside the ABL. A simulation study based on radiosonde profiles reveals that the magnitudes of the N-biases depend on vertical resolution. The N-bias is also the primary cause of the systematically smaller refractivity gradient (rarely exceeding -110 N-units per kilometer) at the ABL top from RO measurements. However, the N-bias does not seem to affect the ABL height detection. Instead, the very large RO bending angle and the sharp refractivity gradient due to ducting allow reliable detection of the ABL height from GPS RO. The seasonal mean climatology of ABL heights derived from a nine-month composite of COSMIC RO soundings over the SE Pacific reveals significant differences from the ECMWF analysis. Both show an increase of ABL height from the shallow stratocumulus near the coast to a much higher trade wind inversion farther off the coast. However, COSMIC RO shows an overall deeper ABL and reveals different locations of the minimum and maximum ABL heights as compared to the ECMWF analysis. At low latitudes, despite the decreasing number of COSMIC RO soundings and the lower percentage of soundings that penetrate into the lowest 500 m above mean sea level, the sampling errors in the mean ABL height climatology are small. The difference in ABL height climatology between COSMIC RO and the ECMWF analysis over the SE Pacific is significant and requires further study.

  20. Observation of Anisotropy in the Arrival Directions of Galactic Cosmic Rays at Multiple Angular Scales with IceCube

    NASA Astrophysics Data System (ADS)

    Abbasi, R.; Abdou, Y.; Abu-Zayyad, T.; Adams, J.; Aguilar, J. A.; Ahlers, M.; Altmann, D.; Andeen, K.; Auffenberg, J.; Bai, X.; Baker, M.; Barwick, S. W.; Bay, R.; Bazo Alba, J. L.; Beattie, K.; Beatty, J. J.; Bechet, S.; Becker, J. K.; Becker, K.-H.; Benabderrahmane, M. L.; BenZvi, S.; Berdermann, J.; Berghaus, P.; Berley, D.; Bernardini, E.; Bertrand, D.; Besson, D. Z.; Bindig, D.; Bissok, M.; Blaufuss, E.; Blumenthal, J.; Boersma, D. J.; Bohm, C.; Bose, D.; Böser, S.; Botner, O.; Brown, A. M.; Buitink, S.; Caballero-Mora, K. S.; Carson, M.; Chirkin, D.; Christy, B.; Clem, J.; Clevermann, F.; Cohen, S.; Colnard, C.; Cowen, D. F.; D'Agostino, M. V.; Danninger, M.; Daughhetee, J.; Davis, J. C.; De Clercq, C.; Demirörs, L.; Denger, T.; Depaepe, O.; Descamps, F.; Desiati, P.; de Vries-Uiterweerd, G.; DeYoung, T.; Díaz-Vélez, J. C.; Dierckxsens, M.; Dreyer, J.; Dumm, J. P.; Ehrlich, R.; Eisch, J.; Ellsworth, R. W.; Engdegård, O.; Euler, S.; Evenson, P. A.; Fadiran, O.; Fazely, A. R.; Fedynitch, A.; Feintzeig, J.; Feusels, T.; Filimonov, K.; Finley, C.; Fischer-Wasels, T.; Foerster, M. M.; Fox, B. D.; Franckowiak, A.; Franke, R.; Gaisser, T. K.; Gallagher, J.; Gerhardt, L.; Gladstone, L.; Glüsenkamp, T.; Goldschmidt, A.; Goodman, J. A.; Gora, D.; Grant, D.; Griesel, T.; Groß, A.; Grullon, S.; Gurtner, M.; Ha, C.; Hajismail, A.; Hallgren, A.; Halzen, F.; Han, K.; Hanson, K.; Heinen, D.; Helbing, K.; Herquet, P.; Hickford, S.; Hill, G. C.; Hoffman, K. D.; Homeier, A.; Hoshina, K.; Hubert, D.; Huelsnitz, W.; Hülß, J.-P.; Hulth, P. O.; Hultqvist, K.; Hussain, S.; Ishihara, A.; Jacobsen, J.; Japaridze, G. S.; Johansson, H.; Joseph, J. M.; Kampert, K.-H.; Kappes, A.; Karg, T.; Karle, A.; Kenny, P.; Kiryluk, J.; Kislat, F.; Klein, S. R.; Köhne, J.-H.; Kohnen, G.; Kolanoski, H.; Köpke, L.; Kopper, S.; Koskinen, D. J.; Kowalski, M.; Kowarik, T.; Krasberg, M.; Krings, T.; Kroll, G.; Kurahashi, N.; Kuwabara, T.; Labare, M.; Lafebre, S.; Laihem, K.; Landsman, H.; Larson, M. J.; Lauer, R.; Lünemann, J.; Madajczyk, B.; Madsen, J.; Majumdar, P.; Marotta, A.; Maruyama, R.; Mase, K.; Matis, H. S.; Meagher, K.; Merck, M.; Mészáros, P.; Meures, T.; Middell, E.; Milke, N.; Miller, J.; Montaruli, T.; Morse, R.; Movit, S. M.; Nahnhauer, R.; Nam, J. W.; Naumann, U.; Nießen, P.; Nygren, D. R.; Odrowski, S.; Olivas, A.; Olivo, M.; O'Murchadha, A.; Ono, M.; Panknin, S.; Paul, L.; Pérez de los Heros, C.; Petrovic, J.; Piegsa, A.; Pieloth, D.; Porrata, R.; Posselt, J.; Price, C. C.; Price, P. B.; Przybylski, G. T.; Rawlins, K.; Redl, P.; Resconi, E.; Rhode, W.; Ribordy, M.; Rizzo, A.; Rodrigues, J. P.; Roth, P.; Rothmaier, F.; Rott, C.; Ruhe, T.; Rutledge, D.; Ruzybayev, B.; Ryckbosch, D.; Sander, H.-G.; Santander, M.; Sarkar, S.; Schatto, K.; Schmidt, T.; Schönwald, A.; Schukraft, A.; Schultes, A.; Schulz, O.; Schunck, M.; Seckel, D.; Semburg, B.; Seo, S. H.; Sestayo, Y.; Seunarine, S.; Silvestri, A.; Slipak, A.; Spiczak, G. M.; Spiering, C.; Stamatikos, M.; Stanev, T.; Stephens, G.; Stezelberger, T.; Stokstad, R. G.; Stössl, A.; Stoyanov, S.; Strahler, E. A.; Straszheim, T.; Stür, M.; Sullivan, G. W.; Swillens, Q.; Taavola, H.; Taboada, I.; Tamburro, A.; Tepe, A.; Ter-Antonyan, S.; Tilav, S.; Toale, P. A.; Toscano, S.; Tosi, D.; Turčan, D.; van Eijndhoven, N.; Vandenbroucke, J.; Van Overloop, A.; van Santen, J.; Vehring, M.; Voge, M.; Walck, C.; Waldenmaier, T.; Wallraff, M.; Walter, M.; Weaver, Ch.; Wendt, C.; Westerhoff, S.; Whitehorn, N.; Wiebe, K.; Wiebusch, C. H.; Williams, D. 
R.; Wischnewski, R.; Wissing, H.; Wolf, M.; Wood, T. R.; Woschnagg, K.; Xu, C.; Xu, X. W.; Yodh, G.; Yoshida, S.; Zarzhitsky, P.; Zoll, M.; IceCube Collaboration

    2011-10-01

    Between 2009 May and 2010 May, the IceCube neutrino detector at the South Pole recorded 32 billion muons generated in air showers produced by cosmic rays with a median energy of 20 TeV. With a data set of this size, it is possible to probe the southern sky for per-mil anisotropy on all angular scales in the arrival direction distribution of cosmic rays. Applying a power spectrum analysis to the relative intensity map of the cosmic ray flux in the southern hemisphere, we show that the arrival direction distribution is not isotropic, but shows significant structure on several angular scales. In addition to previously reported large-scale structure in the form of a strong dipole and quadrupole, the data show small-scale structure on scales between 15° and 30°. The skymap exhibits several localized regions of significant excess and deficit in cosmic ray intensity. The relative intensity of the smaller-scale structures is about a factor of five weaker than that of the dipole and quadrupole structure. The most significant structure, an excess localized at right ascension α = 122.4° and declination δ = -47.4°, extends over at least 20° in right ascension and has a post-trials significance of 5.3σ. The origin of this anisotropy is still unknown.

  1. Renormalized Two-Fluid Hydrodynamics of Cosmic-Ray--modified Shocks

    NASA Astrophysics Data System (ADS)

    Malkov, M. A.; Voelk, H. J.

    1996-12-01

    A simple two-fluid model of diffusive shock acceleration, introduced by Axford, Leer, & Skadron and Drury & Völk, is revisited. This theory became a chief instrument in the studies of shock modification due to particle acceleration. Unfortunately its most intriguing steady state prediction about a significant enhancement of the shock compression and a corresponding increase of the cosmic-ray production violates assumptions which are critical for the derivation of this theory. In particular, for strong shocks the spectral flattening makes a cutoff-independent definition of pressure and energy density impossible and therefore causes an additional closure problem. Confining ourselves for simplicity to the case of plane shocks, assuming reacceleration of a preexisting cosmic-ray population, we argue that also under these circumstances the kinetic solution has a rather simple form. It can be characterized by only a few parameters, in the simplest case by the slope and the magnitude of the momentum distribution at the upper momentum cutoff. We relate these parameters to standard hydrodynamic quantities like the overall shock compression ratio and the downstream cosmic-ray pressure. The two-fluid theory produced in this way has the traditional form but renormalized closure parameters. By solving the renormalized Rankine-Hugoniot equations, we show that for the efficient stationary solution, most significant for cosmic-ray acceleration, the renormalization is needed in the whole parameter range of astrophysical interest.

  2. DAMPING OF ALFVÉN WAVES BY TURBULENCE AND ITS CONSEQUENCES: FROM COSMIC-RAY STREAMING TO LAUNCHING WINDS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lazarian, A.

    2016-12-20

    This paper considers turbulent damping of Alfvén waves in magnetized plasmas. We identify two cases of damping, one related to damping of the cosmic-ray streaming instability, the other related to damping of Alfvén waves emitted by a macroscopic wave source, e.g., a stellar atmosphere. The physical difference between the two cases is that in the former case the generated waves are emitted with respect to the local direction of the magnetic field, and in the latter, waves are emitted with respect to the mean field. The scaling of damping is different in the two cases. We explore effects of turbulence in the regimes from sub-Alfvénic to super-Alfvénic to obtain analytical expressions for the damping rates and define the ranges of applicability of these expressions. In describing the damping of the streaming instability, we find that for sub-Alfvénic turbulence, the range of cosmic-ray energies influenced by weak turbulence is disproportionately large compared to the range of scales where weak turbulence is present. On the contrary, the range of cosmic-ray energies affected by strong Alfvénic turbulence is rather limited. A number of astrophysical applications of the process, ranging from the launching of stellar and galactic winds to the propagation of cosmic rays in galaxies and clusters of galaxies, are considered. In particular, we discuss how to reconcile the process of turbulent damping with the observed isotropy of the Milky Way cosmic rays.

  3. New fermionic dark matters, extended Standard Model and cosmic rays

    NASA Astrophysics Data System (ADS)

    Hwang, Jae-Kwang

    2017-08-01

    Three generations of leptons and quarks correspond to the lepton charges (LCs) in this work. Then, the leptons have the electric charges (ECs) and LCs. The quarks have the ECs, LCs and color charges (CCs). Three heavy leptons and three heavy quarks are introduced to supply the missing third flavor of EC. Then, three new particles which have the ECs are proposed as the bastons (dark matter candidates) with rest masses of 26.121 eV/c^2, 42.7 GeV/c^2 and 1.9 × 10^15 eV/c^2. These new particles are applied to explain the origins of astrophysical observations like the ultra-high energy cosmic rays and the supernova 1987A anti-neutrino data. It is concluded that the 3.5 keV X-ray peak observed in the cosmic X-ray background spectra originates not from pair annihilation of the dark matter particles but from the X-ray emission of Q1 baryon atoms, which are similar in atomic structure to the hydrogen atom. The presence of the 3.5 keV cosmic X-ray supports the presence of the Q1 quark with an EC of -4/3. New particles can be seen indirectly in astrophysical observations like the cosmic rays and cosmic gamma rays. In this work, the systematic quantized charges of EC, LC and CC for the elementary particles are used to consistently explain the decay and reaction schemes of the elementary particles. Also, the strong, weak and dark matter forces are consistently explained.

  4. A method for analyzing clustered interval-censored data based on Cox's model.

    PubMed

    Kor, Chew-Teng; Cheng, Kuang-Fu; Chen, Yi-Hau

    2013-02-28

    Methods for analyzing interval-censored data are well established. Unfortunately, these methods are inappropriate for studies with correlated data. In this paper, we focus on developing a method for analyzing clustered interval-censored data. Our method is based on Cox's proportional hazards model with a piecewise-constant baseline hazard function. The correlation structure of the data can be modeled by using Clayton's copula or an independence model with proper adjustment in the covariance estimation. We establish estimating equations for the regression parameters and baseline hazards (and a parameter in the copula) simultaneously. Simulation results confirm that the point estimators follow a multivariate normal distribution, and our proposed variance estimates are reliable. In particular, we found that the approach with the independence model worked well even when the true correlation model was derived from Clayton's copula. We applied our method to a family-based cohort study of pandemic H1N1 influenza in Taiwan during 2009-2010. Using the proposed method, we investigate the impact of vaccination and family contacts on the incidence of pH1N1 influenza. Copyright © 2012 John Wiley & Sons, Ltd.

  5. Rank-based estimation in the ℓ1-regularized partly linear model for censored outcomes with application to integrated analyses of clinical predictors and gene expression data.

    PubMed

    Johnson, Brent A

    2009-10-01

    We consider estimation and variable selection in the partial linear model for censored data. The partial linear model for censored data is a direct extension of the accelerated failure time model, the latter of which is a very important alternative model to the proportional hazards model. We extend rank-based lasso-type estimators to a model that may contain nonlinear effects. Variable selection in such partial linear model has direct application to high-dimensional survival analyses that attempt to adjust for clinical predictors. In the microarray setting, previous methods can adjust for other clinical predictors by assuming that clinical and gene expression data enter the model linearly in the same fashion. Here, we select important variables after adjusting for prognostic clinical variables but the clinical effects are assumed nonlinear. Our estimator is based on stratification and can be extended naturally to account for multiple nonlinear effects. We illustrate the utility of our method through simulation studies and application to the Wisconsin prognostic breast cancer data set.

  6. Improvement of Parameter Estimations in Tumor Growth Inhibition Models on Xenografted Animals: Handling Sacrifice Censoring and Error Caused by Experimental Measurement on Larger Tumor Sizes.

    PubMed

    Pierrillas, Philippe B; Tod, Michel; Amiel, Magali; Chenel, Marylore; Henin, Emilie

    2016-09-01

    The purpose of this study was to explore the impact of censoring due to animal sacrifice on parameter estimates and on tumor volume calculated from two diameters in larger tumors during tumor growth experiments in preclinical studies. The type of measurement error that can be expected was also investigated. Different scenarios were challenged using the stochastic simulation and estimation process. One thousand datasets were simulated under the design of a typical tumor growth study in xenografted mice, and then eight approaches were used for parameter estimation with the simulated datasets. The distribution of estimates and simulation-based diagnostics were computed for comparison. The different approaches were robust regarding the choice of residual error and gave equivalent results. However, when the missing data induced by sacrificing the animals were not taken into account, parameter estimates were biased and led to false inferences in terms of compound potency; the threshold concentration for tumor eradication when ignoring censoring was 581 ng/ml, whereas the true value was 240 ng/ml.

  7. Automatic Censoring CFAR Detector Based on Ordered Data Difference for Low-Flying Helicopter Safety

    PubMed Central

    Jiang, Wen; Huang, Yulin; Yang, Jianyu

    2016-01-01

    Being equipped with a millimeter-wave radar allows a low-flying helicopter to sense its surroundings in real time, which significantly increases its safety. However, nonhomogeneous clutter environments, such as a multiple-target situation or a clutter edge environment, can dramatically degrade the radar signal detection performance. In order to improve the radar signal detection performance in nonhomogeneous clutter environments, this paper proposes a new automatic censored cell-averaging CFAR detector. The proposed CFAR detector does not require any prior information about the background environment and uses a hypothesis test on the first-order difference (FOD) of the ordered data to reject unwanted samples in the reference window. After censoring the unwanted ranked cells, the remaining samples are combined to form an estimate of the background power level, thus achieving better radar signal detection performance. The simulation results show that the FOD-CFAR detector provides low-loss CFAR performance in a homogeneous environment and also performs robustly in nonhomogeneous environments. Furthermore, measured results from a low-flying helicopter validate the basic performance of the proposed method. PMID:27399714
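
    A schematic numpy rendering of the ordered-data-difference censoring idea, simplified from the paper's detector (the jump test and the constants below are illustrative assumptions, not the published statistic):

      import numpy as np

      def fod_cfar_level(ref_cells, jump_factor=5.0):
          """Background-level estimate that censors reference cells lying
          above the first abnormally large first-order difference (FOD)
          of the sorted samples."""
          x = np.sort(np.asarray(ref_cells, dtype=float))
          d = np.diff(x)
          cut = len(x)
          for i in range(1, len(d)):
              if d[i] > jump_factor * (d[:i].mean() + 1e-12):
                  cut = i + 1        # censor everything above the jump
                  break
          return x[:cut].mean()      # cell-average of the retained samples

      rng = np.random.default_rng(0)
      window = rng.exponential(1.0, size=16)   # homogeneous clutter samples
      window[[3, 7]] += 25.0                   # two interfering targets
      level = fod_cfar_level(window)
      threshold = 4.0 * level                  # scale factor sets the false-alarm rate
      print(level, threshold)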

  8. Quantification of variability and uncertainty for air toxic emission inventories with censored emission factor data.

    PubMed

    Frey, H Christopher; Zhao, Yuchao

    2004-11-15

    Probabilistic emission inventories were developed for urban air toxic emissions of benzene, formaldehyde, chromium, and arsenic for the example of Houston. Variability and uncertainty in emission factors were quantified for 71-97% of total emissions, depending upon the pollutant and data availability. Parametric distributions for interunit variability were fit using maximum likelihood estimation (MLE), and uncertainty in mean emission factors was estimated using parametric bootstrap simulation. For data sets containing one or more nondetected values, empirical bootstrap simulation was used to randomly sample detection limits for nondetected values and observations for sample values, and parametric distributions for variability were fit using MLE estimators for censored data. The goodness-of-fit for censored data was evaluated by comparing cumulative distributions of bootstrap confidence intervals and empirical data. The emission inventory 95% uncertainty ranges vary from as small as -25% to +42% for chromium to as large as -75% to +224% for arsenic with correlated surrogates. Uncertainty was dominated by only a few source categories. Recommendations are made for future improvements to the analysis.
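
    The censored-data MLE step can be sketched with scipy: for a lognormal emission-factor model, detected values contribute the log-density and nondetects contribute the log-CDF at their detection limits. The data below are invented:

      import numpy as np
      from scipy import stats, optimize

      detects = np.log(np.array([2.3, 4.1, 7.8, 12.0]))  # measured values (log scale)
      limits = np.log(np.array([1.0, 1.0, 2.0]))         # detection limits of nondetects

      def neg_loglik(params):
          mu, log_sigma = params
          sigma = np.exp(log_sigma)                          # keeps sigma positive
          ll = stats.norm.logpdf(detects, mu, sigma).sum()   # detects: density
          ll += stats.norm.logcdf(limits, mu, sigma).sum()   # nondetects: P(X < DL)
          return -ll

      res = optimize.minimize(neg_loglik, x0=[detects.mean(), 0.0], method="Nelder-Mead")
      mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
      print("fitted lognormal mean:", np.exp(mu_hat + sigma_hat**2 / 2))

    Repeating the fit on resampled datasets would give the parametric bootstrap uncertainty on the mean that the abstract describes.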

  9. Statistical analysis of water-quality data containing multiple detection limits: S-language software for regression on order statistics

    USGS Publications Warehouse

    Lee, L.; Helsel, D.

    2005-01-01

    Trace contaminants in water, including metals and organics, often are measured at sufficiently low concentrations to be reported only as values below the instrument detection limit. Interpretation of these "less thans" is complicated when multiple detection limits occur. Statistical methods for multiply censored, or multiple-detection limit, datasets have been developed for medical and industrial statistics, and can be employed to estimate summary statistics or model the distributions of trace-level environmental data. We describe S-language-based software tools that perform robust linear regression on order statistics (ROS). The ROS method has been evaluated as one of the most reliable procedures for developing summary statistics of multiply censored data. It is applicable to any dataset that has 0 to 80% of its values censored. These tools are a part of a software library, or add-on package, for the R environment for statistical computing. This library can be used to generate ROS models and associated summary statistics, plot modeled distributions, and predict exceedance probabilities of water-quality standards. © 2005 Elsevier Ltd. All rights reserved.
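
    The published tools are written in S/R, but the single-detection-limit core of robust ROS fits in a few lines of Python: assign plotting positions, regress the logs of the detected values on their normal quantiles, impute the censored observations from the fitted line, and summarize the combined data. The data below are illustrative, and the multiple-detection-limit bookkeeping that the package handles is omitted.

    # Sketch of robust regression on order statistics (ROS), single detection limit.
    import numpy as np
    from scipy import stats

    conc = np.array([0.5, 0.5, 0.5, 0.7, 0.9, 1.3, 1.8, 2.6, 4.1, 7.5])  # sorted values
    cens = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0], dtype=bool)          # True = "<0.5"

    n = len(conc)
    pp = (np.arange(1, n + 1) - 0.375) / (n + 0.25)   # Blom plotting positions
    q = stats.norm.ppf(pp)                            # corresponding normal quantiles

    slope, intercept, *_ = stats.linregress(q[~cens], np.log(conc[~cens]))
    imputed = np.exp(intercept + slope * q[cens])     # modeled values for the nondetects

    full = np.concatenate([imputed, conc[~cens]])
    print("ROS mean:", full.mean().round(3), "ROS sd:", full.std(ddof=1).round(3))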

  10. Modeling absolute differences in life expectancy with a censored skew-normal regression approach

    PubMed Central

    Clough-Gorr, Kerri; Zwahlen, Marcel

    2015-01-01

    Parameter estimates from commonly used multivariable parametric survival regression models do not directly quantify differences in years of life expectancy. Gaussian linear regression models give results in terms of absolute mean differences, but are not appropriate in modeling life expectancy, because in many situations time to death has a negatively skewed distribution. A regression approach using a skew-normal distribution would be an alternative to parametric survival models in the modeling of life expectancy, because parameter estimates can be interpreted in terms of survival time differences while allowing for skewness of the distribution. In this paper we show how to use the skew-normal regression so that censored and left-truncated observations are accounted for. With this, we model differences in life expectancy using data from the Swiss National Cohort Study and from official life expectancy estimates and compare the results with those derived from commonly used survival regression models. We conclude that a censored skew-normal survival regression approach for left-truncated observations can be used to model differences in life expectancy across covariates of interest. PMID:26339544
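
    A minimal sketch of the likelihood behind such a model, assuming a fixed skew-normal with no covariates (the paper's regression adds covariates to the location parameter): deaths contribute the log-density, right-censored records contribute the log-survival, and left truncation at the entry age divides each contribution by the survival at entry. The simulated ages and censoring scheme are illustrative assumptions.

    # Sketch: ML fit of a skew-normal to right-censored, left-truncated lifetimes.
    import numpy as np
    from scipy import stats, optimize

    rng = np.random.default_rng(3)
    t = stats.skewnorm.rvs(-4, loc=85, scale=12, size=300, random_state=rng)  # ages at death
    c = rng.uniform(60, 110, 300)                   # right-censoring ages (assumed)
    time, died = np.minimum(t, c), t <= c
    entry = np.full_like(time, 50.0)                # entry age; far in the left tail here

    def negloglik(theta):
        a, loc, logscale = theta
        sc = np.exp(logscale)
        ll = stats.skewnorm.logpdf(time[died], a, loc, sc).sum()    # observed deaths
        ll += stats.skewnorm.logsf(time[~died], a, loc, sc).sum()   # censored records
        ll -= stats.skewnorm.logsf(entry, a, loc, sc).sum()         # truncation correction
        return -ll

    fit = optimize.minimize(negloglik, x0=[-1.0, 80.0, np.log(10.0)])
    print("shape, location:", np.round(fit.x[:2], 2), "scale:", round(np.exp(fit.x[2]), 2))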

  11. “Smooth” Semiparametric Regression Analysis for Arbitrarily Censored Time-to-Event Data

    PubMed Central

    Zhang, Min; Davidian, Marie

    2008-01-01

    A general framework for regression analysis of time-to-event data subject to arbitrary patterns of censoring is proposed. The approach is relevant when the analyst is willing to assume that distributions governing model components that are ordinarily left unspecified in popular semiparametric regression models, such as the baseline hazard function in the proportional hazards model, have densities satisfying mild "smoothness" conditions. Densities are approximated by a truncated series expansion that, for fixed degree of truncation, results in a "parametric" representation, which makes likelihood-based inference coupled with adaptive choice of the degree of truncation, and hence flexibility of the model, computationally and conceptually straightforward with data subject to any pattern of censoring. The formulation allows popular models, such as the proportional hazards, proportional odds, and accelerated failure time models, to be placed in a common framework; provides a principled basis for choosing among them; and renders useful extensions of the models straightforward. The utility and performance of the methods are demonstrated via simulations and by application to data from time-to-event studies. PMID:17970813

  12. Parametric Model Based On Imputations Techniques for Partly Interval Censored Data

    NASA Astrophysics Data System (ADS)

    Zyoud, Abdallah; Elfaki, F. A. M.; Hrairi, Meftah

    2017-12-01

    The term ‘survival analysis’ has been used in a broad sense to describe a collection of statistical procedures for data analysis in which the outcome variable of interest is the time until an event occurs, and the time to failure of an experimental unit may be right, left, interval, or partly interval censored (PIC). In this paper, a parametric Cox model is analyzed for PIC data. Several imputation techniques were used: midpoint, left and right point, random, mean, and median. Maximum likelihood estimation was used to obtain the estimated survival function. These estimates were then compared with existing models, namely the Turnbull and Cox models, on clinical trial data (breast cancer data), which demonstrated the validity of the proposed model. Results indicated that the parametric Cox model was superior in terms of the estimated survival functions, likelihood ratio tests, and their P-values. Among the imputation techniques, the midpoint, random, mean, and median approaches gave the better estimates of the survival function.
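
    The imputation step itself is simple enough to sketch: exact failure times are kept, while each interval-censored record (L, R] is replaced by a single point (midpoint, left, right, random, or mean), after which any standard fit can be run on the completed data. The toy data and the closing exponential MLE are illustrative assumptions, not the paper's breast cancer analysis.

    # Sketch of point-imputation schemes for partly interval-censored data.
    import numpy as np

    rng = np.random.default_rng(4)
    exact = np.array([2.1, 3.4, 5.0, 7.7])          # exactly observed failure times
    L = np.array([1.0, 2.0, 4.0, 6.0])              # interval-censored records: (L, R]
    R = np.array([3.0, 5.0, 6.5, 9.0])

    imputations = {
        "midpoint": (L + R) / 2,
        "left":     L,
        "right":    R,
        "random":   rng.uniform(L, R),
        "mean":     np.full_like(L, ((L + R) / 2).mean()),  # one assumed reading of "mean"
    }
    for name, imp in imputations.items():
        t = np.concatenate([exact, imp])
        print(f"{name:8s} exponential MLE rate = {len(t) / t.sum():.3f}")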

  13. Ultrahigh-energy Cosmic Rays from Fanaroff Riley class II radio galaxies

    NASA Astrophysics Data System (ADS)

    Rachen, Joerg; Biermann, Peter L.

    1992-08-01

    The hot spots of very powerful radio galaxies (Fanaroff Riley class II) are argued to be the sources of the ultrahigh-energy component of cosmic rays. We present calculations of cosmic ray transport in an evolving universe, properly taking into account the losses against the microwave background. As input we use the models for cosmological radio source evolution derived by radio astronomers (mainly Peacock 1985). The model we adopt for acceleration in the radio hot spots was introduced by Biermann and Strittmatter (1987) and Meisenheimer et al. (1989) and is based on first-order Fermi theory of particle acceleration at shocks (see, e.g., Drury 1983). The actual proportion of energy density in protons enters as an unknown, which, together with structural uncertainties in the hot spots, should introduce no more than one order of magnitude of uncertainty: we easily reproduce the observed spectra of high-energy cosmic rays. It follows that scattering of charged energetic particles in intergalactic space must be sufficiently small to obtain contributions from sources even as far away as the nearest Fanaroff Riley class II radio galaxies. This implies a strong constraint on the turbulent magnetic field in intergalactic space.

  14. Interpreting the cosmic far-infrared background anisotropies using a gas regulator model

    NASA Astrophysics Data System (ADS)

    Wu, Hao-Yi; Doré, Olivier; Teyssier, Romain; Serra, Paolo

    2018-04-01

    Cosmic far-infrared background (CFIRB) is a powerful probe of the history of the star formation rate (SFR) and the connection between baryons and dark matter across cosmic time. In this work, we explore to what extent the CFIRB anisotropies can be reproduced by a simple physical framework for galaxy evolution, the gas regulator (bathtub) model. This model is based on continuity equations for gas, stars, and metals, taking into account cosmic gas accretion, star formation, and gas ejection. We model the large-scale galaxy bias and small-scale shot noise self-consistently, and we constrain our model using the CFIRB power spectra measured by Planck. Because of the simplicity of the physical model, the goodness of fit is limited. We compare our model predictions with the observed correlation between CFIRB and gravitational lensing, bolometric infrared luminosity functions, and submillimetre source counts. The strong clustering of CFIRB indicates a large galaxy bias, which corresponds to haloes of mass 10^12.5 M⊙ at z = 2, higher than the mass associated with the peak of the star formation efficiency. We also find that the far-infrared luminosities of haloes above 10^12 M⊙ are higher than expected from the SFR observed in ultraviolet and optical surveys.

  15. Observation of Cosmic-Ray Anisotropy with the IceTop Air Shower Array

    NASA Astrophysics Data System (ADS)

    Aartsen, M. G.; Abbasi, R.; Abdou, Y.; Ackermann, M.; Adams, J.; Aguilar, J. A.; Ahlers, M.; Altmann, D.; Andeen, K.; Auffenberg, J.; Bai, X.; Baker, M.; Barwick, S. W.; Baum, V.; Bay, R.; Beattie, K.; Beatty, J. J.; Bechet, S.; Becker Tjus, J.; Becker, K.-H.; Bell, M.; Benabderrahmane, M. L.; BenZvi, S.; Berdermann, J.; Berghaus, P.; Berley, D.; Bernardini, E.; Bertrand, D.; Besson, D. Z.; Bindig, D.; Bissok, M.; Blaufuss, E.; Blumenthal, J.; Boersma, D. J.; Bohaichuk, S.; Bohm, C.; Bose, D.; Böser, S.; Botner, O.; Brayeur, L.; Brown, A. M.; Bruijn, R.; Brunner, J.; Carson, M.; Casey, J.; Casier, M.; Chirkin, D.; Christy, B.; Clark, K.; Clevermann, F.; Cohen, S.; Cowen, D. F.; Cruz Silva, A. H.; Danninger, M.; Daughhetee, J.; Davis, J. C.; De Clercq, C.; De Ridder, S.; Descamps, F.; Desiati, P.; de Vries-Uiterweerd, G.; DeYoung, T.; Díaz-Vélez, J. C.; Dreyer, J.; Dumm, J. P.; Dunkman, M.; Eagan, R.; Eisch, J.; Ellsworth, R. W.; Engdegård, O.; Euler, S.; Evenson, P. A.; Fadiran, O.; Fazely, A. R.; Fedynitch, A.; Feintzeig, J.; Feusels, T.; Filimonov, K.; Finley, C.; Fischer-Wasels, T.; Flis, S.; Franckowiak, A.; Franke, R.; Frantzen, K.; Fuchs, T.; Gaisser, T. K.; Gallagher, J.; Gerhardt, L.; Gladstone, L.; Glüsenkamp, T.; Goldschmidt, A.; Golup, G.; Goodman, J. A.; Góra, D.; Grant, D.; Gross, A.; Grullon, S.; Gurtner, M.; Ha, C.; Haj Ismail, A.; Hallgren, A.; Halzen, F.; Hanson, K.; Heereman, D.; Heimann, P.; Heinen, D.; Helbing, K.; Hellauer, R.; Hickford, S.; Hill, G. C.; Hoffman, K. D.; Hoffmann, R.; Homeier, A.; Hoshina, K.; Huelsnitz, W.; Hulth, P. O.; Hultqvist, K.; Hussain, S.; Ishihara, A.; Jacobi, E.; Jacobsen, J.; Japaridze, G. S.; Jlelati, O.; Kappes, A.; Karg, T.; Karle, A.; Kiryluk, J.; Kislat, F.; Kläs, J.; Klein, S. R.; Köhne, J.-H.; Kohnen, G.; Kolanoski, H.; Köpke, L.; Kopper, C.; Kopper, S.; Koskinen, D. J.; Kowalski, M.; Krasberg, M.; Kroll, G.; Kunnen, J.; Kurahashi, N.; Kuwabara, T.; Labare, M.; Landsman, H.; Larson, M. J.; Lauer, R.; Lesiak-Bzdak, M.; Lünemann, J.; Madsen, J.; Maruyama, R.; Mase, K.; Matis, H. S.; McNally, F.; Meagher, K.; Merck, M.; Mészáros, P.; Meures, T.; Miarecki, S.; Middell, E.; Milke, N.; Miller, J.; Mohrmann, L.; Montaruli, T.; Morse, R.; Nahnhauer, R.; Naumann, U.; Nowicki, S. C.; Nygren, D. R.; Obertacke, A.; Odrowski, S.; Olivas, A.; Olivo, M.; O'Murchadha, A.; Panknin, S.; Paul, L.; Pepper, J. A.; Pérez de los Heros, C.; Pieloth, D.; Pirk, N.; Posselt, J.; Price, P. B.; Przybylski, G. T.; Rädel, L.; Rawlins, K.; Redl, P.; Resconi, E.; Rhode, W.; Ribordy, M.; Richman, M.; Riedel, B.; Rodrigues, J. P.; Rothmaier, F.; Rott, C.; Ruhe, T.; Ruzybayev, B.; Ryckbosch, D.; Saba, S. M.; Salameh, T.; Sander, H.-G.; Santander, M.; Sarkar, S.; Schatto, K.; Scheel, M.; Scheriau, F.; Schmidt, T.; Schmitz, M.; Schoenen, S.; Schöneberg, S.; Schönherr, L.; Schönwald, A.; Schukraft, A.; Schulte, L.; Schulz, O.; Seckel, D.; Seo, S. H.; Sestayo, Y.; Seunarine, S.; Sheremata, C.; Smith, M. W. E.; Soiron, M.; Soldin, D.; Spiczak, G. M.; Spiering, C.; Stamatikos, M.; Stanev, T.; Stasik, A.; Stezelberger, T.; Stokstad, R. G.; Stössl, A.; Strahler, E. A.; Ström, R.; Sullivan, G. W.; Taavola, H.; Taboada, I.; Tamburro, A.; Ter-Antonyan, S.; Tilav, S.; Toale, P. A.; Toscano, S.; Usner, M.; van der Drift, D.; van Eijndhoven, N.; Van Overloop, A.; van Santen, J.; Vehring, M.; Voge, M.; Vraeghe, M.; Walck, C.; Waldenmaier, T.; Wallraff, M.; Walter, M.; Wasserman, R.; Weaver, Ch.; Wendt, C.; Westerhoff, S.; Whitehorn, N.; Wiebe, K.; Wiebusch, C. H.; Williams, D. 
R.; Wissing, H.; Wolf, M.; Wood, T. R.; Woschnagg, K.; Xu, C.; Xu, D. L.; Xu, X. W.; Yanez, J. P.; Yodh, G.; Yoshida, S.; Zarzhitsky, P.; Ziemann, J.; Zierke, S.; Zilles, A.; Zoll, M.; IceCube Collaboration

    2013-03-01

    We report on the observation of anisotropy in the arrival direction distribution of cosmic rays at PeV energies. The analysis is based on data taken between 2009 and 2012 with the IceTop air shower array at the South Pole. IceTop, an integral part of the IceCube detector, is sensitive to cosmic rays between 100 TeV and 1 EeV. With the current size of the IceTop data set, searches for anisotropy at the 10^-3 level can, for the first time, be extended to PeV energies. We divide the data set into two parts with median energies of 400 TeV and 2 PeV, respectively. In the low energy band, we observe a strong deficit with an angular size of about 30° and an amplitude of (-1.58 ± 0.46_stat ± 0.52_sys) × 10^-3 at a location consistent with previous observations of cosmic rays with the IceCube neutrino detector. The study of the high energy band shows that the anisotropy persists to PeV energies and increases in amplitude to (-3.11 ± 0.38_stat ± 0.96_sys) × 10^-3.

  16. Estimation for coefficient of variation of an extension of the exponential distribution under type-II censoring scheme

    NASA Astrophysics Data System (ADS)

    Bakoban, Rana A.

    2017-08-01

    The coefficient of variation (CV) has several applications in applied statistics. In this paper, we adopt Bayesian and non-Bayesian approaches to estimate the CV under type-II censored data from the extension of the exponential distribution (EED). Point and interval estimates of the CV are obtained using both maximum likelihood and parametric bootstrap techniques. A Bayesian approach with the help of an MCMC method is also presented. A real data set is presented and analyzed, and the results are used to assess the theoretical findings.
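
    A sketch of the ML part under type-II censoring, with a Weibull standing in for the paper's extension-exponential distribution (the censored likelihood and the bootstrap loop carry over once the EED density is substituted): the r smallest of n lifetimes contribute the log-density, and the n - r unobserved units contribute the log-survival at the largest observed order statistic. Sample sizes and parameters are illustrative.

    # Sketch: CV estimation from a type-II censored sample via censored MLE + bootstrap.
    import numpy as np
    from scipy import stats, optimize
    from scipy.special import gamma

    rng = np.random.default_rng(5)
    n, r = 40, 28
    x = np.sort(5.0 * rng.weibull(1.7, n))[:r]       # only the r smallest lifetimes seen

    def negloglik(theta, x, n):
        k, logs = theta
        s = np.exp(logs)
        return -(stats.weibull_min.logpdf(x, k, scale=s).sum()
                 + (n - len(x)) * stats.weibull_min.logsf(x[-1], k, scale=s))

    def cv_mle(x, n):
        fit = optimize.minimize(negloglik, x0=[1.0, np.log(x.mean())], args=(x, n))
        k, s = fit.x[0], np.exp(fit.x[1])
        cv = np.sqrt(gamma(1 + 2 / k) / gamma(1 + 1 / k) ** 2 - 1)  # Weibull CV
        return cv, k, s

    cv, k_hat, s_hat = cv_mle(x, n)
    boot = [cv_mle(np.sort(s_hat * rng.weibull(k_hat, n))[:r], n)[0] for _ in range(300)]
    print(f"CV MLE = {cv:.3f}, bootstrap 95% CI = {np.round(np.percentile(boot, [2.5, 97.5]), 3)}")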

  17. Strong Stellar-driven Outflows Shape the Evolution of Galaxies at Cosmic Dawn

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fontanot, Fabio; De Lucia, Gabriella; Hirschmann, Michaela

    We study galaxy mass assembly and the cosmic star formation rate (SFR) at high redshift (z ≳ 4), by comparing data from multiwavelength surveys with predictions from the GAlaxy Evolution and Assembly (gaea) model. gaea implements a stellar feedback scheme partially based on cosmological hydrodynamical simulations, which features strong stellar-driven outflows and mass-dependent timescales for the re-accretion of ejected gas. In previous work, we have shown that this scheme is able to correctly reproduce the evolution of the galaxy stellar mass function (GSMF) up to z ∼ 3. We contrast model predictions with both rest-frame ultraviolet (UV) and optical luminosity functions (LFs), which are mostly sensitive to the SFR and stellar mass, respectively. We show that gaea is able to reproduce the shape and redshift evolution of both sets of LFs. We study the impact of dust on the predicted LFs, and we find that the required level of dust attenuation is in qualitative agreement with recent estimates based on the UV continuum slope. The consistency between data and model predictions holds for the redshift evolution of the physical quantities well beyond the redshift range considered for the calibration of the original model. In particular, we show that gaea is able to recover the evolution of the GSMF up to z ∼ 7 and the cosmic SFR density up to z ∼ 10.

  18. On the correlation between the recent star formation rate in the Solar Neighbourhood and the glaciation period record on Earth

    NASA Astrophysics Data System (ADS)

    de la Fuente Marcos, R.; de la Fuente Marcos, C.

    2004-11-01

    Shaviv [New Astron. 8 (2003) 39; J. Geophys. Res. 108 (2003) 3] has shown evidence for a correlation between variations in the Galactic cosmic ray flux reaching Earth and the glaciation period record on Earth during the last 2 Gyr. If the flux of cosmic rays is mainly the result of Type II supernovae, an additional correlation between the star formation history of the Solar Neighbourhood and the timing of past ice ages is expected. Higher star formation rate implies increased cosmic ray flux and this may translate into colder climate through a rise in the average low altitude cloud cover. Here we reanalyze the correlation between this star formation history and the glaciation period record on Earth using a volume limited open cluster sample. Numerical modeling and recent observational data indicate that the correlation is rather strong but only if open clusters within 1.5 kpc from the Sun are considered. Under this constraint, our statistical analysis not only suggests a strong correlation in the timing of the events (enhanced star formation and glaciation episodes), but also in the severity and length of the episodes. In particular, the snowball Earth scenario appears to be connected with the strongest episode of enhanced star formation recorded in the Solar Neighbourhood during the last 2 Gyr.

  19. Effects of Cutoffs on Galactic Cosmic-Ray Interactions in Solar-System Matter

    NASA Technical Reports Server (NTRS)

    Kim, K. J.; Reedy, R. C.; Masarik, J.

    2005-01-01

    The energetic particles in the galactic cosmic rays (GCR) induce many interactions in a variety of solar-system matter. Cosmogenic nuclides are used to study the histories of meteorites and lunar samples. Gamma rays and neutrons are used to map the compositions of planetary surfaces, such as Mars, the Moon, and asteroids. In almost all of these cases, the spectra of incident GCR particles are fairly similar, with only some modulation by the Sun over an 11-year cycle. Strong magnetic fields can seriously affect the energy spectrum of GCR particles hitting the surface of objects inside the magnetic fields. The Earth's geomagnetic field is strong enough that only GCR particles with magnetic rigidities above approx. 17 GV (a proton energy of approx. 17 GeV) reach the atmosphere over certain regions near the equator. This effect of removing lower-energy GCR particles is called a cutoff. The jovian magnetic fields are so strong that the fluxes of GCR particles hitting the 4 large Galilean satellites are similarly affected. The cutoff at Europa is estimated to be similar to or a little higher than at the Earth's equator.

  20. Bayesian dynamic regression models for interval censored survival data with application to children dental health.

    PubMed

    Wang, Xiaojing; Chen, Ming-Hui; Yan, Jun

    2013-07-01

    Cox models with time-varying coefficients offer great flexibility in capturing the temporal dynamics of covariate effects on event times, which could be hidden from a Cox proportional hazards model. Methodology development for varying coefficient Cox models, however, has been largely limited to right censored data; only limited work on interval censored data has been done. In most existing methods for varying coefficient models, analysts need to specify which covariate coefficients are time-varying and which are not at the time of fitting. We propose a dynamic Cox regression model for interval censored data in a Bayesian framework, where the coefficient curves are piecewise constant but the number of pieces and the jump points are covariate specific and estimated from the data. The model automatically determines the extent to which temporal dynamics is needed for each covariate, resulting in smoother and more stable curve estimates. The posterior computation is carried out via an efficient reversible jump Markov chain Monte Carlo algorithm. Inference for each coefficient is based on an average of models with different numbers of pieces and jump points. A simulation study with three covariates, each with a coefficient of a different degree of temporal dynamics, confirmed that the dynamic model is preferred to the existing time-varying model in terms of model comparison criteria through the conditional predictive ordinate. When applied to dental health data of children aged between 7 and 12 years, the dynamic model reveals that the relative risk of emergence of permanent tooth 24 between children with and without an infected primary predecessor is highest at around age 7.5, and that it gradually reduces to one after age 11. These findings were not seen in the existing studies with Cox proportional hazards models.

  1. Statistical inference methods for two crossing survival curves: a comparison of methods.

    PubMed

    Li, Huimin; Han, Dong; Hou, Yawen; Chen, Huilin; Chen, Zheng

    2015-01-01

    A common problem that is encountered in medical applications is the overall homogeneity of survival distributions when two survival curves cross each other. A survey demonstrated that under this condition, which was an obvious violation of the assumption of proportional hazard rates, the log-rank test was still used in 70% of studies. Several statistical methods have been proposed to solve this problem. However, in many applications, it is difficult to specify the types of survival differences and choose an appropriate method prior to analysis. Thus, we conducted an extensive series of Monte Carlo simulations to investigate the power and type I error rate of these procedures under various patterns of crossing survival curves with different censoring rates and distribution parameters. Our objective was to evaluate the strengths and weaknesses of tests in different situations and for various censoring rates and to recommend an appropriate test that will not fail for a wide range of applications. Simulation studies demonstrated that adaptive Neyman's smooth tests and the two-stage procedure offer higher power and greater stability than other methods when the survival distributions cross at early, middle or late times. Even for proportional hazards, both methods maintain acceptable power compared with the log-rank test. In terms of the type I error rate, Renyi and Cramér-von Mises tests are relatively conservative, whereas the statistics of the Lin-Xu test exhibit apparent inflation as the censoring rate increases. Other tests produce results close to the nominal 0.05 level. In conclusion, adaptive Neyman's smooth tests and the two-stage procedure are found to be the most stable and feasible approaches for a variety of situations and censoring rates. Therefore, they are applicable to a wider spectrum of alternatives compared with other tests.
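
    The power loss the simulations document is easy to reproduce in miniature: under a crossing-hazards alternative, early and late group differences cancel inside the log-rank statistic. The sketch below implements the standard log-rank test directly and estimates its rejection rate when one arm has a high-early/low-late piecewise-exponential hazard; all rates, sample sizes, and the censoring scheme are illustrative assumptions, not the paper's simulation design.

    # Sketch: Monte Carlo power of the log-rank test under crossing survival curves.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)

    def logrank_p(time, event, group):
        o_minus_e, var = 0.0, 0.0
        for t in np.unique(time[event]):               # each distinct event time
            at_risk = time >= t
            dead = event & (time == t)
            n, n1 = at_risk.sum(), (at_risk & group).sum()
            d, d1 = dead.sum(), (dead & group).sum()
            if n > 1:
                o_minus_e += d1 - d * n1 / n           # observed minus expected in group 1
                var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
        return stats.chi2.sf(o_minus_e ** 2 / var, df=1)

    def crossing_arm(m):
        t = rng.exponential(1 / 2.0, m)                # hazard 2.0 before t = 0.5
        late = t > 0.5
        t[late] = 0.5 + rng.exponential(1 / 0.2, late.sum())  # hazard 0.2 afterwards
        return t

    rej, n_rep, m = 0, 500, 50
    for _ in range(n_rep):
        t = np.concatenate([rng.exponential(1.0, m), crossing_arm(m)])
        c = rng.uniform(0, 3, 2 * m)                   # administrative censoring
        time, event = np.minimum(t, c), t <= c
        group = np.repeat([False, True], m)
        rej += logrank_p(time, event, group) < 0.05
    print("log-rank rejection rate under crossing hazards:", rej / n_rep)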

  2. Statistical Inference Methods for Two Crossing Survival Curves: A Comparison of Methods

    PubMed Central

    Li, Huimin; Han, Dong; Hou, Yawen; Chen, Huilin; Chen, Zheng

    2015-01-01

    A common problem that is encountered in medical applications is the overall homogeneity of survival distributions when two survival curves cross each other. A survey demonstrated that under this condition, which was an obvious violation of the assumption of proportional hazard rates, the log-rank test was still used in 70% of studies. Several statistical methods have been proposed to solve this problem. However, in many applications, it is difficult to specify the types of survival differences and choose an appropriate method prior to analysis. Thus, we conducted an extensive series of Monte Carlo simulations to investigate the power and type I error rate of these procedures under various patterns of crossing survival curves with different censoring rates and distribution parameters. Our objective was to evaluate the strengths and weaknesses of tests in different situations and for various censoring rates and to recommend an appropriate test that will not fail for a wide range of applications. Simulation studies demonstrated that adaptive Neyman’s smooth tests and the two-stage procedure offer higher power and greater stability than other methods when the survival distributions cross at early, middle or late times. Even for proportional hazards, both methods maintain acceptable power compared with the log-rank test. In terms of the type I error rate, Renyi and Cramér-von Mises tests are relatively conservative, whereas the statistics of the Lin-Xu test exhibit apparent inflation as the censoring rate increases. Other tests produce results close to the nominal 0.05 level. In conclusion, adaptive Neyman’s smooth tests and the two-stage procedure are found to be the most stable and feasible approaches for a variety of situations and censoring rates. Therefore, they are applicable to a wider spectrum of alternatives compared with other tests. PMID:25615624

  3. Estimating the mean and standard deviation of environmental data with below detection limit observations: Considering highly skewed data and model misspecification.

    PubMed

    Shoari, Niloofar; Dubé, Jean-Sébastien; Chenouri, Shoja'eddin

    2015-11-01

    In environmental studies, concentration measurements frequently fall below detection limits of measuring instruments, resulting in left-censored data. Some studies employ parametric methods such as the maximum likelihood estimator (MLE), robust regression on order statistic (rROS), and gamma regression on order statistic (GROS), while others suggest a non-parametric approach, the Kaplan-Meier method (KM). Using examples of real data from a soil characterization study in Montreal, we highlight the need for additional investigations that aim at unifying the existing literature. A number of studies have examined this issue; however, those considering data skewness and model misspecification are rare. These aspects are investigated in this paper through simulations. Among other findings, results show that for low skewed data, the performance of different statistical methods is comparable, regardless of the censoring percentage and sample size. For highly skewed data, the performance of the MLE method under lognormal and Weibull distributions is questionable; particularly, when the sample size is small or censoring percentage is high. In such conditions, MLE under gamma distribution, rROS, GROS, and KM are less sensitive to skewness. Related to model misspecification, MLE based on lognormal and Weibull distributions provides poor estimates when the true distribution of data is misspecified. However, the methods of rROS, GROS, and MLE under gamma distribution are generally robust to model misspecifications regardless of skewness, sample size, and censoring percentage. Since the characteristics of environmental data (e.g., type of distribution and skewness) are unknown a priori, we suggest using MLE based on gamma distribution, rROS and GROS. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Substituting values for censored data from Texas, USA, reservoirs inflated and obscured trends in analyses commonly used for water quality target development.

    PubMed

    Grantz, Erin; Haggard, Brian; Scott, J Thad

    2018-06-12

    We calculated four median datasets (chlorophyll a, Chl a; total phosphorus, TP; and transparency) using multiple approaches to handling censored observations, including substituting fractions of the quantification limit (QL; dataset 1 = 1QL, dataset 2 = 0.5QL) and statistical methods for censored datasets (datasets 3-4), for approximately 100 Texas, USA reservoirs. Trend analyses of differences between dataset 1 and 3 medians indicated that percent difference increased linearly above thresholds in percent censored data (%Cen). This relationship was extrapolated to estimate medians for site-parameter combinations with %Cen > 80%, which were combined with dataset 3 as dataset 4. Changepoint analysis of Chl a- and transparency-TP relationships indicated threshold differences of up to 50% between datasets. Recursive analysis identified secondary thresholds in dataset 4. Threshold differences show that information introduced via substitution, or missing due to limitations of the statistical methods, biased values, underestimated error, and inflated the strength of TP thresholds identified in datasets 1-3. Analysis of covariance identified differences in the linear regression models relating transparency to TP between datasets 1, 2, and the more statistically robust datasets 3-4. Study findings identify high-risk scenarios for biased analytical outcomes when using substitution. These include a high probability of median overestimation when %Cen > 50-60% for a single QL, or when %Cen is as low as 16% for multiple QLs. Changepoint analysis was uniquely vulnerable to substitution effects when using medians from sites with %Cen > 50%. Linear regression analysis was less sensitive to substitution and missing-data effects, but differences in model parameters for transparency cannot be discounted and could be magnified by log-transformation of the variables.
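
    The core failure mode is easy to demonstrate: with more than half of the values censored at a single quantification limit, substituting 1·QL or 0.5·QL manufactures a sample median at the substituted value, while a censored-data estimator (a single-limit ROS-style imputation is used below as a stand-in for the methods behind datasets 3-4) tracks the true median. The lognormal data and the 60% censoring level are illustrative assumptions.

    # Sketch: substitution distorts medians once censoring passes 50%.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    true = rng.lognormal(0.0, 1.0, 200)            # true median = 1.0
    ql = np.quantile(true, 0.6)                    # 60% of values fall below the QL
    cens = true < ql

    sub_1ql = np.where(cens, ql, true)             # dataset-1-style substitution
    sub_half = np.where(cens, 0.5 * ql, true)      # dataset-2-style substitution

    # ROS-style: regress log(detects) on normal quantiles, impute the nondetects
    n = len(true)
    ranks = np.argsort(np.argsort(true)) + 1
    q = stats.norm.ppf((ranks - 0.375) / (n + 0.25))
    slope, intercept, *_ = stats.linregress(q[~cens], np.log(true[~cens]))
    ros = np.where(cens, np.exp(intercept + slope * q), true)

    for name, d in [("1*QL", sub_1ql), ("0.5*QL", sub_half), ("ROS-style", ros)]:
        print(f"{name:9s} median = {np.median(d):.3f}   (true = {np.median(true):.3f})")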

  5. Challenges in risk estimation using routinely collected clinical data: The example of estimating cervical cancer risks from electronic health-records.

    PubMed

    Landy, Rebecca; Cheung, Li C; Schiffman, Mark; Gage, Julia C; Hyun, Noorie; Wentzensen, Nicolas; Kinney, Walter K; Castle, Philip E; Fetterman, Barbara; Poitras, Nancy E; Lorey, Thomas; Sasieni, Peter D; Katki, Hormuzd A

    2018-06-01

    Electronic health-records (EHR) are increasingly used by epidemiologists studying disease following surveillance testing to provide evidence for screening intervals and referral guidelines. Although cost-effective, undiagnosed prevalent disease and interval censoring (in which asymptomatic disease is only observed at the time of testing) raise substantial analytic issues when estimating risk that cannot be addressed using Kaplan-Meier methods. Based on our experience analysing EHR from cervical cancer screening, we previously proposed the logistic-Weibull model to address these issues. Here we demonstrate how the choice of statistical method can impact risk estimates. We use observed data on 41,067 women in the cervical cancer screening program at Kaiser Permanente Northern California, 2003-2013, as well as simulations, to evaluate the ability of different methods (Kaplan-Meier, Turnbull, Weibull and logistic-Weibull) to accurately estimate risk within a screening program. Cumulative risk estimates from the statistical methods varied considerably, with the largest differences occurring for prevalent disease risk when baseline disease ascertainment was random but incomplete. Kaplan-Meier underestimated risk at earlier times and overestimated risk at later times in the presence of interval censoring or undiagnosed prevalent disease. Turnbull performed well, though was inefficient and not smooth. The logistic-Weibull model performed well, except when event times did not follow a Weibull distribution. We have demonstrated that methods for right-censored data, such as Kaplan-Meier, result in biased estimates of disease risks when applied to interval-censored data, such as data from screening programs using EHR. The logistic-Weibull model is attractive, but the model fit must be checked against Turnbull non-parametric risk estimates. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  6. Modeling stream fish distributions using interval-censored detection times.

    PubMed

    Ferreira, Mário; Filipe, Ana Filipa; Bardos, David C; Magalhães, Maria Filomena; Beja, Pedro

    2016-08-01

    Controlling for imperfect detection is important for developing species distribution models (SDMs). Occupancy-detection models based on the time needed to detect a species can be used to address this problem, but this is hindered when times to detection are not known precisely. Here, we extend the time-to-detection model to deal with detections recorded in time intervals and illustrate the method using a case study on stream fish distribution modeling. We collected electrofishing samples of six fish species across a Mediterranean watershed in Northeast Portugal. Based on a Bayesian hierarchical framework, we modeled the probability of water presence in stream channels, and the probability of species occupancy conditional on water presence, in relation to environmental and spatial variables. We also modeled time-to-first detection conditional on occupancy in relation to local factors, using modified interval-censored exponential survival models. Posterior distributions of occupancy probabilities derived from the models were used to produce species distribution maps. Simulations indicated that the modified time-to-detection model provided unbiased parameter estimates despite interval-censoring. There was a tendency for spatial variation in detection rates to be primarily influenced by depth and, to a lesser extent, stream width. Species occupancies were consistently affected by stream order, elevation, and annual precipitation. Bayesian P-values and AUCs indicated that all models had adequate fit and high discrimination ability, respectively. Mapping of predicted occupancy probabilities showed widespread distribution by most species, but uncertainty was generally higher in tributaries and upper reaches. The interval-censored time-to-detection model provides a practical solution to model occupancy-detection when detections are recorded in time intervals. This modeling framework is useful for developing SDMs while controlling for variation in detection rates, as it uses simple data that can be readily collected by field ecologists.
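
    A minimal sketch of the likelihood at the heart of this framework, assuming constant occupancy ψ and detection rate r (the paper models both with covariates in a Bayesian hierarchy): an occupied site yields a first detection in the recorded interval (a, b] with probability e^(-ra) - e^(-rb), while a survey of length T with no detection mixes non-occupancy with occupied-but-missed. All survey settings below are illustrative assumptions.

    # Sketch: ML occupancy-detection fit with interval-censored detection times.
    import numpy as np
    from scipy import optimize, special

    rng = np.random.default_rng(9)
    n_sites, T = 400, 10.0
    psi_true, r_true = 0.6, 0.3
    occupied = rng.random(n_sites) < psi_true
    t = np.where(occupied, rng.exponential(1 / r_true, n_sites), np.inf)
    detected = t <= T
    a = np.floor(t[detected])                      # detection time known only to an interval
    b = a + 1.0

    def negloglik(theta):
        psi, r = special.expit(theta[0]), np.exp(theta[1])
        ll = np.log(psi * (np.exp(-r * a) - np.exp(-r * b))).sum()          # detections
        ll += (~detected).sum() * np.log((1 - psi) + psi * np.exp(-r * T))  # none seen
        return -ll

    fit = optimize.minimize(negloglik, x0=[0.0, np.log(0.5)])
    print("psi_hat:", round(special.expit(fit.x[0]), 3), "r_hat:", round(np.exp(fit.x[1]), 3))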

  7. Backdating of events in electronic primary health care data: should one censor at the date of last data collection.

    PubMed

    Sammon, Cormac J; Petersen, Irene

    2016-04-01

    Studies using primary care databases often censor follow-up at the date data are last collected from clinical computer systems (last collection date (LCD)). We explored whether this results in the selective exclusion of events entered in the electronic health records after their date of occurrence, that is, backdated events. We used data from The Health Improvement Network (THIN). Using two versions of the database, we identified events that were entered into a later (THIN14) but not an earlier version of the database (THIN13) and investigated how the number of entries changed as a function of time since LCD. Times between events and the dates they were recorded were plotted as a function of time since the LCD in an effort to determine appropriate points at which to censor follow-up. There were 356 million eligible events in THIN14 and 355 million eligible events in THIN13. When comparing the two data sets, the proportion of missing events in THIN13 was highest in the month prior to the LCD (9.6%), decreasing to 5.2% at 6 months and 3.4% at 12 months. The proportion of missing events was largest for events typically diagnosed in secondary care such as neoplasms (28% in the month prior to LCD) and negligible for events typically diagnosed in primary care such as respiratory events (2% in the month prior to LCD). Studies using primary care databases, particularly those investigating events typically diagnosed outside primary care, should censor follow-up prior to the LCD to avoid underestimation of event rates. Copyright © 2016 John Wiley & Sons, Ltd.

  8. Accounting for Selection Bias in Studies of Acute Cardiac Events.

    PubMed

    Banack, Hailey R; Harper, Sam; Kaufman, Jay S

    2018-06-01

    In cardiovascular research, pre-hospital mortality represents an important potential source of selection bias. Inverse probability-of-censoring weights are a method to account for this source of bias. The objective of this article is to examine and correct for the influence of selection bias due to pre-hospital mortality on the relationship between cardiovascular risk factors and all-cause mortality after an acute cardiac event. The relationship between the number of cardiovascular disease (CVD) risk factors (0-5; smoking status, diabetes, hypertension, dyslipidemia, and obesity) and all-cause mortality was examined using data from the Atherosclerosis Risk in Communities (ARIC) study. To illustrate the magnitude of selection bias, estimates from an unweighted generalized linear model with a log link and binomial distribution were compared with estimates from an inverse probability-of-censoring weighted model. In unweighted multivariable analyses, the estimated risk ratio for mortality ranged from 1.09 (95% confidence interval [CI], 0.98-1.21) for 1 CVD risk factor to 1.95 (95% CI, 1.41-2.68) for 5 CVD risk factors. In the inverse probability-of-censoring weighted analyses, the risk ratios ranged from 1.14 (95% CI, 0.94-1.39) to 4.23 (95% CI, 2.69-6.66). Estimates from the inverse probability-of-censoring weighted model were substantially greater than the unweighted, adjusted estimates across all risk factor categories. This shows the magnitude of selection bias due to pre-hospital mortality and its effect on estimates of the effect of CVD risk factors on mortality. Moreover, the results highlight the utility of this method for addressing a common form of bias in cardiovascular research. Copyright © 2018 Canadian Cardiovascular Society. Published by Elsevier Inc. All rights reserved.
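
    A compact simulation makes the mechanics concrete: subjects who die before reaching hospital never enter the analysis, so hospital-only estimates are reweighted by 1/P(reach hospital | covariates), with that probability fitted here by a small logistic MLE. The data-generating model, covariates, and coefficients are all illustrative assumptions, not the ARIC analysis.

    # Sketch: inverse probability-of-censoring weights for pre-hospital selection.
    import numpy as np
    from scipy import optimize, special

    rng = np.random.default_rng(8)
    N = 40000
    z = rng.integers(0, 6, N)                   # CVD risk factor count, 0-5
    u = rng.normal(0, 1, N)                     # event severity (measured common cause)
    death = rng.random(N) < special.expit(-2.5 + 0.30 * z + 1.2 * u)
    reach = rng.random(N) < special.expit(2.0 - 0.2 * z - 1.2 * u)   # selection

    naive = death[reach & (z == 5)].mean() / death[reach & (z == 0)].mean()

    X = np.column_stack([np.ones(N), z, u])     # logistic model for P(reach | z, u)
    def nll(b):
        p = np.clip(special.expit(X @ b), 1e-12, 1 - 1e-12)
        return -(reach * np.log(p) + ~reach * np.log1p(-p)).sum()
    b = optimize.minimize(nll, x0=np.zeros(3)).x
    w = 1.0 / special.expit(X @ b)              # inverse probability-of-censoring weights

    def wrate(mask):
        return (death[mask] * w[mask]).sum() / w[mask].sum()

    ipcw = wrate(reach & (z == 5)) / wrate(reach & (z == 0))
    truth = death[z == 5].mean() / death[z == 0].mean()
    print(f"true RR {truth:.2f} | naive RR {naive:.2f} | IPCW RR {ipcw:.2f}")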

  9. Simply Symmetric

    ERIC Educational Resources Information Center

    de Villiers, Michael

    2011-01-01

    Symmetry has been found in the visual arts, architecture, and the design of artefacts since the earliest times. Many natural objects, both organic and inorganic, display symmetry: from microscopic crystals and sub-atomic particles to macro-cosmic galaxies. Today it features strongly in higher mathematics such as Linear and Abstract Algebra, Projective and…

  10. Cosmic strings and chronology protection

    NASA Astrophysics Data System (ADS)

    Grant, James D. E.

    1993-03-01

    A space consisting of two rapidly moving cosmic strings has recently been constructed by Gott that contains closed timelike curves. The global structure of this space is analyzed and it is found that, away from the strings, the space is identical to a generalized Misner space. The vacuum expectation value of the energy-momentum tensor for a conformally coupled scalar field is calculated on this generalized Misner space. It is found to diverge very weakly on the chronology horizon, but more strongly on the polarized hypersurfaces. The divergence on the polarized hypersurfaces is strong enough that when the proper geodesic interval around any polarized hypersurface is of the order of the Planck length squared, the perturbation to the metric caused by the back reaction will be of the order one. Thus we expect the structure of the space will be radically altered by the back reaction before quantum gravitational effects become important. This suggests that Hawking's "chronology protection conjecture" holds for spaces with a noncompactly generated chronology horizon.

  11. Confinement and diffusion time-scales of CR hadrons in AGN-inflated bubbles

    NASA Astrophysics Data System (ADS)

    Prokhorov, D. A.; Churazov, E. M.

    2017-09-01

    While rich clusters are powerful sources of X-rays, γ-ray emission from these large cosmic structures has not been detected yet. X-ray radiative energy losses in the central regions of relaxed galaxy clusters are so strong that one needs to consider special sources of energy, likely active galactic nucleus (AGN) feedback, to suppress catastrophic cooling of the gas. We consider a model of AGN feedback that postulates that the AGN supplies the energy to the gas by inflating bubbles of relativistic plasma, whose energy content is dominated by cosmic-ray (CR) hadrons. If most of these hadrons can quickly escape the bubbles, then collisions of CRs with thermal protons in the intracluster medium (ICM) should lead to strong γ-ray emission, unless fast diffusion of CRs removes them from the cluster. Therefore, the lack of detections with modern γ-ray telescopes sets limits on the confinement time of CR hadrons in bubbles and CR diffusive propagation in the ICM.

  12. Cosmological constraints on Brans-Dicke theory.

    PubMed

    Avilez, A; Skordis, C

    2014-07-04

    We report strong cosmological constraints on the Brans-Dicke (BD) theory of gravity using cosmic microwave background data from Planck. We consider two types of models. First, the initial condition of the scalar field is fixed to give the same effective gravitational strength G_eff today as the one measured on Earth, G_N. In this case, the BD parameter ω is constrained to ω > 692 at the 99% confidence level, an order of magnitude improvement over previous constraints. In the second type, the initial condition for the scalar is a free parameter leading to a somewhat stronger constraint of ω > 890, while G_eff is constrained to 0.981

  13. Quasinormal Modes and Strong Cosmic Censorship.

    PubMed

    Cardoso, Vitor; Costa, João L; Destounis, Kyriakos; Hintz, Peter; Jansen, Aron

    2018-01-19

    The fate of Cauchy horizons, such as those found inside charged black holes, is intrinsically connected to the decay of small perturbations exterior to the event horizon. As such, the validity of the strong cosmic censorship (SCC) conjecture is tied to how effectively the exterior damps fluctuations. Here, we study massless scalar fields in the exterior of Reissner-Nordström-de Sitter black holes. Their decay rates are governed by quasinormal modes of the black hole. We identify three families of modes in these spacetimes: one directly linked to the photon sphere, well described by standard WKB-type tools; another family whose existence and time scale is closely related to the de Sitter horizon; finally, a third family which dominates for near-extremally charged black holes and which is also present in asymptotically flat spacetimes. The last two families of modes seem to have gone unnoticed in the literature. We give a detailed description of linear scalar perturbations of such black holes, and conjecture that SCC is violated in the near extremal regime.

  14. Quasinormal Modes and Strong Cosmic Censorship

    NASA Astrophysics Data System (ADS)

    Cardoso, Vitor; Costa, João L.; Destounis, Kyriakos; Hintz, Peter; Jansen, Aron

    2018-01-01

    The fate of Cauchy horizons, such as those found inside charged black holes, is intrinsically connected to the decay of small perturbations exterior to the event horizon. As such, the validity of the strong cosmic censorship (SCC) conjecture is tied to how effectively the exterior damps fluctuations. Here, we study massless scalar fields in the exterior of Reissner-Nordström-de Sitter black holes. Their decay rates are governed by quasinormal modes of the black hole. We identify three families of modes in these spacetimes: one directly linked to the photon sphere, well described by standard WKB-type tools; another family whose existence and time scale is closely related to the de Sitter horizon; finally, a third family which dominates for near-extremally charged black holes and which is also present in asymptotically flat spacetimes. The last two families of modes seem to have gone unnoticed in the literature. We give a detailed description of linear scalar perturbations of such black holes, and conjecture that SCC is violated in the near extremal regime.

  15. The radiation environment on the Moon from galactic cosmic rays in a lunar habitat.

    PubMed

    Jia, Y; Lin, Z W

    2010-02-01

    We calculated how the radiation environment in a habitat on the surface of the Moon would have depended on the thickness of the habitat in the 1977 galactic cosmic-ray environment. The Geant4 Monte Carlo transport code was used, and a hemispherical dome made of lunar regolith was used to simulate the lunar habitat. We investigated the effective dose from primary and secondary particles including nuclei from protons up to nickel, neutrons, charged pions, photons, electrons and positrons. The total effective dose showed a strong decrease with the thickness of the habitat dome. However, the effective dose values from secondary neutrons, charged pions, photons, electrons and positrons all showed a strong increase followed by a gradual decrease with the habitat thickness. The fraction of the summed effective dose from these secondary particles in the total effective dose increased with the habitat thickness, from approximately 5% for the no-habitat case to about 47% for the habitat with an areal thickness of 100 g/cm².

  16. A search for ultrahigh-energy neutrinos and measurement of cosmic ray radio emission with the Antarctic Impulsive Transient Antenna

    NASA Astrophysics Data System (ADS)

    Hoover, Stephen Lam Douglas

    2010-11-01

    New astronomical messengers may reveal unexpected aspects of the Universe and have often provided a unique source of fresh physical insights. Neutrinos are a promising new messenger particle, capable of carrying information from otherwise inaccessible sources. The ANtarctic Impulsive Transient Antenna (ANITA) seeks to make the first detection of an ultrahigh-energy (E > 10^18 eV) neutrino flux. Such a neutrino flux almost certainly exists, produced in interactions of ultrahigh-energy cosmic rays with photons from the cosmic microwave background. ANITA is a balloon payload which monitors large volumes of the Antarctic ice sheet from an altitude of 38 km. An ultrahigh-energy neutrino which interacts in the ice sheet will produce a particle shower which will coherently radiate Cherenkov radiation at radio wavelengths (<3 GHz). Antennas on the balloon payload can then detect the resulting impulsive radio signal. The full ANITA flew for the first time from 15 December 2006 to 19 January 2007. In this dissertation, I will describe the ground calibration system used to transmit calibration signals to the payload in-flight. I will then describe techniques for analysis of ANITA data and give limits on the ultrahigh-energy neutrino flux implied by the null result of that analysis. Finally, I will demonstrate that ANITA is also sensitive to ultrahigh-energy cosmic rays and show the detection of 16 ultrahigh-energy cosmic-ray events during ANITA's first flight. This constitutes the highest-frequency and widest-bandwidth radio observations of cosmic-ray emission to date. I show the average waveform and spectrum of these events and describe their polarization properties, which are strongly correlated with the geomagnetic field.

  17. Model-independent Constraints on Cosmic Curvature and Opacity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Guo-Jian; Li, Zheng-Xiang; Xia, Jun-Qing

    2017-09-20

    In this paper, we propose to estimate the spatial curvature of the universe and the cosmic opacity in a model-independent way with expansion rate measurements, H(z), and type Ia supernovae (SNe Ia). On the one hand, using a nonparametric smoothing method, Gaussian processes, we reconstruct the function H(z) from opacity-free expansion rate measurements. Then, we integrate H(z) to obtain the distance modulus μ_H, which is dependent on the cosmic curvature. On the other hand, distances of SNe Ia can be determined by their photometric observations and thus are opacity-dependent. In our analysis, by confronting the distance moduli μ_H with those obtained from SNe Ia, we achieve estimations for both the spatial curvature and the cosmic opacity without any assumptions for the cosmological model. Here, it should be noted that the light curve fitting parameters, accounting for the distance estimation of SNe Ia, are determined in a global fit together with the cosmic opacity and spatial curvature to get rid of the dependence of these parameters on cosmology. In addition, we also investigate whether the inclusion of different priors for the present expansion rate (H_0: global estimation, 67.74 ± 0.46 km s^-1 Mpc^-1, and local measurement, 73.24 ± 1.74 km s^-1 Mpc^-1) exerts an influence on the reconstructed H(z) and the subsequent estimations of the spatial curvature and cosmic opacity. Results show that, in general, a spatially flat and transparent universe is preferred by the observations. Moreover, it is suggested that priors for H_0 matter a lot. Finally, we find that there is a strong degeneracy between the curvature and the opacity.

  18. Cold, warm, and composite (cool) cosmic string models

    NASA Astrophysics Data System (ADS)

    Carter, B.

    1994-01-01

    The dynamical behaviour of a cosmic string is strongly affected by any reduction of the effective string tension T below the constant value, T = m^2 say, that typifies a simple, longitudinally Lorentz invariant Goto-Nambu type string model, where m is a fixed mass scale determined by the internal structure of an underlying Nielsen-Olesen type vacuum vortex. Such a reduction of tension occurs in the standard "warm" cosmic string model in which the effect of thermal perturbations of a simple Goto-Nambu model is represented by an effective tension T given in terms of the corresponding effective temperature, Θ say, by T^2 = m^2(m^2 - (1/3)πΘ^2). A qualitatively similar though analytically more complicated tension reduction phenomenon occurs in "cold" conducting cosmic string models of the kind whose existence was first proposed by Witten, where the role of the temperature is played by an effective mass or chemical potential μ that is constructed as the scalar magnitude of the energy-momentum covector obtained as the gradient of the phase ϕ of a bosonic condensate in the core of the vacuum vortex. The present article describes the construction and essential mechanical properties of a new category of composite "cool" cosmic string models that are intermediate between these "warm" and "cold" limit cases. These composite models are the string analogues of the standard Landau model for a two-constituent finite temperature superfluid, and as such involve two independent currents, interpretable as that of the entropy on the one hand and that of the bosonic condensate on the other. It is surmised that the stationary (in particular ring) equilibrium states of such "cool" cosmic strings may be of cosmological significance.

  19. Heliospheric Impact on Cosmic Rays Modulation

    NASA Astrophysics Data System (ADS)

    Tiwari, Bhupendra Kumar

    2016-07-01

    The flux of cosmic rays (CRs) at Earth is modulated by the heliospheric magnetic field and the structure of the heliosphere, which are controlled by solar outputs and their variability. The sunspot number (SSN) is often treated as a primary indicator of solar activity (SA). Galactic cosmic rays (GCRs) entering the heliosphere are affected by the interplanetary magnetic field (IMF) and the solar wind speed, and their modulation varies with the varying solar activity. The observations are based on solar-interplanetary activity indices recorded from the OMNIWeb data centre and on monthly mean count rates of cosmic ray intensity (CRI) from neutron monitors with different cut-off rigidities Rc (Moscow, Rc = 2.42 GV; Oulu, Rc = 0.80 GV). During the minimum solar activity period of solar cycle 23/24, the Sun was remarkably quiet, with the weakest IMF strength and the least dense and slowest solar wind, whereas in 2003 the highest yearly averaged solar wind speed (~568 km/s), associated with several coronal holes generating high-speed wind streams, was recorded. It is observed that GCR fluxes are reduced and strongly anti-correlated with SSN (0.80) and IMF (0.86). CRI modulation is produced by strong solar flares; however, CME-associated solar flares produce more disturbance in the interplanetary medium as well as in the geomagnetic field. It is found that the count rate of cosmic ray intensity and solar-interplanetary parameters were inversely correlated, while solar indices were positively correlated. Keywords: Galactic cosmic rays (GCRs), sunspot number (SSN), solar activity (SA), coronal mass ejection (CME), interplanetary magnetic field (IMF).

  20. Direct observations of galactic cosmic rays

    NASA Astrophysics Data System (ADS)

    Müller, Dietrich

    2012-08-01

    The mysterious "radiation ... entering our atmosphere from above" discovered by Hess in 1912 is now known to be dominated by relativistic charged particles, mostly with energies in the GeV range, but extending to energies higher by many orders of magnitude. As none of these particles can penetrate the Earth's atmosphere without interaction, detailed studies of their composition and energy spectra require observations with high-altitude balloons or spacecraft. This became possible only towards the middle of the 20th century. The direct measurements have now revealed much detail about the Galactic cosmic rays below 10^15 eV, but do not yet provide much overlap with the air-shower region of energies. A historic overview of the measurements is given, beginning with the realization that the majority of the cosmic rays are protons. The discovery and astrophysical significance of the heavier nuclei, and of the ultra-heavy nuclei beyond iron and up to the actinides, are then described, and measurements of the isotopic composition are discussed. Observations of the individual energy spectra are reviewed, and finally, the detection of electrons, positrons, and anti-protons in the cosmic rays, and the searches for exotic or unusual phenomena are summarized. Emphasis is given to the fact that all of these discoveries have become possible through the evolution of increasingly sophisticated detection techniques, a process that is continuing through the present time. The precise knowledge of the abundance distributions of the elements in the cosmic rays and of their isotopic composition permits a comparison with the "universal abundance scale" and provides strong constraints on the origin of the cosmic-ray material in the interstellar medium. "Clock-isotopes" reveal the time history of the particles. The shapes of the energy spectra of the individual cosmic-ray components are related to evolving ideas about particle acceleration and propagation in the Galaxy. In conclusion, prospects for future work are briefly discussed.

  1. Mass composition studies of Ultra High Energy cosmic rays through the measurement of the Muon Production Depths at the Pierre Auger Observatory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collica, Laura

    The Pierre Auger Observatory (Auger) in Argentina studies the physics of Ultra High Energy Cosmic Rays (UHECRs). The flux of cosmic rays at these energies (above 10^18 eV) is very low (less than 100 particles/km^2 per year), and UHECR properties must be inferred from measurements of the secondary particles that the cosmic ray primary produces in the atmosphere. These particle cascades are called Extensive Air Showers (EAS) and can be studied at ground level by deploying detectors covering large areas. EAS physics is complex, and the properties of secondary particles depend strongly on the first interaction, which takes place at an energy beyond those reached at accelerators. As a consequence, the analysis of UHECRs is subject to large uncertainties, and hence many of their properties, in particular their composition, are still unclear. Two complementary techniques are used at Auger to detect EAS initiated by UHECRs: a 3000 km^2 surface detector (SD) array of water Cherenkov tanks, which samples particles at ground level, and fluorescence detectors (FD), which collect the ultraviolet light emitted by the de-excitation of nitrogen nuclei in the atmosphere and can operate only on clear, moonless nights. Auger is the largest cosmic ray detector ever built, and it provides high-quality data together with unprecedented statistics. The main goal of this thesis is the measurement of UHECR mass composition using data from the SD of the Pierre Auger Observatory. Measuring the cosmic ray composition at the highest energies is of fundamental importance from the astrophysical point of view, since it could discriminate between different scenarios of origin and propagation of cosmic rays. Moreover, mass composition studies are of utmost importance for particle physics. As a matter of fact, knowing the composition helps in exploring hadronic interactions at ultra-high energies, inaccessible to present accelerator experiments.

  2. A strategy to unveil transient sources of ultra-high-energy cosmic rays

    NASA Astrophysics Data System (ADS)

    Takami, Hajime

    2013-06-01

    Transient generation of ultra-high-energy cosmic rays (UHECRs) has been motivated from promising candidates of UHECR sources such as gamma-ray bursts, flares of active galactic nuclei, and newly born neutron stars and magnetars. Here we propose a strategy to unveil transient sources of UHECRs from UHECR experiments. We demonstrate that the rate of UHECR bursts and/or flares is related to the apparent number density of UHECR sources, which is the number density estimated on the assumption of steady sources, and the time-profile spread of the bursts produced by cosmic magnetic fields. The apparent number density strongly depends on UHECR energies under a given rate of the bursts, which becomes observational evidence of transient sources. It is saturated at the number density of host galaxies of UHECR sources. We also derive constraints on the UHECR burst rate and/or energy budget of UHECRs per source as a function of the apparent source number density by using models of cosmic magnetic fields. In order to obtain a precise constraint on the UHECR burst rate, high event statistics above ~10^20 eV for evaluating the apparent source number density at the highest energies, and better knowledge of cosmic magnetic fields from future observations and/or simulations to better estimate the time-profile spread of UHECR bursts, are required. The estimated rate allows us to constrain transient UHECR sources by comparison with the occurrence rates of known energetic transient phenomena.

  3. On the maximum energy of shock-accelerated cosmic rays at ultra-relativistic shocks

    NASA Astrophysics Data System (ADS)

    Reville, B.; Bell, A. R.

    2014-04-01

    The maximum energy to which cosmic rays can be accelerated at weakly magnetised ultra-relativistic shocks is investigated. We demonstrate that for such shocks, in which the scattering of energetic particles is mediated exclusively by ion skin-depth scale structures, as might be expected for a Weibel-mediated shock, there is an intrinsic limit on the maximum energy to which particles can be accelerated. This maximum energy is determined from the requirement that particles must be isotropized in the downstream plasma frame before the mean field transports them far downstream, and it falls considerably short of what is required to produce ultra-high-energy cosmic rays. To circumvent this limit, a highly disorganized field is required on larger scales. The growth of cosmic ray-induced instabilities on wavelengths much longer than the ion-plasma skin depth, both upstream and downstream of the shock, is considered. While these instabilities may play an important role in magnetic field amplification at relativistic shocks, on scales comparable to the gyroradius of the most energetic particles the calculated growth rates are too small for the instabilities to modify the scattering in the available time. Since strong modification is a necessary condition for particles in the downstream region to re-cross the shock, in the absence of an alternative scattering mechanism these results imply that acceleration to higher energies is ruled out. If weakly magnetized ultra-relativistic shocks are disfavoured as high-energy particle accelerators in general, the search for potential sources of ultra-high-energy cosmic rays can be narrowed.

  4. A method for establishing constraints on galactic magnetic field models using ultra high energy cosmic rays and results from the data of the Pierre Auger Observatory

    NASA Astrophysics Data System (ADS)

    Sutherland, Michael Stephen

    2010-12-01

    The Galactic magnetic field is poorly understood. Essentially the only reliable measurements of its properties are the local orientation and field strength. Its behavior at galactic scales is unknown. Historically, magnetic field measurements have been performed using radio astronomy techniques which are sensitive to certain regions of the Galaxy and rely upon models of the distribution of gas and dust within the disk. However, the deflection of trajectories of ultra high energy cosmic rays arriving from extragalactic sources depends only on the properties of the magnetic field. In this work, a method is developed for determining acceptable global models of the Galactic magnetic field by backtracking cosmic rays through the field model. This method constrains the parameter space of magnetic field models by comparing a test statistic between backtracked cosmic rays and isotropic expectations for assumed cosmic ray source and composition hypotheses. Constraints on Galactic magnetic field models are established using data from the southern site of the Pierre Auger Observatory under various source distribution and cosmic ray composition hypotheses. Field models possessing structure similar to the stellar spiral arms are found to be inconsistent with hypotheses of an iron cosmic ray composition and sources selected from catalogs tracing the local matter distribution in the universe. These field models are consistent with hypothesis combinations of proton composition and sources tracing the local matter distribution. In particular, strong constraints are found on the parameter space of bisymmetric magnetic field models scanned under hypotheses of proton composition and sources selected from the 2MRS-VS, Swift 39-month, and VCV catalogs. Assuming that the Galactic magnetic field is well-described by a bisymmetric model under these hypotheses, the magnetic field strength near the Sun is less than 3-4 μG and the magnetic pitch angle is less than -8°. These results comprise the first measurements of the Galactic magnetic field using ultra-high energy cosmic rays and supplement existing radio astronomical measurements of the Galactic magnetic field.
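
    The backtracking step at the heart of this method is conceptually simple: propagate each observed cosmic ray through a candidate field model with the sign of its charge reversed, and read off the direction it had on entering the Galaxy. A toy sketch, assuming a uniform 3 μG field and purely ultra-relativistic motion (the thesis scans parametrized bisymmetric models; every name and value below is illustrative):

    ```python
    import numpy as np

    C = 299792458.0      # speed of light [m/s]
    KPC = 3.0857e19      # kiloparsec [m]

    def toy_field(r):
        """Placeholder field model: 3 microgauss along +y everywhere.
        A real analysis would evaluate a parametrized (e.g. bisymmetric) model."""
        return np.array([0.0, 3.0e-10, 0.0])   # tesla

    def backtrack(direction, energy_eV, Z, field=toy_field, step=0.05 * KPC,
                  max_path=20.0 * KPC):
        """Trace a cosmic ray backwards through a Galactic field model.

        Backtracking flips the sign of the charge, so the returned direction is
        the one the particle had before entering the field region. For an
        ultra-relativistic particle dn/ds = (Ze c / E) n x B, and with E in eV
        the elementary charge cancels, leaving k = Z*C/E in 1/(m*tesla).
        """
        n = np.asarray(direction, float)
        n = n / np.linalg.norm(n)
        r = np.array([-8.5 * KPC, 0.0, 0.0])   # start near the Sun's position
        k = -Z * C / energy_eV                 # minus sign: charge reversed
        s = 0.0
        while s < max_path:
            # midpoint (RK2) step; renormalize to keep |n| = 1
            n_mid = n + 0.5 * step * k * np.cross(n, field(r))
            n = n + step * k * np.cross(n_mid, field(r + 0.5 * step * n))
            n = n / np.linalg.norm(n)
            r = r + step * n
            s += step
        return r, n

    # Example: a 6e19 eV proton observed arriving from the Galactic north pole
    r_out, n_out = backtrack(direction=[0.0, 0.0, 1.0], energy_eV=6e19, Z=1)
    print("direction outside the Galaxy:", n_out.round(3))
    ```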

  5. A Bitter Pill: The Cosmic Lithium Problem

    NASA Astrophysics Data System (ADS)

    Fields, Brian

    2014-03-01

    Primordial nucleosynthesis describes the production of the lightest nuclides in the first three minutes of cosmic time. We will discuss the transformative influence of the WMAP and Planck determinations of the cosmic baryon density. Coupled with nucleosynthesis theory, these measurements make tight predictions for the primordial light element abundances: deuterium observations agree spectacularly with these predictions, helium observations are in good agreement, but lithium observations (in ancient halo stars) are significantly discrepant; this is the ``lithium problem.'' Over the past decade, the lithium discrepancy has become more severe, and very recently the solution space has shrunk. A solution due to new nuclear resonances has now been essentially ruled out experimentally. Stellar evolution solutions remain viable but must be finely tuned. Observational systematics are now being probed by qualitatively new methods of lithium observation. Finally, new physics solutions are now strongly constrained by the combination of the precision baryon determination by Planck, and the need to match the D/H abundances now measured to unprecedented precision at high redshift. Supported in part by NSF grant PHY-1214082.

  6. Cosmic ray electrons, positrons and the synchrotron emission of the Galaxy: consistent analysis and implications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernardo, Giuseppe Di; Evoli, Carmelo; Gaggero, Daniele

    2013-03-01

    A multichannel analysis of cosmic ray electron and positron spectra and of the diffuse synchrotron emission of the Galaxy is performed by using the DRAGON code. This study is aimed at probing the interstellar electron source spectrum down to E ≲ 1 GeV and at constraining several propagation parameters. We find that above 4 GeV the e⁻ source spectrum is compatible with a power law of index ∼ 2.5. Below 4 GeV instead it must be significantly suppressed and the total lepton spectrum is dominated by secondary particles. The positron spectrum and fraction measured below a few GeV are consistently reproduced only within low-reacceleration models. We also constrain the scale height z_t of the cosmic-ray distribution using three independent (and, in two cases, original) arguments, showing that values of z_t ≲ 2 kpc are excluded. This result may have strong implications for particle dark matter searches.

  7. Measurements of the cosmic background radiation

    NASA Technical Reports Server (NTRS)

    Lubin, P.; Villela, T.

    1987-01-01

    Maps of the large scale structure (theta is greater than 6 deg) of the cosmic background radiation covering 90 percent of the sky are now available. The data show a very strong 50-100 sigma (statistical error) dipole component, interpreted as being due to our motion, with a direction of alpha = 11.5 ± 0.15 hours, delta = -5.6 ± 2.0 deg. The inferred direction of the velocity of our galaxy relative to the cosmic background radiation is alpha = 10.6 ± 0.3 hours, delta = -2.3 ± 5 deg. This is 44 deg from the center of the Virgo cluster. After removing the dipole component, the data show a galactic signature but no apparent residual structure. An autocorrelation of the residual data, after subtraction of the galactic component from the combined Berkeley (3 mm) and Princeton (12 mm) data sets, shows no apparent structure from 10 to 180 deg, with an rms of 0.01 mK². At the 90 percent confidence level, a limit of 0.00007 is placed on a quadrupole component.

  8. Background observations on the SMM high energy monitor at energies greater than 10 MeV

    NASA Technical Reports Server (NTRS)

    Forrest, D. J.

    1989-01-01

    The background rate in any gamma ray detector on a spacecraft in near-earth orbit is strongly influenced by the primary cosmic ray flux at the spacecraft's position. Although the direct counting of the primary cosmic rays can be rejected by anticoincidence shields, secondary production cannot be. Secondary production of gamma rays and neutrons in the instrument, the spacecraft, and the earth's atmosphere is recorded as background. A 30 day data base of 65.5 second records has been used to show that some of the background rates observed on the Gamma Ray Spectrometer can be ordered to a precision on the order of 1 percent. This ordering is done with only two parameters, namely the cosmic ray vertical cutoff rigidity and the instrument's pointing angle with respect to the earth's center. This result sets limits on any instrumental instability and also on any temporal or spatial changes in the background radiation field.

  9. Markov chains and semi-Markov models in time-to-event analysis.

    PubMed

    Abner, Erin L; Charnigo, Richard J; Kryscio, Richard J

    2013-10-25

    A variety of statistical methods are available to investigators for analysis of time-to-event data, often referred to as survival analysis. Kaplan-Meier estimation and Cox proportional hazards regression are commonly employed tools but are not appropriate for all studies, particularly in the presence of competing risks and when multiple or recurrent outcomes are of interest. Markov chain models can accommodate censored data, competing risks (informative censoring), multiple outcomes, recurrent outcomes, frailty, and non-constant survival probabilities. Markov chain models, though often overlooked by investigators in time-to-event analysis, have long been used in clinical studies and have widespread application in other fields.
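
    The point is easy to demonstrate: with an absorbing death state, state-occupancy probabilities after k cycles are read off the k-th power of the transition matrix, and survival is one minus the absorbed mass. A minimal sketch with an invented three-state chain (not from the paper):

    ```python
    import numpy as np

    # States: 0 = healthy, 1 = ill, 2 = dead (absorbing). The per-cycle
    # transition probabilities are invented for illustration.
    P = np.array([[0.90, 0.08, 0.02],
                  [0.10, 0.75, 0.15],
                  [0.00, 0.00, 1.00]])

    start = np.array([1.0, 0.0, 0.0])    # the whole cohort starts healthy

    for k in (1, 5, 10):
        occupancy = start @ np.linalg.matrix_power(P, k)
        survival = 1.0 - occupancy[2]    # probability of not yet being absorbed
        print(f"cycle {k:2d}: occupancy {occupancy.round(3)}, survival {survival:.3f}")
    ```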

  10. Markov chains and semi-Markov models in time-to-event analysis

    PubMed Central

    Abner, Erin L.; Charnigo, Richard J.; Kryscio, Richard J.

    2014-01-01

    A variety of statistical methods are available to investigators for analysis of time-to-event data, often referred to as survival analysis. Kaplan-Meier estimation and Cox proportional hazards regression are commonly employed tools but are not appropriate for all studies, particularly in the presence of competing risks and when multiple or recurrent outcomes are of interest. Markov chain models can accommodate censored data, competing risks (informative censoring), multiple outcomes, recurrent outcomes, frailty, and non-constant survival probabilities. Markov chain models, though often overlooked by investigators in time-to-event analysis, have long been used in clinical studies and have widespread application in other fields. PMID:24818062

  11. Photon counting, censor corrections, and lifetime imaging for improved detection in two-photon microscopy

    PubMed Central

    Driscoll, Jonathan D.; Shih, Andy Y.; Iyengar, Satish; Field, Jeffrey J.; White, G. Allen; Squier, Jeffrey A.; Cauwenberghs, Gert

    2011-01-01

    We present a high-speed photon counter for use with two-photon microscopy. Counting pulses of photocurrent, as opposed to analog integration, maximizes the signal-to-noise ratio so long as the uncertainty in the count does not exceed the gain-noise of the photodetector. Our system extends this improvement through an estimate of the count that corrects for the censored period after detection of an emission event. The same system can be rapidly reconfigured in software for fluorescence lifetime imaging, which we illustrate by distinguishing between two spectrally similar fluorophores in an in vivo model of microstroke. PMID:21471395
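
    The censored-period correction described here is, in spirit, the classical non-paralyzable dead-time adjustment: after each detected pulse the counter is blind for a period τ, so the measured rate m under-reports the true rate λ. The paper's exact estimator is not reproduced here; a standard correction of this kind is λ = m/(1 - mτ):

    ```python
    def corrected_rate(counts, bin_time, dead_time):
        """Estimate the true photon rate from counts observed in a bin of
        length bin_time, assuming a non-paralyzable censored (dead) period
        dead_time after each detected event: lambda = m / (1 - m * dead_time)."""
        m = counts / bin_time                 # measured rate
        assert m * dead_time < 1.0, "measured rate is at the dead-time limit"
        return m / (1.0 - m * dead_time)

    # Example: 8 photons in a 100 ns dwell with a 2 ns censored period per event
    print(f"{corrected_rate(8, 100e-9, 2e-9) / 1e6:.1f} Mcounts/s")  # ~95.2
    ```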

  12. Multiple imputation for cure rate quantile regression with censored data.

    PubMed

    Wu, Yuanshan; Yin, Guosheng

    2017-03-01

    The main challenge in the context of cure rate analysis is that one never knows whether censored subjects are cured or uncured, or whether they are susceptible or insusceptible to the event of interest. Considering the susceptible indicator as missing data, we propose a multiple imputation approach to cure rate quantile regression for censored data with a survival fraction. We develop an iterative algorithm to estimate the conditionally uncured probability for each subject. By utilizing this estimated probability and Bernoulli sample imputation, we can classify each subject as cured or uncured, and then employ the locally weighted method to estimate the quantile regression coefficients with only the uncured subjects. Repeating the imputation procedure multiple times and taking an average over the resultant estimators, we obtain consistent estimators for the quantile regression coefficients. Our approach relaxes the usual global linearity assumption, so that we can apply quantile regression to any particular quantile of interest. We establish asymptotic properties for the proposed estimators, including both consistency and asymptotic normality. We conduct simulation studies to assess the finite-sample performance of the proposed multiple imputation method and apply it to a lung cancer study as an illustration. © 2016, The International Biometric Society.
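
    The imputation loop described above can be sketched compactly. Suppose the conditional probability that each censored subject is uncured has already been estimated (the paper obtains it iteratively); each imputation then draws Bernoulli cure labels, fits a quantile regression to the subjects classified as uncured, and the coefficients are averaged across imputations. A rough sketch using statsmodels, omitting the paper's locally weighted adjustment for censoring within the uncured group (all variable names and data are hypothetical):

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)

    def mi_cure_quantreg(y, X, censored, p_uncured, tau=0.5, m_imputations=20):
        """Multiple-imputation sketch for cure-rate quantile regression.

        y          : observed (log) event or censoring times
        X          : design matrix, intercept column included
        censored   : True where the subject is censored
        p_uncured  : estimated P(uncured) for each subject; uncensored subjects
                     experienced the event and are uncured by definition
        """
        betas = []
        for _ in range(m_imputations):
            # Bernoulli imputation of the latent cure labels
            uncured = ~censored | (rng.random(len(y)) < p_uncured)
            fit = sm.QuantReg(y[uncured], X[uncured]).fit(q=tau)
            betas.append(fit.params)
        return np.mean(betas, axis=0)          # average over imputations

    # Toy usage with synthetic data
    n = 200
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    y = X @ np.array([1.0, 0.5]) + rng.normal(scale=0.3, size=n)
    censored = rng.random(n) < 0.3
    p_uncured = np.full(n, 0.6)                # would come from the iterative step
    print(mi_cure_quantreg(y, X, censored, p_uncured).round(2))
    ```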

  13. Multivariate-$t$ nonlinear mixed models with application to censored multi-outcome AIDS studies.

    PubMed

    Lin, Tsung-I; Wang, Wan-Lun

    2017-10-01

    In multivariate longitudinal HIV/AIDS studies, multi-outcome repeated measures on each patient over time may contain outliers, and the viral loads are often subject to an upper or lower limit of detection depending on the quantification assays. In this article, we consider an extension of the multivariate nonlinear mixed-effects model by adopting a joint multivariate-$t$ distribution for random effects and within-subject errors and taking the censoring information of multiple responses into account. The proposed model is called the multivariate-$t$ nonlinear mixed-effects model with censored responses (MtNLMMC), allowing for the analysis of multi-outcome longitudinal data exhibiting nonlinear growth patterns with censorship and fat-tailed behavior. Utilizing the Taylor-series linearization method, a pseudo-data version of the expectation conditional maximization either (ECME) algorithm is developed for iteratively carrying out maximum likelihood estimation. We illustrate our techniques with two data examples from HIV/AIDS studies. Experimental results signify that the MtNLMMC performs favorably compared to its Gaussian analogue and some existing approaches. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  14. Thermal Cycling Life Prediction of Sn-3.0Ag-0.5Cu Solder Joint Using Type-I Censored Data

    PubMed Central

    Mi, Jinhua; Yang, Yuan-Jian; Huang, Hong-Zhong

    2014-01-01

    Because solder joint interconnections are the weak points of microelectronic packaging, their reliability has great influence on the reliability of the entire packaging structure. Based on an accelerated life test, the reliability assessment and life prediction of lead-free solder joints using a Weibull distribution are investigated. The type-I interval censored lifetime data were collected from a thermal cycling test, which was implemented on microelectronic packaging with lead-free ball grid array (BGA) and fine-pitch ball grid array (FBGA) interconnection structures. The number of cycles to failure of lead-free solder joints is predicted by using a modified Engelmaier fatigue life model and a type-I censored data processing method. Then, the Pan model is employed to calculate the acceleration factor of this test. A comparison of life predictions between the proposed method and the ones calculated directly by Matlab and Minitab is conducted to demonstrate the practicability and effectiveness of the proposed method. Finally, failure analysis and microstructure evolution of lead-free solders are carried out to provide useful guidance for regular maintenance, replacement of substructure, and subsequent processing of electronic products. PMID:25121138
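
    The core computation behind such a reliability assessment is a censored Weibull fit: units that failed contribute the log-density, and units still running at the end of the test (type-I censoring) contribute the log-survival function. A minimal sketch with invented cycle counts, not the paper's data:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def weibull_neg_loglik(params, t, failed):
        """Negative log-likelihood for type-I censored Weibull data: failures
        contribute log f(t), censored units contribute log S(t) = -(t/eta)^m."""
        m, eta = np.exp(params)               # shape m, scale eta (kept positive)
        z = t / eta
        log_f = np.log(m / eta) + (m - 1.0) * np.log(z) - z**m
        log_S = -z**m
        return -(log_f[failed].sum() + log_S[~failed].sum())

    # Invented thermal-cycling data, test truncated (censored) at 3000 cycles
    t = np.array([1450., 1820., 2100., 2370., 2600., 2840.,
                  3000., 3000., 3000., 3000.])
    failed = t < 3000.0

    res = minimize(weibull_neg_loglik, x0=[0.0, np.log(t.mean())],
                   args=(t, failed), method="Nelder-Mead")
    m_hat, eta_hat = np.exp(res.x)
    print(f"Weibull shape = {m_hat:.2f}, scale = {eta_hat:.0f} cycles")
    ```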

  15. The fracture load and failure types of veneered anterior zirconia crowns: an analysis of normal and Weibull distribution of complete and censored data.

    PubMed

    Stawarczyk, Bogna; Ozcan, Mutlu; Hämmerle, Christoph H F; Roos, Malgorzata

    2012-05-01

    The aim of this study was to compare the fracture load of veneered anterior zirconia crowns using normal and Weibull distributions of complete and censored data. Standardized zirconia frameworks for maxillary canines were milled using a CAD/CAM system and randomly divided into 3 groups (N=90, n=30 per group). They were veneered with three veneering ceramics, namely GC Initial ZR, Vita VM9 and IPS e.max Ceram, using the layering technique. The crowns were cemented with glass ionomer cement on metal abutments. The specimens were then loaded to fracture (1 mm/min) in a Universal Testing Machine. The data were analyzed using the classical method (normal data distribution (μ, σ); Levene test and one-way ANOVA) and according to Weibull statistics (s, m). In addition, fracture load results were analyzed depending on complete and censored failure types (chipping only vs. total fracture together with chipping). When computed with complete data, significantly higher mean fracture loads (N) were observed for GC Initial ZR (μ=978, σ=157; s=1043, m=7.2) and VITA VM9 (μ=1074, σ=179; s=1139, m=7.8) than for IPS e.max Ceram (μ=798, σ=174; s=859, m=5.8) (p<0.05) by classical and Weibull statistics, respectively. When total fracture was treated as censored (chipping as the failure event), IPS e.max Ceram presented the lowest fracture load for chipping with both the classical distribution (μ=790, σ=160) and Weibull statistics (s=836, m=6.5). When total fracture together with chipping (classical distribution) was considered as failure, IPS e.max Ceram (μ=1054, σ=110) did not show a significantly different fracture load compared to the other groups (GC Initial ZR: μ=1039, σ=152; VITA VM9: μ=1170, σ=166). According to the Weibull-distributed data, VITA VM9 showed a significantly higher fracture load (s=1228, m=9.4) than the other groups. Both the classical distribution and Weibull statistics for complete data yielded similar outcomes. Censored data analysis of all ceramic systems based on failure types is essential and brings additional information regarding the susceptibility to chipping or total fracture. Copyright © 2011 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.

  16. Emulsion chamber observations and interpretation (HE 3)

    NASA Technical Reports Server (NTRS)

    Shibata, M.

    1986-01-01

    Experimental results from Emulsion Chamber (EC) experiments at mountain altitudes, or at higher levels using flying carriers, are examined. The physical interest in this field is concentrated on the strong interaction in the very high energy region exceeding accelerator energies, and also on the primary cosmic ray intensity and its chemical composition. Those experiments which observed cosmic ray secondaries gave information on high energy interaction characteristics through analyses of secondary spectra, gamma-hadron families and C-jets (direct observation of particle production occurring at the carbon target). Problems of scaling violation in the fragmentation region, interaction cross sections, transverse momentum of produced secondaries, and some peculiar features of exotic events are discussed.

  17. Observation of pick-up ions in the solar wind: Evidence for the source of the anomalous cosmic ray component?

    NASA Technical Reports Server (NTRS)

    Hovestadt, D.; Moebius, E.; Klecker, B.; Scholer, M.; Gloeckler, G.; Ipavich, F. M.

    1985-01-01

    Singly ionized energetic helium has been observed in the solar wind using the time-of-flight spectrometer SULEICA on the AMPTE/IRM satellite between September and December 1984. The energy density spectrum shows a sharp cutoff that is strongly correlated with four times the solar wind bulk energy. The absolute flux of the He(+) ions, about 10,000 ions/sq cm-s, is present independent of the IPL magnetic field orientation. The most likely source is the neutral helium of the interstellar wind, which is ionized by solar UV radiation. It is suggested that these particles represent the source of the anomalous cosmic ray component.

  18. Starobinsky-like inflation, supercosmology and neutrino masses in no-scale flipped SU(5)

    NASA Astrophysics Data System (ADS)

    Ellis, John; Garcia, Marcos A. G.; Nagata, Natsumi; Nanopoulos, Dimitri V.; Olive, Keith A.

    2017-07-01

    We embed a flipped SU(5) × U(1) GUT model in a no-scale supergravity framework, and discuss its predictions for cosmic microwave background observables, which are similar to those of the Starobinsky model of inflation. Measurements of the tilt in the spectrum of scalar perturbations in the cosmic microwave background, ns, constrain significantly the model parameters. We also discuss the model's predictions for neutrino masses, and pay particular attention to the behaviours of scalar fields during and after inflation, reheating and the GUT phase transition. We argue in favor of strong reheating in order to avoid excessive entropy production which could dilute the generated baryon asymmetry.

  19. Scientific Goals and Objectives of the Probe of Inflation and Cosmic Origins

    NASA Astrophysics Data System (ADS)

    Wen, Qi; Hanany, Shaul; Young, Karl S.; PICO Team

    2018-01-01

    The Probe of Inflation and Cosmic Origins (PICO) is a space mission concept that is being studied in preparation for the 2020 Astronomy and Astrophysics Decadal Survey. PICO will conduct a polarimetric full sky survey in 21 frequency bands between 20 and 800 GHz with 70 times the sensitivity of the Planck satellite. Using the data from 8 redundant full sky surveys PICO will detect or place new limits on the energy scale of inflation and the physics of quantum gravity; it will determine the effective number of light degrees of freedom in the early universe and the sum of neutrino masses; it will measure the optical depth to reionization up to cosmic variance limits; it will provide a full sky catalog of thousands of strongly lensed high-z infrared sources, of proto clusters, and of low-z low-mass galaxies extending our understanding of structure formation to populations not yet observed; it will find tens of thousands of new clusters across cosmic time, information that will further constrain cosmological parameters; and it will make sensitive maps of the galactic magnetic field, which will clarify its role in the process of star formation.We present an overview of the mission’s scientific goals, its design, and the current status of the study.

  20. Cosmology of a Friedmann-Lemaître-Robertson-Walker 3-brane, late-time cosmic acceleration, and the cosmic coincidence.

    PubMed

    Doolin, Ciaran; Neupane, Ishwaree P

    2013-04-05

    A late epoch cosmic acceleration may be naturally entangled with cosmic coincidence: the observation that at the onset of acceleration the vacuum energy density fraction nearly coincides with the matter density fraction. In this Letter we show that this is indeed the case with the cosmology of a Friedmann-Lemaître-Robertson-Walker (FLRW) 3-brane in a five-dimensional anti-de Sitter spacetime. We derive the four-dimensional effective action on a FLRW 3-brane, from which we obtain a mass-reduction formula, namely, M_P² = ρ_b/|Λ_5|, where M_P is the effective (normalized) Planck mass, Λ_5 is the five-dimensional cosmological constant, and ρ_b is the sum of the 3-brane tension V and the matter density ρ. Although the range of variation in ρ_b is strongly constrained, the big bang nucleosynthesis bound on the time variation of the effective Newton constant G_N = (8πM_P²)⁻¹ is satisfied when the ratio V/ρ ≳ O(10²) on cosmological scales. The same bound leads to an effective equation of state close to -1 at late epochs, in accordance with astrophysical and cosmological observations.

  1. Strong interactions in air showers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dietrich, Dennis D. (Institut für Theoretische Physik, Goethe-Universität, Max-von-Laue-Straße, Frankfurt am Main)

    2015-03-02

    We study the role new gauge interactions in extensions of the standard model play in air showers initiated by ultrahigh-energy cosmic rays. Hadron-hadron events remain dominated by quantum chromodynamics, while projectiles and/or targets from beyond the standard model permit us to see qualitative differences arising due to the new interactions.

  2. The Constellation-X Mission: Science Prospects and Technology Challenges

    NASA Technical Reports Server (NTRS)

    Petre, Robert

    2007-01-01

    This talk will describe the Constellation-X mission. It will present the key scientific goals, relating to strong gravity, dark energy, ultra-dense matter and cosmic structure. The mission configuration will be described. Emphasis will be placed on the design and anticipated implementation of the X-ray mirror system.

  3. Modelling Gravitational Radiation from Binary Black Holes

    NASA Technical Reports Server (NTRS)

    Centrella, Joan

    2006-01-01

    The final merger and coalescence of binary black holes is a key source of strong gravitational waves for the LISA mission. Observing these systems will allow us to probe the formation of cosmic structure to high redshifts and test general relativity directly in the strong-field, dynamical regime. Recently, major breakthroughs have been made in modeling black hole mergers using numerical relativity. This talk will survey these exciting developments, focusing on the gravitational waveforms and the recoil kicks produced from non-equal mass mergers.

  4. Hazard Function Estimation with Cause-of-Death Data Missing at Random.

    PubMed

    Wang, Qihua; Dinse, Gregg E; Liu, Chunling

    2012-04-01

    Hazard function estimation is an important part of survival analysis. Interest often centers on estimating the hazard function associated with a particular cause of death. We propose three nonparametric kernel estimators for the hazard function, all of which are appropriate when death times are subject to random censorship and censoring indicators can be missing at random. Specifically, we present a regression surrogate estimator, an imputation estimator, and an inverse probability weighted estimator. All three estimators are uniformly strongly consistent and asymptotically normal. We derive asymptotic representations of the mean squared error and the mean integrated squared error for these estimators and we discuss a data-driven bandwidth selection method. A simulation study, conducted to assess finite sample behavior, demonstrates that the proposed hazard estimators perform relatively well. We illustrate our methods with an analysis of some vascular disease data.

  5. Application of AFINCH as a tool for evaluating the effects of streamflow-gaging-network size and composition on the accuracy and precision of streamflow estimates at ungaged locations in the southeast Lake Michigan hydrologic subregion

    USGS Publications Warehouse

    Koltun, G.F.; Holtschlag, David J.

    2010-01-01

    Bootstrapping techniques employing random subsampling were used with the AFINCH (Analysis of Flows In Networks of CHannels) model to gain insights into the effects of variation in streamflow-gaging-network size and composition on the accuracy and precision of streamflow estimates at ungaged locations in the 0405 (Southeast Lake Michigan) hydrologic subregion. AFINCH uses stepwise-regression techniques to estimate monthly water yields from catchments based on geospatial-climate and land-cover data in combination with available streamflow and water-use data. Calculations are performed on a hydrologic-subregion scale for each catchment and stream reach contained in a National Hydrography Dataset Plus (NHDPlus) subregion. Water yields from contributing catchments are multiplied by catchment areas and resulting flow values are accumulated to compute streamflows in stream reaches, which are referred to as flow lines. AFINCH imposes constraints on water yields to ensure that observed streamflows are conserved at gaged locations. Data from the 0405 hydrologic subregion (referred to as Southeast Lake Michigan) were used for the analyses. Daily streamflow data were measured in the subregion for 1 or more years at a total of 75 streamflow-gaging stations during the analysis period, which spanned water years 1971–2003. The number of streamflow gages in operation each year during the analysis period ranged from 42 to 56 and averaged 47. Six sets (one set for each censoring level), each composed of 30 random subsets of the 75 streamflow gages, were created by censoring (removing) approximately 10, 20, 30, 40, 50, and 75 percent of the streamflow gages (the actual percentage of operating streamflow gages censored for each set varied from year to year, and within the year from subset to subset, but averaged approximately the indicated percentages). Streamflow estimates for six flow lines each were aggregated by censoring level, and results were analyzed to assess (a) how the size and composition of the streamflow-gaging network affected the average apparent errors and variability of the estimated flows and (b) whether results for certain months were more variable than for others. The six flow lines were categorized into one of three types depending upon their network topology and position relative to operating streamflow-gaging stations. Statistical analysis of the model results indicates that (1) less precise (that is, more variable) estimates resulted from smaller streamflow-gaging networks as compared to larger streamflow-gaging networks, (2) precision of AFINCH flow estimates at an ungaged flow line is improved by operation of one or more streamflow gages upstream and (or) downstream in the enclosing basin, (3) no consistent seasonal trend in estimate variability was evident, and (4) flow lines from ungaged basins appeared to exhibit the smallest absolute apparent percent errors (APEs) and smallest changes in average APE as a function of increasing censoring level. The counterintuitive results described in item (4) above likely reflect both the nature of the base-streamflow estimate from which the errors were computed and insensitivity in the average model-derived estimates to changes in the streamflow-gaging-network size and composition. Another analysis demonstrated that errors for flow lines in ungaged basins have the potential to be much larger than indicated by their APEs if measured relative to their true (but unknown) flows.
“Missing gage” analyses, based on examination of censoring subset results where the streamflow gage of interest was omitted from the calibration data set, were done to better understand the true error characteristics for ungaged flow lines as a function of network size. Results examined for 2 water years indicated that the probability of computing a monthly streamflow estimate within 10 percent of the true value with AFINCH decreased from greater than 0.9 at about a 10-percent network-censoring level to less than 0.6 as the censoring level approached 75 percent. In addition, estimates for typically dry months tended to be characterized by larger percent errors than typically wetter months.

  6. A Synthesis Of Cosmic X-ray And Infrared Background

    NASA Astrophysics Data System (ADS)

    Shi, Yong; Helou, G.; Armus, L.; Stierwalt, S.

    2012-01-01

    We present a synthesis model of the cosmic IR and X-ray background, with the goal of deriving a complete census of the cosmic evolution of star formation (SF) and black-hole (BH) growth by complementing the respective advantages of X-ray and IR surveys. By assuming that individual galaxies are experiencing both SF and BH accretion, our model decomposes the total IR LF into SF and BH components while taking into account the luminosity-dependent SED of the SF component and its dispersion, and the extinction-dependent SED of the BH component. The best-fit parameters are derived by fitting the number counts and redshift distributions in the X-ray (both hard and soft bands) and from the mid-IR to the submm (IRAS, Spitzer, Herschel, SCUBA, AzTEC and MAMBO). Based on the fit result, our model provides a series of predictions on galaxy evolution and black-hole growth. For the evolution of infrared galaxies, the model predicts that the total infrared luminosity function is best described through evolution in both luminosity and density. For the evolution of AGN populations, the model predicts that the evolution of the X-ray LF is also luminosity- and density-dependent, that the type-1/type-2 AGN fraction is a function of both luminosity and redshift, and that the Compton-thick AGN number density evolves strongly with redshift, contributing about 20% to the total cosmic BH growth. For BH growth in IR galaxies, the model predicts that the majority of BH growth at z>1 occurs in infrared-luminous galaxies and that the AGN fraction in an IR survey is a strong function of the survey depth, ranging from >50% at the bright end to below 10% at the faint end. We also evaluate various AGN selection techniques at X-ray and IR wavelengths and offer predictions for future X-ray and IR missions.

  7. More realistic power estimation for new user, active comparator studies: an empirical example.

    PubMed

    Gokhale, Mugdha; Buse, John B; Pate, Virginia; Marquis, M Alison; Stürmer, Til

    2016-04-01

    Pharmacoepidemiologic studies are often expected to be sufficiently powered to study rare outcomes, but there is sequential loss of power with implementation of study design options minimizing bias. We illustrate this using a study comparing pancreatic cancer incidence after initiating dipeptidyl-peptidase-4 inhibitors (DPP-4i) versus thiazolidinediones or sulfonylureas. We identified Medicare beneficiaries with at least one claim of DPP-4i or comparators during 2007-2009 and then applied the following steps: (i) exclude prevalent users, (ii) require a second prescription of the same drug, (iii) exclude prevalent cancers, (iv) exclude patients aged <66 years, and (v) censor for treatment changes during follow-up. Power to detect hazard ratios ≥ 2.0 (an effect measure whose detection is strongly driven by the number of events), estimated after step 5, was compared with the naïve power estimated prior to step 1. There were 19,388 and 28,846 DPP-4i and thiazolidinedione initiators during 2007-2009. The number of drug initiators dropped most after requiring a second prescription, outcomes dropped most after excluding patients with prevalent cancer, and person-time dropped most after requiring a second prescription and as-treated censoring. The naïve power (>99%) was considerably higher than the power obtained after the final step (~75%). In designing new-user active-comparator studies, one should be mindful of how steps minimizing bias affect sample size, number of outcomes, and person-time. While actual numbers will depend on specific settings, application of generic losses in percentages will improve estimates of power compared with the naïve approach, which mostly ignores the steps taken to increase validity. Copyright © 2015 John Wiley & Sons, Ltd.
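
    Since the power in such a design is driven almost entirely by the number of outcome events that survive each restriction, the effect of the sequential steps can be sketched with Schoenfeld's approximation for a two-group survival comparison. The event counts and percentage losses below are invented for illustration, not taken from the study:

    ```python
    import numpy as np
    from scipy.stats import norm

    def schoenfeld_power(n_events, log_hr, p_exposed=0.4, alpha=0.05):
        """Approximate power of a two-group Cox/log-rank test (Schoenfeld)."""
        z_crit = norm.ppf(1.0 - alpha / 2.0)
        return norm.cdf(np.sqrt(n_events * p_exposed * (1.0 - p_exposed))
                        * abs(log_hr) - z_crit)

    # Hypothetical design: 120 outcome events before restrictions, then generic
    # percentage losses for each bias-reducing step (numbers are invented).
    events = 120.0
    steps = [("naive", 0.00), ("new users only", 0.30), ("2nd Rx required", 0.15),
             ("prevalent cancer excluded", 0.10), ("as-treated censoring", 0.20)]
    for name, loss in steps:
        events *= 1.0 - loss
        power = schoenfeld_power(events, np.log(2.0))
        print(f"{name:26s} ~{events:5.0f} events  power {power:.2f}")
    ```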

  8. Rejoice in the hubris: useful things biologists could do for physicists

    NASA Astrophysics Data System (ADS)

    Austin, Robert H.

    2014-10-01

    Political correctness urges us to state how wonderful it is to work with biologists and how, just as the lion will someday lie down with the lamb, so will interdisciplinary work, where biologists and physicists are mixed together in light, airy buildings designed to force socialization, give rise to wonderful new science. But it has been said that the only drive in human nature stronger than the sex drive is the drive to censor and suppress, and so I claim that it is OK for physicists and biologists to maintain a wary distance from each other, so that neither one censors or suppresses the wild ideas of the other.

  9. Survival Data and Regression Models

    NASA Astrophysics Data System (ADS)

    Grégoire, G.

    2014-12-01

    We start this chapter by introducing some basic elements for the analysis of censored survival data. Then we focus on right censored data and develop two types of regression models. The first one concerns the so-called accelerated failure time (AFT) models, which are parametric models where a function of a parameter depends linearly on the covariables. The second one is a semiparametric model, where the covariables enter in a multiplicative form in the expression of the hazard rate function. The main statistical tool for analysing these regression models is the maximum likelihood methodology and, although we recall some essential results of ML theory, we refer to the chapter "Logistic Regression" for a more detailed presentation.

  10. Statistical methods for astronomical data with upper limits. I - Univariate distributions

    NASA Technical Reports Server (NTRS)

    Feigelson, E. D.; Nelson, P. I.

    1985-01-01

    The statistical treatment of univariate censored data is discussed. A heuristic derivation of the Kaplan-Meier maximum-likelihood estimator from first principles is presented which results in an expression amenable to analytic error analysis. Methods for comparing two or more censored samples are given along with simple computational examples, stressing the fact that most astronomical problems involve upper limits while the standard mathematical methods require lower limits. The application of univariate survival analysis to six data sets in the recent astrophysical literature is described, and various aspects of the use of survival analysis in astronomy, such as the limitations of various two-sample tests and the role of parametric modelling, are discussed.
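
    A point worth making concrete is the reflection trick implied here: astronomical upper limits are left-censored, so one reverses the axis to obtain right-censored data that the Kaplan-Meier product-limit estimator can digest. A minimal self-contained sketch (the luminosities are invented; a real analysis would use a survival package):

    ```python
    import numpy as np

    def kaplan_meier(t, event):
        """Product-limit estimator for right-censored data: returns event times
        and the survival estimate S(t) just after each event."""
        order = np.argsort(t)
        t, event = t[order], event[order]
        S, times, surv = 1.0, [], []
        n = len(t)
        for i in range(n):
            if event[i]:                     # an exact (uncensored) value
                S *= 1.0 - 1.0 / (n - i)     # n - i points still at risk
                times.append(t[i]); surv.append(S)
        return np.array(times), np.array(surv)

    # Luminosities; detected=False marks an upper limit (left-censored value)
    lum = np.array([3.1, 2.4, 5.0, 1.8, 4.2, 2.9])
    detected = np.array([True, False, True, True, False, True])

    # Reflect about a constant M so upper limits become right-censored points
    M = lum.max() + 1.0
    t_flip, S_flip = kaplan_meier(M - lum, detected)
    # S_flip estimates P(M - L > t), i.e. the CDF of L evaluated at M - t
    for ell, F in zip((M - t_flip)[::-1], S_flip[::-1]):
        print(f"P(L < {ell:.1f}) = {F:.2f}")
    ```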

  11. Rejoice in the hubris: useful things biologists could do for physicists.

    PubMed

    Austin, Robert H

    2014-10-08

    Political correctness urges us to state how wonderful it is to work with biologists and how, just as the lion will someday lie down with the lamb, so will interdisciplinary work, where biologists and physicists are mixed together in light, airy buildings designed to force socialization, give rise to wonderful new science. But it has been said that the only drive in human nature stronger than the sex drive is the drive to censor and suppress, and so I claim that it is OK for physicists and biologists to maintain a wary distance from each other, so that neither one censors or suppresses the wild ideas of the other.

  12. A machine learning approach to galaxy-LSS classification - I. Imprints on halo merger trees

    NASA Astrophysics Data System (ADS)

    Hui, Jianan; Aragon, Miguel; Cui, Xinping; Flegal, James M.

    2018-04-01

    The cosmic web plays a major role in the formation and evolution of galaxies and defines, to a large extent, their properties. However, the relation between galaxies and environment is still not well understood. Here, we present a machine learning approach to study imprints of environmental effects on the mass assembly of haloes. We present a galaxy-LSS machine learning classifier based on galaxy properties sensitive to the environment. We then use the classifier to assess the relevance of each property. Correlations between galaxy properties and their cosmic environment can be used to predict galaxy membership in void/wall or filament/cluster environments with an accuracy of 93 per cent. Our study unveils environmental information encoded in properties of haloes not normally considered directly dependent on the cosmic environment, such as merger history and complexity. Understanding the physical mechanism by which the cosmic web is imprinted in a halo can lead to significant improvements in galaxy formation models. This is accomplished by extracting features from galaxy properties and merger trees, computing feature scores for each feature, and then applying a support vector machine (SVM) to different feature sets. To this end, we have discovered that the shape and depth of the merger tree, formation time, and density of the galaxy are strongly associated with the cosmic environment. We describe a significant improvement in the original classification algorithm by performing LU decomposition of the distance matrix computed from the feature vectors and then using the output of the decomposition as input vectors for the SVM.

  13. Global morphology of ionospheric F-layer scintillations using FS3/COSMIC GPS radio occultation data

    NASA Astrophysics Data System (ADS)

    Tsai, Lung-Chih; Su, Shin-Yi

    2016-07-01

    The FormoSat-3/Constellation Observing System for Meteorology, Ionosphere and Climate (FS3/COSMIC) has proven a successful mission for profiling and modeling ionospheric electron density by the radio occultation (RO) technique. In this study we report FS3/COSMIC limb-viewing observations of GPS L-band scintillation since mid-2006 and use them to study F-layer irregularity morphology. FS3/COSMIC has generally performed >1000 ionospheric RO observations per day, and most of these observations provide limb-viewing profiles of the S4 scintillation index at dual L-band frequencies. Only a few percent of FS3/COSMIC RO observations have S4 values >0.08 on average. However, seven areas have been identified as having a much higher percentage of strong L-band RO scintillation: the Central Pacific Area (-20°~20° dip latitude, 160°E~130°W), South American Area (-20°~20° dip latitude, 100°W~30°W), African Area (-20°~20° dip latitude, 30°W~50°E), European Area (30°~55°N, 0°~55°E), Japan Sea Area (35°~55°N, 120°~150°E), Arctic Area (>65° dip latitude), and Antarctic Area (<-65° dip latitude). Over most of the last sunspot cycle, from mid-2006 to the end of 2014, the climatology of the scintillations, namely their variations with identified area, season, local time, magnetic activity, and solar activity, has been documented.

  14. On the connectivity of the cosmic web: theory and implications for cosmology and galaxy formation

    NASA Astrophysics Data System (ADS)

    Codis, Sandrine; Pogosyan, Dmitri; Pichon, Christophe

    2018-06-01

    Cosmic connectivity and multiplicity, i.e. the number of filaments globally or locally connected to a given cluster is a natural probe of the growth of structure and in particular of the nature of dark energy. It is also a critical ingredient driving the assembly history of galaxies as it controls mass and angular momentum accretion. The connectivity of the cosmic web is investigated here via the persistent skeleton. This tool identifies topologically the ridges of the cosmic landscape which allows us to investigate how the nodes of the cosmic web are connected together. When applied to Gaussian random fields corresponding to the high redshift universe, it is found that on average the nodes are connected to exactly κ = 4 neighbours in two dimensions and ˜6.1 in three dimensions. Investigating spatial dimensions up to d = 6, typical departures from a cubic lattice κ = 2d are shown to scale like the power 7/4 of the dimension. These numbers strongly depend on the height of the peaks: the higher the peak the larger the connectivity. Predictions from first principles based on peak theory are shown to reproduce well the connectivity and multiplicity of Gaussian random fields and cosmological simulations. As an illustration, connectivity is quantified in galaxy lensing convergence maps and large dark haloes catalogues. As a function of redshift and scale the mean connectivity decreases in a cosmology-dependent way. As a function of halo mass it scales like 10/3 times the log of the mass. Implications on galactic scales are discussed.

  15. Cosmic microwave background theory

    PubMed Central

    Bond, J. Richard

    1998-01-01

    A long-standing goal of theorists has been to constrain cosmological parameters that define the structure formation theory from cosmic microwave background (CMB) anisotropy experiments and large-scale structure (LSS) observations. The status and future promise of this enterprise is described. Current band-powers in ℓ-space are consistent with a ΔT flat in frequency and broadly follow inflation-based expectations. That the levels are ∼(10⁻⁵)² provides strong support for the gravitational instability theory, while the Far Infrared Absolute Spectrophotometer (FIRAS) constraints on energy injection rule out cosmic explosions as a dominant source of LSS. Band-powers at ℓ ≳ 100 suggest that the universe could not have re-ionized too early. That the LSS of Cosmic Background Explorer (COBE)-normalized fluctuations comes out right provides encouraging support that the initial fluctuation spectrum was not far off the scale invariant form that inflation models prefer: e.g., for tilted Λ cold dark matter sequences of fixed 13-Gyr age (with the Hubble constant H₀ marginalized), n_s = 1.17 ± 0.3 for Differential Microwave Radiometer (DMR) only; 1.15 ± 0.08 for DMR plus the SK95 experiment; 1.00 ± 0.04 for DMR plus all smaller angle experiments; 1.00 ± 0.05 when LSS constraints are included as well. The CMB alone currently gives weak constraints on Λ and moderate constraints on Ω_tot, but theoretical forecasts of future long duration balloon and satellite experiments are shown which predict percent-level accuracy among a large fraction of the 10+ parameters characterizing the cosmic structure formation theory, at least if it is an inflation variant. PMID:9419321

  16. Black hole remnants and the information loss paradox

    NASA Astrophysics Data System (ADS)

    Chen, P.; Ong, Y. C.; Yeom, D.-h.

    2015-11-01

    Forty years after the discovery of Hawking radiation, its exact nature remains elusive. If Hawking radiation does not carry any information out from the ever shrinking black hole, it seems that unitarity is violated once the black hole completely evaporates. On the other hand, attempts to recover information via quantum entanglement lead to the firewall controversy. Amid the confusions, the possibility that black hole evaporation stops with a "remnant" has remained unpopular and is often dismissed due to some "undesired properties" of such an object. Nevertheless, as in any scientific debate, the pros and cons of any proposal must be carefully scrutinized. We fill in the void of the literature by providing a timely review of various types of black hole remnants, and provide some new thoughts regarding the challenges that black hole remnants face in the context of the information loss paradox and its latest incarnation, namely the firewall controversy. The importance of understanding the role of curvature singularity is also emphasized, after all there remains a possibility that the singularity cannot be cured even by quantum gravity. In this context a black hole remnant conveniently serves as a cosmic censor. We conclude that a remnant remains a possible end state of Hawking evaporation, and if it contains large interior geometry, may help to ameliorate the information loss paradox and the firewall controversy. We hope that this will raise some interests in the community to investigate remnants more critically but also more thoroughly.

  17. Strong earthquakes, novae and cosmic ray environment

    NASA Technical Reports Server (NTRS)

    Yu, Z. D.

    1985-01-01

    Observations about the relationship between seismic activity and astronomical phenomena are discussed. First, after investigating the seismic data (magnitude 7.0 and over) with the method of superposed epochs, it is found that world seismicity evidently increased after the occurrence of novae with apparent magnitude brighter than 2.2. Second, a great many earthquakes of magnitude 7.0 and over occurred in the 13th month after two of the largest ground level solar cosmic ray events (GLEs). The three periods of heightened global seismic activity in 1918-1965 can be related to these, and it is suggested that, given information on a large GLE or a bright nova, predictions of the times of globally intense seismic activity can be made.

  18. Ion implantation effects in 'cosmic' dust grains

    NASA Technical Reports Server (NTRS)

    Bibring, J. P.; Langevin, Y.; Maurette, M.; Meunier, R.; Jouffrey, B.; Jouret, C.

    1974-01-01

    Cosmic dust grains, whatever their origin may be, have probably suffered a complex sequence of events including exposure to high doses of low-energy nuclear particles and cycles of turbulent motions. High-voltage electron microscope observations of micron-sized grains either naturally exposed to space environmental parameters on the lunar surface or artificially subjected to space simulated conditions strongly suggest that such events could drastically modify the mineralogical composition of the grains and considerably ease their aggregation during collisions at low speeds. Furthermore, combined mass spectrometer and ionic analyzer studies show that small carbon compounds can be both synthesized during the implantation of a mixture of low-energy D, C, N ions in various solids and released in space by ion sputtering.

  19. Starobinsky-like inflation, supercosmology and neutrino masses in no-scale flipped SU(5)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ellis, John; Garcia, Marcos A.G.; Nagata, Natsumi

    2017-07-01

    We embed a flipped SU(5) × U(1) GUT model in a no-scale supergravity framework, and discuss its predictions for cosmic microwave background observables, which are similar to those of the Starobinsky model of inflation. Measurements of the tilt in the spectrum of scalar perturbations in the cosmic microwave background, n_s, constrain significantly the model parameters. We also discuss the model's predictions for neutrino masses, and pay particular attention to the behaviours of scalar fields during and after inflation, reheating and the GUT phase transition. We argue in favor of strong reheating in order to avoid excessive entropy production which could dilute the generated baryon asymmetry.

  20. The International X-ray Observatory: Science Prospects and Technology Challenges

    NASA Technical Reports Server (NTRS)

    Petre, Robert

    2008-01-01

    This talk will describe the International X-ray Observatory (IXO) mission. It will present the key scientific goals, relating to strong gravity, cosmic feedback, and the life cycle of matter. The mission configuration will be described. Emphasis will be placed on the design and anticipated implementation of the X-ray mirror system.

  1. Likelihoods for fixed rank nomination networks

    PubMed Central

    HOFF, PETER; FOSDICK, BAILEY; VOLFOVSKY, ALEX; STOVEL, KATHERINE

    2014-01-01

    Many studies that gather social network data use survey methods that lead to censored, missing, or otherwise incomplete information. For example, the popular fixed rank nomination (FRN) scheme, often used in studies of schools and businesses, asks study participants to nominate and rank at most a small number of contacts or friends, leaving the existence of other relations uncertain. However, most statistical models are formulated in terms of completely observed binary networks. Statistical analyses of FRN data with such models ignore the censored and ranked nature of the data and could potentially result in misleading statistical inference. To investigate this possibility, we compare Bayesian parameter estimates obtained from a likelihood for complete binary networks with those obtained from likelihoods that are derived from the FRN scheme, and therefore accommodate the ranked and censored nature of the data. We show analytically and via simulation that the binary likelihood can provide misleading inference, particularly for certain model parameters that relate network ties to characteristics of individuals and pairs of individuals. We also compare these different likelihoods in a data analysis of several adolescent social networks. For some of these networks, the parameter estimates from the binary and FRN likelihoods lead to different conclusions, indicating the importance of analyzing FRN data with a method that accounts for the FRN survey design. PMID:25110586

  2. Inference for the effect of treatment on survival probability in randomized trials with noncompliance and administrative censoring.

    PubMed

    Nie, Hui; Cheng, Jing; Small, Dylan S

    2011-12-01

    In many clinical studies with a survival outcome, administrative censoring occurs when follow-up ends at a prespecified date and many subjects are still alive. An additional complication in some trials is that there is noncompliance with the assigned treatment. For this setting, we study the estimation of the causal effect of treatment on survival probability up to a given time point among those subjects who would comply with the assignment to both treatment and control. We first discuss the standard instrumental variable (IV) method for survival outcomes and parametric maximum likelihood methods, and then develop an efficient plug-in nonparametric empirical maximum likelihood estimation (PNEMLE) approach. The PNEMLE method does not make any assumptions on outcome distributions, and makes use of the mixture structure in the data to gain efficiency over the standard IV method. Theoretical results of the PNEMLE are derived and the method is illustrated by an analysis of data from a breast cancer screening trial. From our limited mortality analysis with administrative censoring times 10 years into the follow-up, we find a significant benefit of screening is present after 4 years (at the 5% level) and this persists at 10 years follow-up. © 2011, The International Biometric Society.

  3. Accounting for undetected compounds in statistical analyses of mass spectrometry 'omic studies.

    PubMed

    Taylor, Sandra L; Leiserowitz, Gary S; Kim, Kyoungmi

    2013-12-01

    Mass spectrometry is an important high-throughput technique for profiling small molecular compounds in biological samples and is widely used to identify potential diagnostic and prognostic compounds associated with disease. Commonly, the data generated by mass spectrometry contain many missing values, arising when a compound is absent from a sample or is present at a concentration below the detection limit. Several strategies are available for statistically analyzing data with missing values. The accelerated failure time (AFT) model assumes all missing values result from censoring below a detection limit. Under a mixture model, missing values can result from a combination of censoring and the absence of a compound. We compare the power and estimation of a mixture model to those of an AFT model. Based on simulated data, we found the AFT model to have greater power to detect differences in means and point mass proportions between groups. However, the AFT model yielded biased estimates, with the bias increasing as the proportion of observations in the point mass increased, while estimates were unbiased with the mixture model except when all missing observations came from censoring. These findings suggest using the AFT model for hypothesis testing and the mixture model for estimation. We demonstrated this approach through application to glycomics data of serum samples from women with ovarian cancer and matched controls.
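
    The contrast between the two models is easiest to see in their likelihood contributions for a missing value: the AFT model charges Phi((log d - mu)/sigma) for censoring below a detection limit d, while the mixture model charges pi + (1 - pi) * Phi((log d - mu)/sigma), with pi the probability that the compound is truly absent. A sketch of both negative log-likelihoods for a single compound, assuming lognormal abundances and a known detection limit (all data invented):

    ```python
    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import minimize

    def nll_aft(params, x, d):
        """AFT view: every missing value is censored below the detection limit d."""
        mu, log_sigma = params
        sigma = np.exp(log_sigma)
        obs = ~np.isnan(x)
        ll = norm.logpdf(np.log(x[obs]), mu, sigma).sum()
        ll += (~obs).sum() * norm.logcdf((np.log(d) - mu) / sigma)
        return -ll

    def nll_mixture(params, x, d):
        """Mixture view: missing = absent with probability pi, else censored."""
        mu, log_sigma, logit_pi = params
        sigma = np.exp(log_sigma)
        pi = 1.0 / (1.0 + np.exp(-logit_pi))
        obs = ~np.isnan(x)
        ll = (np.log1p(-pi) + norm.logpdf(np.log(x[obs]), mu, sigma)).sum()
        ll += (~obs).sum() * np.log(pi + (1.0 - pi)
                                    * norm.cdf((np.log(d) - mu) / sigma))
        return -ll

    x = np.array([2.3, 1.1, np.nan, 0.8, np.nan, 3.0, 1.7, np.nan])  # toy data
    d = 0.5                                                          # known limit
    for nll, x0 in [(nll_aft, [0.0, 0.0]), (nll_mixture, [0.0, 0.0, 0.0])]:
        res = minimize(nll, x0, args=(x, d), method="Nelder-Mead")
        print(nll.__name__, res.x.round(2))
    ```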

  4. Bias correction in the hierarchical likelihood approach to the analysis of multivariate survival data.

    PubMed

    Jeon, Jihyoun; Hsu, Li; Gorfine, Malka

    2012-07-01

    Frailty models are useful for measuring unobserved heterogeneity in risk of failures across clusters, providing cluster-specific risk prediction. In a frailty model, the latent frailties shared by members within a cluster are assumed to act multiplicatively on the hazard function. In order to obtain parameter and frailty variate estimates, we consider the hierarchical likelihood (H-likelihood) approach (Ha, Lee and Song, 2001. Hierarchical-likelihood approach for frailty models. Biometrika 88, 233-243) in which the latent frailties are treated as "parameters" and estimated jointly with other parameters of interest. We find that the H-likelihood estimators perform well when the censoring rate is low, however, they are substantially biased when the censoring rate is moderate to high. In this paper, we propose a simple and easy-to-implement bias correction method for the H-likelihood estimators under a shared frailty model. We also extend the method to a multivariate frailty model, which incorporates complex dependence structure within clusters. We conduct an extensive simulation study and show that the proposed approach performs very well for censoring rates as high as 80%. We also illustrate the method with a breast cancer data set. Since the H-likelihood is the same as the penalized likelihood function, the proposed bias correction method is also applicable to the penalized likelihood estimators.

  5. Exploration of the use of Bayesian modeling of gradients for censored spatiotemporal data from the Deepwater Horizon oil spill

    PubMed Central

    Quick, Harrison; Groth, Caroline; Banerjee, Sudipto; Carlin, Bradley P.; Stenzel, Mark R.; Stewart, Patricia A.; Sandler, Dale P.; Engel, Lawrence S.; Kwok, Richard K.

    2014-01-01

    This paper develops a hierarchical framework for identifying spatiotemporal patterns in data with a high degree of censoring using the gradient process. To do this, we impute censored values using a sampling-based inverse CDF method within our Markov chain Monte Carlo algorithm, thereby avoiding burdensome integration and facilitating efficient estimation of other model parameters. We illustrate use of our methodology using a simulated data example, and uncover the danger of simply substituting a space- and time-constant function of the level of detection for all missing values. We then fit our model to area measurement data of volatile organic compounds (VOC) air concentrations collected on vessels supporting the response and clean-up efforts of the Deepwater Horizon oil release that occurred starting April 20, 2010. These data contained a high percentage of observations below the detectable limits of the measuring instrument. Despite this, we were still able to make some interesting discoveries, including elevated levels of VOC near the site of the oil well on June 26th. Using the results from this preliminary analysis, we hope to inform future research on the Deepwater Horizon study, including the use of gradient methods for assigning workers to exposure categories. PMID:25599019
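
    The sampling-based inverse-CDF imputation step generalizes beyond this application; here is a minimal sketch assuming a lognormal concentration model, with made-up parameter values standing in for the current MCMC state.

```python
# A minimal sketch of the sampling-based inverse-CDF step described above:
# within each MCMC iteration, a value censored below the LOD is replaced
# by a draw from the current model's distribution truncated to (0, LOD].
# A lognormal concentration model and illustrative parameter values
# (standing in for the current MCMC state) are assumed here.
import numpy as np
from scipy import stats

def impute_censored(lod, mu, sigma, rng):
    """Draw a log-concentration from N(mu, sigma^2) truncated at log(LOD)."""
    upper = stats.norm.cdf(np.log(lod), mu, sigma)   # CDF mass below LOD
    u = rng.uniform(0.0, upper)                      # uniform on that mass
    return stats.norm.ppf(u, mu, sigma)              # inverse CDF

rng = np.random.default_rng(3)
draws = [impute_censored(lod=0.5, mu=0.0, sigma=1.0, rng=rng)
         for _ in range(5)]
print(np.exp(draws))   # imputed concentrations, all below LOD = 0.5
```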

  6. The Statistical Value of Raw Fluorescence Signal in Luminex xMAP Based Multiplex Immunoassays

    PubMed Central

    Breen, Edmond J.; Tan, Woei; Khan, Alamgir

    2016-01-01

    Tissue samples (plasma, saliva, serum or urine) from 169 patients classified as either normal or having one of seven possible diseases are analysed across three 96-well plates for the presence of 37 analytes using cytokine inflammation multiplexed immunoassay panels. Censoring of the concentration data caused problems for the analysis of low-abundance analytes. Working with the fluorescence responses instead of the derived concentrations made these low-abundance analytes analysable. Mixed-effects analysis of the resulting fluorescence and concentration responses reveals that the combination of censoring and mapping the fluorescence responses to concentration values through a 5PL curve changed the observed analyte concentrations. Simulation verifies this by showing that observed analyte concentration levels depend on the mean fluorescence response and its distribution. Departures from normality in the fluorescence responses can lead to differences in concentration estimates and unreliable probabilities for treatment effects. When fluorescence responses are normally distributed, fluorescence-based t-tests have greater statistical power than the corresponding concentration-based t-tests. We add evidence that the fluorescence response, unlike concentration values, does not require censoring, and we show that background correction is not required for differential analysis of the fluorescence responses. PMID:27243383
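
    For reference, a sketch of the five-parameter logistic (5PL) curve through which fluorescence is mapped to concentration follows; the parameter values are entirely illustrative, and any given assay software may parameterize the curve differently.

```python
# A sketch of the five-parameter logistic (5PL) standard curve through
# which fluorescence responses are mapped to concentrations; the
# parameter values below are entirely illustrative assumptions.
import numpy as np

def fpl(conc, a, d, c, b, g):
    """5PL response as a function of concentration (a = low asymptote,
    d = high asymptote, c = mid-point, b = slope, g = asymmetry)."""
    return d + (a - d) / (1.0 + (conc / c) ** b) ** g

def fpl_inverse(y, a, d, c, b, g):
    """Back-calculate concentration from a fluorescence response."""
    return c * (((a - d) / (y - d)) ** (1.0 / g) - 1.0) ** (1.0 / b)

pars = dict(a=50.0, d=30000.0, c=120.0, b=1.3, g=0.9)
conc = np.array([1.0, 10.0, 100.0, 1000.0])
mfi = fpl(conc, **pars)
print(mfi)                       # fluorescence at each concentration
print(fpl_inverse(mfi, **pars))  # recovers conc up to float error
```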

  7. Estimating overall exposure effects for the clustered and censored outcome using random effect Tobit regression models.

    PubMed

    Wang, Wei; Griswold, Michael E

    2016-11-30

    The random effect Tobit model is a regression model that accommodates both left- and/or right-censoring and within-cluster dependence of the outcome variable. Regression coefficients of random effect Tobit models have conditional interpretations on a constructed latent dependent variable and do not provide inference of overall exposure effects on the original outcome scale. Marginalized random effects model (MREM) permits likelihood-based estimation of marginal mean parameters for the clustered data. For random effect Tobit models, we extend the MREM to marginalize over both the random effects and the normal space and boundary components of the censored response to estimate overall exposure effects at population level. We also extend the 'Average Predicted Value' method to estimate the model-predicted marginal means for each person under different exposure status in a designated reference group by integrating over the random effects and then use the calculated difference to assess the overall exposure effect. The maximum likelihood estimation is proposed utilizing a quasi-Newton optimization algorithm with Gauss-Hermite quadrature to approximate the integration of the random effects. We use these methods to carefully analyze two real datasets. Copyright © 2016 John Wiley & Sons, Ltd.
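
    The marginalization the abstract describes can be sketched for the simplest case, a random-intercept Tobit with left-censoring at zero; the formula for the conditional censored mean and the Gauss-Hermite change of variables are standard, while all numerical values are illustrative assumptions.

```python
# A sketch of marginalizing a random-intercept Tobit mean over the random
# effect with Gauss-Hermite quadrature, in the spirit of the abstract
# (left-censoring at 0; parameter values are illustrative, not from the
# paper).
import numpy as np
from scipy import stats

def tobit_mean(m, sigma):
    """E[max(0, m + eps)] with eps ~ N(0, sigma^2)."""
    z = m / sigma
    return m * stats.norm.cdf(z) + sigma * stats.norm.pdf(z)

def marginal_mean(mu, sigma, tau, n_nodes=30):
    """Integrate the conditional Tobit mean over b ~ N(0, tau^2)."""
    x, w = np.polynomial.hermite.hermgauss(n_nodes)
    b = np.sqrt(2.0) * tau * x               # change of variables
    return np.sum(w * tobit_mean(mu + b, sigma)) / np.sqrt(np.pi)

# overall (population-level) exposure effect on the original scale:
print(marginal_mean(mu=1.0, sigma=1.0, tau=0.8)
      - marginal_mean(mu=0.5, sigma=1.0, tau=0.8))
```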

  8. Evaluation of High-Throughput Chemical Exposure Models ...

    EPA Pesticide Factsheets

    The U.S. EPA, under its ExpoCast program, is developing high-throughput near-field modeling methods to estimate human chemical exposure and to provide real-world context to high-throughput screening (HTS) hazard data. These novel modeling methods include reverse methods to infer parent chemical exposures from biomonitoring measurements and forward models to predict multi-pathway exposures from chemical use information and/or residential media concentrations. Here, both forward and reverse modeling methods are used to characterize the relationship between matched near-field environmental (air and dust) and biomarker measurements. Indoor air, house dust, and urine samples from a cohort of 120 females (aged 60 to 80 years) were analyzed. In the measured data, 78% of the residential media measurements (across 80 chemicals) and 54% of the urine measurements (across 21 chemicals) were censored, i.e., below the limit of quantification (LOQ). Because of the degree of censoring, we applied a Bayesian approach to impute censored values for 69 chemicals having at least 15% of measurements above LOQ. This resulted in 10 chemicals (5 phthalates, 5 pesticides) with matched air, dust, and urine metabolite measurements. The population medians of indoor air and dust concentrations were compared to population median exposures inferred from urine metabolite concentrations using a high-throughput reverse-dosimetry approach. Median air and dust concentrations were found to be correl

  9. Detailed numerical investigation of the Bohm limit in cosmic ray diffusion theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hussein, M.; Shalchi, A., E-mail: m_hussein@physics.umanitoba.ca, E-mail: andreasm4@yahoo.com

    2014-04-10

    A standard model in cosmic ray diffusion theory is the so-called Bohm limit, in which the particle mean free path is assumed to be equal to the Larmor radius. This type of diffusion is often employed to model the propagation and acceleration of energetic particles. However, recent analytical and numerical work has shown that standard Bohm diffusion is not realistic. In the present paper, we perform test-particle simulations to explore particle diffusion in the strong turbulence limit, in which the wave field is much stronger than the mean magnetic field. We show that there is indeed a lower limit of the particle mean free path along the mean field. In this limit, the mean free path is directly proportional to the unperturbed Larmor radius, as in the traditional Bohm limit, but it is reduced by the factor δB/B_0, where B_0 is the mean field and δB the turbulent field. Although we focus on parallel diffusion, we also explore diffusion across the mean field in the strong turbulence limit.

  10. Overall Graft Loss Versus Death-Censored Graft Loss: Unmasking the Magnitude of Racial Disparities in Outcomes Among US Kidney Transplant Recipients.

    PubMed

    Taber, David J; Gebregziabher, Mulugeta; Payne, Elizabeth H; Srinivas, Titte; Baliga, Prabhakar K; Egede, Leonard E

    2017-02-01

    Black kidney transplant recipients experience disproportionately high rates of graft loss. This disparity has persisted for 40 years, and improvements may be impeded based on the current public reporting of overall graft loss by US regulatory organizations for transplantation. Longitudinal cohort study of kidney transplant recipients using a data set created by linking Veterans Affairs and US Renal Data System information, including 4918 veterans transplanted between January 2001 and December 2007, with follow-up through December 2010. Multivariable analysis was conducted using 2-stage joint modeling of random and fixed effects of longitudinal data (linear mixed model) with time to event outcomes (Cox regression). Three thousand three hundred six non-Hispanic whites (67%) were compared with 1612 non-Hispanic black (33%) recipients with 6.0 ± 2.2 years of follow-up. In the unadjusted analysis, black recipients were significantly more likely to have overall graft loss (hazard ratio [HR], 1.19; 95% confidence interval [95% CI], 1.07-1.33), death-censored graft loss (HR, 1.67; 95% CI, 1.45-1.92), and lower mortality (HR, 0.83; 95% CI, 0.72-0.96). In fully adjusted models, only death-censored graft loss remained significant (HR, 1.38; 95% CI, 1.12-1.71; overall graft loss [HR, 1.08; 95% CI, 0.91-1.28]; mortality [HR, 0.84; 95% CI, 0.67-1.06]). A composite definition of graft loss reduced the magnitude of disparities in blacks by 22%. Non-Hispanic black kidney transplant recipients experience a substantial disparity in graft loss, but not mortality. This study of US data provides evidence to suggest that researchers should focus on using death-censored graft loss as the primary outcome of interest to facilitate a better understanding of racial disparities in kidney transplantation.

  11. Post-Transplant Hypophosphatemia and the Risk of Death-Censored Graft Failure and Mortality after Kidney Transplantation.

    PubMed

    van Londen, Marco; Aarts, Brigitte M; Deetman, Petronella E; van der Weijden, Jessica; Eisenga, Michele F; Navis, Gerjan; Bakker, Stephan J L; de Borst, Martin H

    2017-08-07

    Hypophosphatemia is common in the first year after kidney transplantation, but its clinical implications are unclear. We investigated the relationship between the severity of post-transplant hypophosphatemia and mortality or death-censored graft failure in a large cohort of renal transplant recipients with long-term follow-up. We performed a longitudinal cohort study in 957 renal transplant recipients who were transplanted between 1993 and 2008 at a single center. We used a large real-life dataset containing 28,178 phosphate measurements (median of 27 [first to third quartiles, 23-34] serial measurements per patient) and selected the lowest intraindividual phosphate level during the first year after transplantation. The primary outcomes were all-cause mortality, cardiovascular mortality, and death-censored graft failure. The median (interquartile range) intraindividual lowest phosphate level was 1.58 (1.30-1.95) mg/dl, and it was reached at 33 (21-51) days post-transplant. eGFR was the main correlate of the lowest serum phosphate level (model R^2 = 0.32). During 9 (5-12) years of follow-up, 181 (19%) patients developed graft failure, and 295 (35%) patients died, of which 94 (32%) deaths were due to cardiovascular disease. In multivariable Cox regression analysis, more severe hypophosphatemia was associated with a lower risk of death-censored graft failure (fully adjusted hazard ratio, 0.61; 95% confidence interval, 0.43 to 0.88 per 1 mg/dl lower serum phosphate) and cardiovascular mortality (fully adjusted hazard ratio, 0.37; 95% confidence interval, 0.22 to 0.62) but not noncardiovascular mortality (fully adjusted hazard ratio, 1.33; 95% confidence interval, 0.9 to 1.96) or all-cause mortality (fully adjusted hazard ratio, 1.15; 95% confidence interval, 0.81 to 1.61). Post-transplant hypophosphatemia develops early after transplantation. These data connect post-transplant hypophosphatemia with favorable long-term graft and patient outcomes. Copyright © 2017 by the American Society of Nephrology.

  12. Propensity score matching and persistence correction to reduce bias in comparative effectiveness: the effect of cinacalcet use on all-cause mortality.

    PubMed

    Gillespie, Iain A; Floege, Jürgen; Gioni, Ioanna; Drüeke, Tilman B; de Francisco, Angel L; Anker, Stefan D; Kubo, Yumi; Wheeler, David C; Froissart, Marc

    2015-07-01

    The generalisability of randomised controlled trials (RCTs) may be limited by restrictive entry criteria or by their experimental nature. Observational research can provide complementary findings but is prone to bias. Employing propensity score matching to reduce such bias, we compared the real-life effect of cinacalcet use on all-cause mortality (ACM) with findings from the Evaluation of Cinacalcet Therapy to Lower Cardiovascular Events (EVOLVE) RCT in chronic haemodialysis patients. Incident adult haemodialysis patients receiving cinacalcet, recruited in a prospective observational cohort from 2007-2009 (AROii; n = 10,488), were matched to non-exposed patients regardless of future exposure status. The effect of treatment crossover was investigated with inverse probability of censoring weighted and lag-censored analyses. EVOLVE ACM data were analysed largely as described for the primary composite endpoint. AROii patients receiving cinacalcet (n = 532) were matched to 1790 non-exposed patients. The treatment effect of cinacalcet on ACM in the main AROii analysis (hazard ratio 1.03 [95% confidence interval (CI) 0.78-1.35]) was closer to the null than for the intention-to-treat (ITT) analysis of EVOLVE (0.94 [95% CI 0.85-1.04]). Adjusting for non-persistence by 0- and 6-month lag-censoring and by inverse probability of censoring weighting, the hazard ratios in AROii (0.76 [95% CI 0.51-1.15], 0.84 [95% CI 0.60-1.18], and 0.79 [95% CI 0.56-1.11], respectively) were comparable with those of EVOLVE (0.82 [95% CI 0.67-1.01], 0.83 [95% CI 0.73-0.96], and 0.87 [95% CI 0.71-1.06], respectively). Correcting for treatment crossover, we observed results in the 'real-life' setting of the AROii observational cohort that closely mirrored the results of the EVOLVE RCT. Persistence-corrected analyses revealed a trend towards reduced ACM in haemodialysis patients receiving cinacalcet therapy. Copyright © 2015 John Wiley & Sons, Ltd.
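
    A generic sketch of the matching step follows (logistic propensity model, greedy 1:1 nearest-neighbor matching on the logit within a 0.2-SD caliper); this is a textbook recipe on simulated covariates, not the study's actual protocol, and the lag-censoring and IPCW corrections are not shown.

```python
# A generic sketch of propensity-score matching (not the study's code):
# fit a logistic treatment model on baseline covariates, then greedily
# match each exposed subject to the nearest unexposed subject on the
# logit of the propensity score, within a caliper.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 3000
X = rng.normal(size=(n, 5))                        # baseline covariates
treated = rng.random(n) < 1.0 / (1.0 + np.exp(-(X[:, 0] - 1.5)))

ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]
logit = np.log(ps / (1.0 - ps))
caliper = 0.2 * logit.std()

pool = list(np.flatnonzero(~treated))              # unexposed candidates
pairs = []
for i in np.flatnonzero(treated):
    j = min(pool, key=lambda k: abs(logit[k] - logit[i]))
    if abs(logit[j] - logit[i]) <= caliper:
        pairs.append((i, j))
        pool.remove(j)                             # match w/o replacement
print(f"matched {len(pairs)} of {treated.sum()} exposed subjects")
```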

  13. Cosmic vacuum and galaxy formation

    NASA Astrophysics Data System (ADS)

    Chernin, A. D.

    2006-04-01

    It is demonstrated that the protogalactic perturbations must enter the nonlinear regime before the redshift z ≈ 1; otherwise they would be destroyed by the antigravity of the vacuum dark energy at the subsequent epoch of vacuum domination. At these epochs, an overdensity can remain gravitationally bound only if its radius r does not exceed the zero-gravity radius r_V = [M/((8π/3)ρ_V)]^{1/3}, where M is the mass of a given overdensity and ρ_V is the vacuum density. The criterion provides a new relation between the largest mass condensations and their spatial scales. All the real large-scale systems clearly follow this relation. It is also shown that a simple formula is possible for the key quantity in the theory of galaxy formation, namely the initial amplitude of the perturbation of the gravitational potential in the protogalactic structures. The amplitude is time independent and given in terms of the Friedmann integrals, which are genuine physical characteristics of the cosmic energies. The results suggest that there is a strong correspondence between the global design of the Universe as a whole and the cosmic structures of various masses and spatial scales.
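
    A quick numerical illustration of this criterion, under assumed values (H0 = 70 km/s/Mpc, ρ_V ≈ 0.7 × critical density, M = 10^15 solar masses) that are not taken from the paper:

```python
# A numeric illustration of the zero-gravity radius
# r_V = [M / ((8*pi/3) * rho_V)]^(1/3) quoted above, for assumed values
# (H0 = 70 km/s/Mpc, rho_V ~ 0.7 * critical density, M = 1e15 solar
# masses) that are not taken from the paper.
import numpy as np

G = 6.674e-11                       # m^3 kg^-1 s^-2
H0 = 70.0e3 / 3.086e22              # 70 km/s/Mpc in s^-1
rho_crit = 3.0 * H0**2 / (8.0 * np.pi * G)
rho_V = 0.7 * rho_crit              # vacuum (dark energy) density, kg/m^3

M = 1e15 * 1.989e30                 # 10^15 solar masses, in kg
r_V = (M / ((8.0 * np.pi / 3.0) * rho_V)) ** (1.0 / 3.0)
print(r_V / 3.086e22, "Mpc")        # roughly 10 Mpc
```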

  14. The formation of cosmic structure in a texture-seeded cold dark matter cosmogony

    NASA Technical Reports Server (NTRS)

    Gooding, Andrew K.; Park, Changbom; Spergel, David N.; Turok, Neil; Gott, Richard, III

    1992-01-01

    The growth of density fluctuations induced by global texture in an Omega = 1 cold dark matter (CDM) cosmogony is calculated. The resulting power spectra are in good agreement with each other, with more power on large scales than in the standard inflation plus CDM model. Calculation of related statistics (two-point correlation functions, mass variances, cosmic Mach number) indicates that the texture plus CDM model compares more favorably than standard CDM with observations of large-scale structure. Texture produces coherent velocity fields on large scales, as observed. Excessive small-scale velocity dispersions, and voids less empty than those observed may be remedied by including baryonic physics. The topology of the cosmic structure agrees well with observation. The non-Gaussian texture induced density fluctuations lead to earlier nonlinear object formation than in Gaussian models and may also be more compatible with recent evidence that the galaxy density field is non-Gaussian on large scales. On smaller scales the density field is strongly non-Gaussian, but this appears to be primarily due to nonlinear gravitational clustering. The velocity field on smaller scales is surprisingly Gaussian.

  15. The little sibling of the big rip singularity

    NASA Astrophysics Data System (ADS)

    Bouhmadi-López, Mariam; Errahmani, Ahmed; Martín-Moruno, Prado; Ouali, Taoufik; Tavakoli, Yaser

    2015-07-01

    In this paper, we present a new cosmological event, which we have named the little sibling of the big rip. This event is much smoother than the big rip singularity. When the little sibling of the big rip is reached, the Hubble rate and the scale factor blow up, but the cosmic time derivative of the Hubble rate does not. This abrupt event takes place at an infinite cosmic time, where the scalar curvature explodes. We show that a doomsday à la little sibling of the big rip is compatible with an accelerating universe; indeed, at present it would perfectly mimic a ΛCDM scenario. It turns out that, even though the event seems to be harmless as it takes place in the infinite future, the bound structures in the universe would be unavoidably destroyed in a finite cosmic time from now. The model can be motivated by considering that the weak energy condition should not be strongly violated in our universe, and it could give us some hints about the status of recently formulated nonlinear energy conditions.

  16. Diffuse gamma-ray emission from self-confined cosmic rays around Galactic sources

    NASA Astrophysics Data System (ADS)

    D'Angelo, Marta; Morlino, Giovanni; Amato, Elena; Blasi, Pasquale

    2018-02-01

    The propagation of particles accelerated at supernova remnant shocks and escaping the parent remnants is likely to proceed in a strongly non-linear regime, due to the efficient self-generation of Alfvén waves excited through streaming instability near the sources. Depending on the amount of neutral hydrogen present in the regions around the sites of supernova explosions, cosmic rays may accumulate an appreciable grammage in the same regions and get self-confined for non-negligible times, which in turn results in an enhanced rate of production of secondaries. Here we calculate the contribution to the diffuse gamma-ray background due to the overlap along lines of sight of several of these extended haloes as due to pion production induced by self-confined cosmic rays. We find that if the density of neutrals is low, the haloes can account for a substantial fraction of the diffuse emission observed by Fermi-Large Area Telescope (LAT), depending on the orientation of the line of sight with respect to the direction of the Galactic Centre.

  17. Messengers from the Early Solar System - Comets as Carriers of Cosmic Information

    NASA Technical Reports Server (NTRS)

    Mumma, Michael J.

    2011-01-01

    Viewed from a cosmic perspective, Earth is a dry planet, yet its oceans are enriched in deuterium by a large factor relative to nebular hydrogen. Can comets have delivered Earth's water? The question of exogenous delivery of water and organics to Earth and other young planets is of critical importance for understanding the origin of Earth's water, and for assessing the possible existence of exo-planets similar to Earth. Strong gradients in temperature and chemistry in the proto-planetary disk, coupled with dynamical models, imply that comets from the Oort Cloud and Kuiper Disk reservoirs should have diverse composition. The primary volatiles in comets (ices native to the nucleus) provide the preferred metric, and taxonomies based on them are now beginning to emerge [1, 2, 3]. The measurement of cosmic parameters such as the nuclear spin temperatures for H2O, NH3, and CH4, and of enrichment factors for isotopologues (D/H in water and hydrogen cyanide, N-14/N-15 in CN and hydrogen cyanide) provide additional important tests for the origin of cometary material.

  18. THE HIGHEST-ENERGY COSMIC RAYS CANNOT BE DOMINANTLY PROTONS FROM STEADY SOURCES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, Ke; Kotera, Kumiko

    The bulk of observed ultrahigh-energy cosmic rays could be light or heavier elements and originate from either a steady or a transient population of sources. This leaves us with four general categories of sources. Energetic requirements set a lower limit on single-source luminosities, while the distribution of particle arrival directions in the sky sets a lower limit on the source number density. The latter constraint depends on the angular smearing in the skymap due to the magnetic deflections of the charged particles during their propagation from the source to the Earth. We contrast these limits with the luminosity functions from surveys of existing luminous steady objects in the nearby universe and strongly constrain one of the four categories of source models, namely, steady proton sources. The possibility that cosmic rays with energy >8 × 10^19 eV are dominantly pure protons coming from steady sources is excluded at 95% confidence level, under the safe assumption that protons experience less than 30° magnetic deflection in flight.

  19. Fate of inflation and the natural reduction of vacuum energy

    NASA Astrophysics Data System (ADS)

    Nakamichi, Akika; Morikawa, Masahiro

    2014-04-01

    In the standard cosmology, an artificial fine-tuning of the potential is inevitable for a vanishing cosmological constant, even though a slow-rolling uniform scalar field easily causes cosmic inflation. We focus on the general fact that any potential with a negative region can temporarily halt the cosmic expansion at the end of inflation, where the field tends to diverge. This violent evolution naturally causes particle production and a strong instability of the uniform configuration of the fields. Decay of this uniform scalar field would leave a vanishing cosmological constant as well as locally collapsed objects. The universe then continues to evolve into the standard Friedmann model. We study the details of the instability, based on a linear analysis, and the subsequent fate of the scalar field, based on a non-linear numerical analysis. The collapsed scalar field would easily exceed the Kaup limiting mass and form primordial black holes, which may play an important role in galaxy formation in later stages of cosmic expansion. We systematically describe the above scenario by identifying the scalar field as a boson field condensate (BEC) and inflation as its phase-transition process.

  20. Isotopic excesses of proton-rich nuclei related to space weathering observed in a gas-rich meteorite Kapoeta

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hidaka, Hiroshi; Yoneda, Shigekazu, E-mail: hidaka@hiroshima-u.ac.jp, E-mail: s-yoneda@kahaku.go.jp

    2014-05-10

    The idea that solar system materials were irradiated by solar cosmic rays from the early Sun has long been suggested, but is still questionable. In this study, Sr, Ba, Ce, Nd, Sm, and Gd isotopic compositions of sequential acid leachates from the Kapoeta meteorite (howardite) were determined to find systematic and correlated variations in their isotopic abundances of proton-rich nuclei, leading to an understanding of the irradiation conditions produced by cosmic rays. Significantly large excesses of proton-rich isotopes (p-isotopes), ^84Sr, ^130Ba, ^132Ba, ^136Ce, ^138Ce, and ^144Sm, were observed, particularly in the first chemical separate, which possibly leached out of the very shallow layer within a few μm of the surface of the regolith grains in the sample. The results reveal the production of p-isotopes through the interaction of solar cosmic rays with the superficial region of the regolith grains before the formation of the Kapoeta meteorite parent body, suggesting strong activity in the early Sun.

  1. Probing the origin of cosmic rays with extremely high energy neutrinos using the IceCube Observatory

    NASA Astrophysics Data System (ADS)

    Aartsen, M. G.; Abbasi, R.; Ackermann, M.; Adams, J.; Aguilar, J. A.; Ahlers, M.; Altmann, D.; Arguelles, C.; Auffenberg, J.; Bai, X.; Baker, M.; Barwick, S. W.; Baum, V.; Bay, R.; Beatty, J. J.; Becker Tjus, J.; Becker, K.-H.; BenZvi, S.; Berghaus, P.; Berley, D.; Bernardini, E.; Bernhard, A.; Besson, D. Z.; Binder, G.; Bindig, D.; Bissok, M.; Blaufuss, E.; Blumenthal, J.; Boersma, D. J.; Bohm, C.; Bose, D.; Böser, S.; Botner, O.; Brayeur, L.; Bretz, H.-P.; Brown, A. M.; Bruijn, R.; Casey, J.; Casier, M.; Chirkin, D.; Christov, A.; Christy, B.; Clark, K.; Clevermann, F.; Coenders, S.; Cohen, S.; Cowen, D. F.; Cruz Silva, A. H.; Danninger, M.; Daughhetee, J.; Davis, J. C.; Day, M.; De Clercq, C.; De Ridder, S.; Desiati, P.; de Vries, K. D.; de With, M.; DeYoung, T.; Díaz-Vélez, J. C.; Dunkman, M.; Eagan, R.; Eberhardt, B.; Eisch, J.; Euler, S.; Evenson, P. A.; Fadiran, O.; Fazely, A. R.; Fedynitch, A.; Feintzeig, J.; Feusels, T.; Filimonov, K.; Finley, C.; Fischer-Wasels, T.; Flis, S.; Franckowiak, A.; Frantzen, K.; Fuchs, T.; Gaisser, T. K.; Gallagher, J.; Gerhardt, L.; Gladstone, L.; Glüsenkamp, T.; Goldschmidt, A.; Golup, G.; Gonzalez, J. G.; Goodman, J. A.; Góra, D.; Grandmont, D. T.; Grant, D.; Gretskov, P.; Groh, J. C.; Groß, A.; Ha, C.; Haj Ismail, A.; Hallen, P.; Hallgren, A.; Halzen, F.; Hanson, K.; Heereman, D.; Heinen, D.; Helbing, K.; Hellauer, R.; Hickford, S.; Hill, G. C.; Hoffman, K. D.; Hoffmann, R.; Homeier, A.; Hoshina, K.; Huelsnitz, W.; Hulth, P. O.; Hultqvist, K.; Hussain, S.; Ishihara, A.; Jacobi, E.; Jacobsen, J.; Jagielski, K.; Japaridze, G. S.; Jero, K.; Jlelati, O.; Kaminsky, B.; Kappes, A.; Karg, T.; Karle, A.; Kauer, M.; Kelley, J. L.; Kiryluk, J.; Kläs, J.; Klein, S. R.; Köhne, J.-H.; Kohnen, G.; Kolanoski, H.; Köpke, L.; Kopper, C.; Kopper, S.; Koskinen, D. J.; Kowalski, M.; Krasberg, M.; Kriesten, A.; Krings, K.; Kroll, G.; Kunnen, J.; Kurahashi, N.; Kuwabara, T.; Labare, M.; Landsman, H.; Larson, M. J.; Lesiak-Bzdak, M.; Leuermann, M.; Leute, J.; Lünemann, J.; Macías, O.; Madsen, J.; Maggi, G.; Maruyama, R.; Mase, K.; Matis, H. S.; McNally, F.; Meagher, K.; Merck, M.; Meures, T.; Miarecki, S.; Middell, E.; Milke, N.; Miller, J.; Mohrmann, L.; Montaruli, T.; Morse, R.; Nahnhauer, R.; Naumann, U.; Niederhausen, H.; Nowicki, S. C.; Nygren, D. R.; Obertacke, A.; Odrowski, S.; Olivas, A.; Omairat, A.; O'Murchadha, A.; Paul, L.; Pepper, J. A.; Pérez de los Heros, C.; Pfendner, C.; Pieloth, D.; Pinat, E.; Posselt, J.; Price, P. B.; Przybylski, G. T.; Rädel, L.; Rameez, M.; Rawlins, K.; Redl, P.; Reimann, R.; Resconi, E.; Rhode, W.; Ribordy, M.; Richman, M.; Riedel, B.; Rodrigues, J. P.; Rott, C.; Ruhe, T.; Ruzybayev, B.; Ryckbosch, D.; Saba, S. M.; Sander, H.-G.; Santander, M.; Sarkar, S.; Schatto, K.; Scheriau, F.; Schmidt, T.; Schmitz, M.; Schoenen, S.; Schöneberg, S.; Schönwald, A.; Schukraft, A.; Schulte, L.; Schulz, O.; Seckel, D.; Sestayo, Y.; Seunarine, S.; Shanidze, R.; Sheremata, C.; Smith, M. W. E.; Soldin, D.; Spiczak, G. M.; Spiering, C.; Stamatikos, M.; Stanev, T.; Stanisha, N. A.; Stasik, A.; Stezelberger, T.; Stokstad, R. G.; Stößl, A.; Strahler, E. A.; Ström, R.; Sullivan, G. W.; Taavola, H.; Taboada, I.; Tamburro, A.; Tepe, A.; Ter-Antonyan, S.; Tešić, G.; Tilav, S.; Toale, P. A.; Tobin, M. 
N.; Toscano, S.; Unger, E.; Usner, M.; Vallecorsa, S.; van Eijndhoven, N.; Van Overloop, A.; van Santen, J.; Vehring, M.; Voge, M.; Vraeghe, M.; Walck, C.; Waldenmaier, T.; Wallraff, M.; Weaver, Ch.; Wellons, M.; Wendt, C.; Westerhoff, S.; Whitehorn, N.; Wiebe, K.; Wiebusch, C. H.; Williams, D. R.; Wissing, H.; Wolf, M.; Wood, T. R.; Woschnagg, K.; Xu, D. L.; Xu, X. W.; Yanez, J. P.; Yodh, G.; Yoshida, S.; Zarzhitsky, P.; Ziemann, J.; Zierke, S.; Zoll, M.

    2013-12-01

    We have searched for extremely high energy neutrinos using data taken with the IceCube detector between May 2010 and May 2012. Two neutrino-induced particle shower events with energies around 1 PeV were observed, as reported previously. In this work, we investigate whether these events could originate from cosmogenic neutrinos produced in the interactions of ultrahigh energy cosmic rays with ambient photons while propagating through intergalactic space. Exploiting IceCube's large exposure for extremely high energy neutrinos and the lack of observed events above 100 PeV, we can rule out the corresponding models at more than 90% confidence level. The model-independent quasi-differential 90% C.L. upper limit, which amounts to E^2 φ(νe+νμ+ντ) = 1.2 × 10^-7 GeV cm^-2 s^-1 sr^-1 at 1 EeV, provides the most stringent constraint in the energy range from 10 PeV to 10 EeV. Our observation disfavors strong cosmological evolution of the highest energy cosmic-ray sources such as the Fanaroff-Riley type II class of radio galaxies.

  2. The collapse of a molecular cloud core to stellar densities using radiation non-ideal magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Wurster, James; Bate, Matthew R.; Price, Daniel J.

    2018-04-01

    We present results from radiation non-ideal magnetohydrodynamics (MHD) calculations that follow the collapse of rotating, magnetized, molecular cloud cores to stellar densities. These are the first such calculations to include all three non-ideal effects: ambipolar diffusion, Ohmic resistivity, and the Hall effect. We employ an ionization model in which cosmic ray ionization dominates at low temperatures and thermal ionization takes over at high temperatures. We explore the effects of varying the cosmic ray ionization rate from ζ_cr = 10^-10 to 10^-16 s^-1. Models with ionization rates ≳10^-12 s^-1 produce results that are indistinguishable from ideal MHD. Decreasing the cosmic ray ionization rate extends the lifetime of the first hydrostatic core up to a factor of 2, but the lifetimes are still substantially shorter than those obtained without magnetic fields. Outflows from the first hydrostatic core phase are launched in all models, but the outflows become broader and slower as the ionization rate is reduced. The outflow morphology following stellar core formation is complex and strongly dependent on the cosmic ray ionization rate. Calculations with high ionization rates quickly produce a fast (≈14 km s^-1) bipolar outflow that is distinct from the first core outflow, but with the lowest ionization rate, a slower (≈3-4 km s^-1) conical outflow develops gradually and seamlessly merges into the first core outflow.

  3. Cosmic distribution of highly ionized metals and their physical conditions in the EAGLE simulations

    NASA Astrophysics Data System (ADS)

    Rahmati, Alireza; Schaye, Joop; Crain, Robert A.; Oppenheimer, Benjamin D.; Schaller, Matthieu; Theuns, Tom

    2016-06-01

    We study the distribution and evolution of highly ionized intergalactic metals in the Evolution and Assembly of Galaxies and their Environment (EAGLE) cosmological, hydrodynamical simulations. EAGLE has been shown to reproduce a wide range of galaxy properties while its subgrid feedback was calibrated without considering gas properties. We compare the predictions for the column density distribution functions (CDDFs) and cosmic densities of Si IV, C IV, N V, O VI and Ne VIII absorbers with observations at redshift z = 0 to ~6 and find reasonable agreement, although there are some differences. We show that the typical physical densities of the absorbing gas increase with column density and redshift, but decrease with the ionization energy of the absorbing ion. The typical metallicity increases with both column density and time. The fraction of collisionally ionized metal absorbers increases with time and ionization energy. While our results show little sensitivity to the presence or absence of AGN feedback, increasing/decreasing the efficiency of stellar feedback by a factor of 2 substantially decreases/increases the CDDFs and the cosmic densities of the metal ions. We show that the impact of the efficiency of stellar feedback on the CDDFs and cosmic densities is largely due to its effect on the metal production rate. However, the temperatures of the metal absorbers, particularly those of strong O VI, are directly sensitive to the strength of the feedback.

  4. Cosmic ray knee and new physics at the TeV scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barceló, Roberto; Masip, Manuel; Mastromatteo, Iacopo, E-mail: rbarcelo@ugr.es, E-mail: masip@ugr.es, E-mail: mastroma@sissa.it

    2009-06-01

    We analyze the possibility that the cosmic ray knee appears at an energy threshold where the proton-dark matter cross section becomes large due to new TeV physics. It has been shown that such interactions could break the proton and produce a diffuse gamma ray flux consistent with MILAGRO observations. We argue that this hypothesis implies knees that scale with the atomic mass for the different nuclei, as KASCADE data seem to indicate. We find that to explain the change in the spectral index of the flux from E^-2.7 to E^-3.1, the cross section must grow like E^(0.4+β) above the knee, where β = 0.3-0.6 parametrizes the energy dependence of the age (τ ∝ E^-β) of the cosmic rays reaching the Earth. The hypothesis also requires mbarn cross sections (which could be modelled with TeV gravity) and large densities of dark matter (which could be clumped around the sources of cosmic rays). We argue that neutrinos would also exhibit a threshold at E = (m_χ/m_p) E_knee ≈ 10^8 GeV, where their interaction with a nucleon becomes strong. Therefore, the observation at IceCube or ANITA of standard neutrino events above this threshold would disprove the scenario.

  5. Is the Universe transparent?

    NASA Astrophysics Data System (ADS)

    Liao, Kai; Avgoustidis, A.; Li, Zhengxiang

    2015-12-01

    We present our study on cosmic opacity, which relates to changes in photon number as photons travel from the source to the observer. Cosmic opacity may be caused by absorption or scattering due to matter in the Universe, or by extragalactic magnetic fields that can turn photons into unobserved particles (e.g., light axions, chameleons, gravitons, Kaluza-Klein modes), and it is crucial to correctly interpret astronomical photometric measurements like type Ia supernova observations. On the other hand, the expansion rate at different epochs, i.e., the observational Hubble parameter data H(z), is obtained from differential ageing of passively evolving galaxies or from baryon acoustic oscillations and thus is not affected by cosmic opacity. In this work, we first construct opacity-free luminosity distances from H(z) determinations, taking into consideration correlations between different redshifts in our error analysis. Moreover, we leave the light-curve fitting parameters that account for distance estimation in type Ia supernova observations free, to ensure that our analysis is genuinely cosmological-model independent and gives a robust result. Any nonzero residuals between these two kinds of luminosity distances can be deemed an indication of the existence of cosmic opacity. While a transparent Universe is currently consistent with the data, our results show that strong constraints on opacity (and consequently on physical mechanisms that could cause it) can be obtained in a cosmological-model-independent fashion.
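
    The usual parametrization behind such opacity tests (our notation, not necessarily the authors') is that a transmission factor e^(-τ(z)) dims the supernova fluxes, biasing the inferred luminosity distance by e^(τ/2) relative to the opacity-free H(z) distance; a minimal sketch:

```python
# A sketch of the standard opacity parametrization behind such tests
# (our notation, not necessarily the authors'): fluxes are dimmed by
# exp(-tau(z)), so the luminosity distance inferred from SNe Ia is
# biased by exp(tau/2) relative to the opacity-free H(z) distance.
import numpy as np

def observed_dl(dl_true_mpc, tau):
    """Luminosity distance inferred from a flux dimmed by exp(-tau)."""
    return dl_true_mpc * np.exp(tau / 2.0)

dl_true = 1500.0                    # Mpc, e.g. reconstructed from H(z)
for tau in (0.0, 0.05, 0.1):
    offset = 5.0 * np.log10(observed_dl(dl_true, tau) / dl_true)
    print(f"tau = {tau:.2f}: distance-modulus offset = {offset:.4f} mag")
```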

  6. Possible interaction between baryons and dark-matter particles revealed by the first stars

    NASA Astrophysics Data System (ADS)

    Barkana, Rennan

    2018-03-01

    The cosmic radio-frequency spectrum is expected to show a strong absorption signal corresponding to the 21-centimetre-wavelength transition of atomic hydrogen around redshift 20, which arises from Lyman-α radiation from some of the earliest stars. By observing this 21-centimetre signal—either its sky-averaged spectrum or maps of its fluctuations, obtained using radio interferometers—we can obtain information about cosmic dawn, the era when the first astrophysical sources of light were formed. The recent detection of the global 21-centimetre spectrum reveals a stronger absorption than the maximum predicted by existing models, at a confidence level of 3.8 standard deviations. Here we report that this absorption can be explained by the combination of radiation from the first stars and excess cooling of the cosmic gas induced by its interaction with dark matter. Our analysis indicates that the spatial fluctuations of the 21-centimetre signal at cosmic dawn could be an order of magnitude larger than previously expected and that the dark-matter particle is no heavier than several proton masses, well below the commonly predicted mass of weakly interacting massive particles. Our analysis also confirms that dark matter is highly non-relativistic and at least moderately cold, and primordial velocities predicted by models of warm dark matter are potentially detectable. These results indicate that 21-centimetre cosmology can be used as a dark-matter probe.

  7. Observation of the thunderstorm-related ground cosmic ray flux variations by ARGO-YBJ

    NASA Astrophysics Data System (ADS)

    Bartoli, B.; Bernardini, P.; Bi, X. J.; Cao, Z.; Catalanotti, S.; Chen, S. Z.; Chen, T. L.; Cui, S. W.; Dai, B. Z.; D'Amone, A.; Danzengluobu; De Mitri, I.; D'Ettorre Piazzoli, B.; Di Girolamo, T.; Di Sciascio, G.; Feng, C. F.; Feng, Zhaoyang; Feng, Zhenyong; Gao, W.; Gou, Q. B.; Guo, Y. Q.; He, H. H.; Hu, Haibing; Hu, Hongbo; Iacovacci, M.; Iuppa, R.; Jia, H. Y.; Labaciren; Li, H. J.; Liu, C.; Liu, J.; Liu, M. Y.; Lu, H.; Ma, L. L.; Ma, X. H.; Mancarella, G.; Mari, S. M.; Marsella, G.; Mastroianni, S.; Montini, P.; Ning, C. C.; Perrone, L.; Pistilli, P.; Salvini, P.; Santonico, R.; Shen, P. R.; Sheng, X. D.; Shi, F.; Surdo, A.; Tan, Y. H.; Vallania, P.; Vernetto, S.; Vigorito, C.; Wang, H.; Wu, C. Y.; Wu, H. R.; Xue, L.; Yang, Q. Y.; Yang, X. C.; Yao, Z. G.; Yuan, A. F.; Zha, M.; Zhang, H. M.; Zhang, L.; Zhang, X. Y.; Zhang, Y.; Zhao, J.; Zhaxiciren; Zhaxisangzhu; Zhou, X. X.; Zhu, F. R.; Zhu, Q. Q.; D'Alessandro, F.; ARGO-YBJ Collaboration

    2018-02-01

    A correlation between the secondary cosmic ray flux and the near-earth electric field intensity, measured during thunderstorms, has been found by analyzing the data of the ARGO-YBJ experiment, a full coverage air shower array located at the Yangbajing Cosmic Ray Laboratory (4300 m a.s.l., Tibet, China). The counting rates of showers with different particle multiplicities (m = 1, 2, 3, and ≥4) have been found to be strongly dependent upon the intensity and polarity of the electric field measured during the course of 15 thunderstorms. In negative electric fields (i.e., accelerating negative charges downwards), the counting rates increase with increasing electric field strength. In positive fields, the rates decrease with field intensity until a certain value of the field, EFmin (whose value depends on the event multiplicity), above which the rates begin increasing. By using Monte Carlo simulations, we found that this peculiar behavior can be well described by the presence of an electric field in a layer of thickness of a few hundred meters in the atmosphere above the detector, which accelerates/decelerates the secondary shower particles of opposite charge, modifying the number of particles with energy exceeding the detector threshold. These results, for the first time to our knowledge, give a consistent explanation for the origin of the variation of the electron/positron flux observed for decades by high altitude cosmic ray detectors during thunderstorms.

  8. KASCADE-Grande measurements of energy spectra for elemental groups of cosmic rays

    NASA Astrophysics Data System (ADS)

    Apel, W. D.; Arteaga-Velázquez, J. C.; Bekk, K.; Bertaina, M.; Blümer, J.; Bozdog, H.; Brancus, I. M.; Cantoni, E.; Chiavassa, A.; Cossavella, F.; Daumiller, K.; de Souza, V.; Di Pierro, F.; Doll, P.; Engel, R.; Engler, J.; Finger, M.; Fuchs, B.; Fuhrmann, D.; Gils, H. J.; Glasstetter, R.; Grupen, C.; Haungs, A.; Heck, D.; Hörandel, J. R.; Huber, D.; Huege, T.; Kampert, K.-H.; Kang, D.; Klages, H. O.; Link, K.; Łuczak, P.; Ludwig, M.; Mathes, H. J.; Mayer, H. J.; Melissas, M.; Milke, J.; Mitrica, B.; Morello, C.; Oehlschläger, J.; Ostapchenko, S.; Palmieri, N.; Petcu, M.; Pierog, T.; Rebel, H.; Roth, M.; Schieler, H.; Schoo, S.; Schröder, F. G.; Sima, O.; Toma, G.; Trinchero, G. C.; Ulrich, H.; Weindl, A.; Wochele, J.; Wommer, M.; Zabierowski, J.

    2013-07-01

    The KASCADE-Grande air shower experiment [1] consists of, among other components, a large scintillator array for measurements of charged particles, N, and an array of shielded scintillation counters used for muon counting, Nμ. KASCADE-Grande is optimized for cosmic ray measurements in the energy range from 10 PeV to about 2000 PeV, where exploring the composition is of fundamental importance for understanding the transition from galactic to extragalactic origin of cosmic rays. Following earlier studies of the all-particle and elemental spectra reconstructed in the knee energy range from KASCADE data [2], we have now extended these measurements to beyond 200 PeV. By analysing the two-dimensional shower size spectrum N vs. Nμ for nearly vertical events, we reconstruct the energy spectra of different mass groups by means of unfolding methods over an energy range where the detector is fully efficient. The procedure and its results, which are derived based on the hadronic interaction model QGSJET-II-02 and which yield a strong indication of a dominance of heavy mass groups in the covered energy range and of a knee-like structure in the iron spectrum at around 80 PeV, are presented. This confirms and further refines the results obtained by other analyses of KASCADE-Grande data, which already gave evidence for a knee-like structure in the heavy component of cosmic rays at about 80 PeV [3].

  9. Possible interaction between baryons and dark-matter particles revealed by the first stars.

    PubMed

    Barkana, Rennan

    2018-02-28

    The cosmic radio-frequency spectrum is expected to show a strong absorption signal corresponding to the 21-centimetre-wavelength transition of atomic hydrogen around redshift 20, which arises from Lyman-α radiation from some of the earliest stars. By observing this 21-centimetre signal-either its sky-averaged spectrum or maps of its fluctuations, obtained using radio interferometers-we can obtain information about cosmic dawn, the era when the first astrophysical sources of light were formed. The recent detection of the global 21-centimetre spectrum reveals a stronger absorption than the maximum predicted by existing models, at a confidence level of 3.8 standard deviations. Here we report that this absorption can be explained by the combination of radiation from the first stars and excess cooling of the cosmic gas induced by its interaction with dark matter. Our analysis indicates that the spatial fluctuations of the 21-centimetre signal at cosmic dawn could be an order of magnitude larger than previously expected and that the dark-matter particle is no heavier than several proton masses, well below the commonly predicted mass of weakly interacting massive particles. Our analysis also confirms that dark matter is highly non-relativistic and at least moderately cold, and primordial velocities predicted by models of warm dark matter are potentially detectable. These results indicate that 21-centimetre cosmology can be used as a dark-matter probe.

  10. CONSTRAINING THE EMISSIVITY OF ULTRAHIGH ENERGY COSMIC RAYS IN THE DISTANT UNIVERSE WITH THE DIFFUSE GAMMA-RAY EMISSION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Xiangyu; Liu Ruoyu; Aharonian, Felix

    Ultrahigh energy cosmic rays (UHECRs) with energies ≳10^19 eV emitted at cosmological distances will be attenuated by cosmic microwave and infrared background radiation through photohadronic processes. Lower energy extragalactic cosmic rays (~10^18-10^19 eV) can only travel a linear distance smaller than ~Gpc in a Hubble time due to diffusion, if the extragalactic magnetic fields are as strong as nano-Gauss. These effects prevent us from directly observing most of the UHECRs in the universe, and thus the observed UHECR intensity reflects only the emissivity in the nearby universe within hundreds of Mpc. However, UHECRs in the distant universe, through interactions with the cosmic background photons, produce UHE electrons and gamma rays that in turn initiate electromagnetic cascades on cosmic background photons. This secondary cascade radiation forms part of the extragalactic diffuse GeV-TeV gamma-ray radiation and, unlike the original UHECRs, is observable. Motivated by new measurements of the extragalactic diffuse gamma-ray background radiation by the Fermi Large Area Telescope, we obtained an upper limit on the UHECR emissivity in the distant universe by requiring that the cascade radiation they produce not exceed the observed levels. By comparison with the gamma-ray emissivity of candidate UHECR sources (such as gamma-ray bursts (GRBs) and active galactic nuclei) at high redshifts, we find that the obtained upper limit for a flat proton spectrum is ≈10^1.5 times larger than the gamma-ray emissivity of GRBs and ≈10 times smaller than the gamma-ray emissivity of BL Lac objects. In the case of an iron nuclei composition, the derived upper limit on the UHECR emissivity is a factor of 3-5 higher. A robust upper limit on the cosmogenic neutrino flux is further obtained, which is marginally reachable by the IceCube detector and the next-generation detector JEM-EUSO.

  11. Cosmic Rays at Earth

    NASA Astrophysics Data System (ADS)

    Grieder, P. K. F.

    In 1912 Victor Franz Hess made the revolutionary discovery that ionizing radiation is incident upon the Earth from outer space. He showed with ground-based and balloon-borne detectors that the intensity of the radiation did not change significantly between day and night. Consequently, the sun could not be regarded as the source of this radiation, and the question of its origin remained unanswered. Today, almost one hundred years later, the question of the origin of the cosmic radiation still remains a mystery. Hess' discovery has given an enormous impetus to large areas of science, in particular to physics, and has played a major role in the formation of our current understanding of universal evolution. For example, the development of new fields of research such as elementary particle physics, modern astrophysics and cosmology are direct consequences of this discovery. Over the years the field of cosmic ray research has evolved in various directions: Firstly, the field of particle physics that was initiated by the discovery of many so-called elementary particles in the cosmic radiation. There is a strong trend from the accelerator physics community to reenter the field of cosmic ray physics, now under the name of astroparticle physics. Secondly, an important branch of cosmic ray physics that has rapidly evolved in conjunction with space exploration concerns the low energy portion of the cosmic ray spectrum. Thirdly, the branch of research that is concerned with the origin, acceleration and propagation of the cosmic radiation represents a great challenge for astrophysics, astronomy and cosmology. Presently very popular fields of research have rapidly evolved, such as high-energy gamma ray and neutrino astronomy. In addition, high-energy neutrino astronomy may soon initiate, as a likely spin-off, neutrino tomography of the Earth and thus open a unique new branch of geophysical research of the interior of the Earth. Finally, of considerable interest are the biological and medical aspects of the cosmic radiation because of its ionizing character and the inevitable irradiation to which we are exposed. This book is a reference manual for researchers and students of cosmic ray physics and associated fields and phenomena. It is not intended to be a tutorial. However, the book contains an adequate amount of background material, so its content should be useful to a broad community of scientists and professionals. The present book chiefly contains a data collection in compact form that covers the cosmic radiation in the vicinity of the Earth, in the Earth's atmosphere, at sea level and underground. Included are predominantly experimental but also theoretical data. In addition the book contains related data, definitions and important relations. The aim of this book is to offer the reader in a single volume a readily available comprehensive set of data that will save them the need for frequent time-consuming literature searches.

  12. The Galactic Center observed with H.E.S.S.

    NASA Astrophysics Data System (ADS)

    Jouvin, Lea

    2017-08-01

    The Galactic Center region has been a prime target for H.E.S.S. Imaging Atmospheric Cherenkov Telescope Array observations since data taking started in 2003. H.E.S.S. has revealed the presence of very high energy gamma-ray diffuse emission in the central 200 pc, in addition to the detection of a point-like source coincident with the supermassive black hole Sgr A*. With more than 250 hours of H.E.S.S. data and the continuous improvement of the analysis techniques, a detailed morphological and spectral analysis of the region is now possible. We will report on the new characterisation of the spectrum of the central source down to 100 GeV energies, taking advantage of the H.E.S.S. II data obtained after the inclusion of the large 28-meter CT5 telescope at the array centre. We will present the recent discovery of a powerful cosmic PeVatron accelerator at the center of our Galaxy as well as a new characterisation of the diffuse gamma-ray emission in the central 200 pc of our Galaxy through a detailed morphology study. Analysis of the nature of the various components of this emission revealed the existence of a strong cosmic-ray gradient and thus the presence of a strong cosmic-ray accelerator at the very centre of our Galaxy. We will also report on the discovery of an additional point-like source, HESS J1746-285, in this region, possibly associated with the pulsar wind nebula candidate G0.13-0.11.

  13. Copula based flexible modeling of associations between clustered event times.

    PubMed

    Geerdens, Candida; Claeskens, Gerda; Janssen, Paul

    2016-07-01

    Multivariate survival data are characterized by the presence of correlation between event times within the same cluster. First, we build multi-dimensional copulas with flexible and possibly asymmetric dependence structures for such data. In particular, clustered right-censored survival data are modeled using mixtures of max-infinitely divisible bivariate copulas. Second, these copulas are fitted by a likelihood approach in which the many copula derivatives appearing in the likelihood are approximated by finite differences. Third, we formulate conditions for clustered right-censored survival data under which an information criterion for model selection is either weakly consistent or consistent. Several of the familiar selection criteria are included. A set of four-dimensional data on time-to-mastitis is used to demonstrate the developed methodology.
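
    The finite-difference approximation to copula derivatives can be checked in the bivariate case, where densities are available in closed form; a sketch for the Clayton copula (our choice for illustration, not necessarily the copula family used in the paper):

```python
# A sketch of the finite-difference approximation to copula derivatives,
# checked on a bivariate Clayton copula whose density is known in closed
# form (the Clayton choice is ours, for illustration only).
import numpy as np

def clayton_cdf(u, v, theta):
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

def density_fd(C, u, v, theta, h=1e-5):
    """c(u, v) = d^2 C / du dv via a central finite difference."""
    return (C(u + h, v + h, theta) - C(u + h, v - h, theta)
            - C(u - h, v + h, theta) + C(u - h, v - h, theta)) / (4.0 * h * h)

def clayton_pdf(u, v, theta):
    return ((1.0 + theta) * (u * v) ** (-theta - 1.0)
            * (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta - 2.0))

u, v, theta = 0.3, 0.6, 2.0
print(density_fd(clayton_cdf, u, v, theta))   # finite-difference value
print(clayton_pdf(u, v, theta))               # exact density, to compare
```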

  14. The Topp-Leone generalized Rayleigh cure rate model and its application

    NASA Astrophysics Data System (ADS)

    Nanthaprut, Pimwarat; Bodhisuwan, Winai; Patummasut, Mena

    2017-11-01

    The cure rate model is a survival model that accounts for a proportion of censored subjects who will never experience the event of interest. In clinical trials, data representing the time to recurrence or death of patients are used to evaluate the efficacy of treatments. Each dataset can be separated into two groups: censored and uncensored data. In this work, a new mixture cure rate model is introduced based on the Topp-Leone generalized Rayleigh distribution. The Bayesian approach is employed to estimate its parameters. In addition, a breast cancer dataset is analyzed for model illustration purposes. According to the deviance information criterion, the Topp-Leone generalized Rayleigh cure rate model shows better results than the Weibull and exponential cure rate models.
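
    For orientation, the mixture cure rate likelihood underlying such models can be sketched with a Weibull latency distribution and maximum likelihood standing in for the paper's Topp-Leone generalized Rayleigh model and Bayesian fitting; all data and values below are simulated.

```python
# A minimal sketch of the mixture cure rate likelihood: the population
# survival is S_pop(t) = pi + (1 - pi) * S_u(t), so a cured fraction pi
# never experiences the event. A Weibull latency and maximum likelihood
# stand in for the paper's Topp-Leone generalized Rayleigh / Bayesian
# treatment; all data are simulated.
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, time, event):
    logit_pi, log_k, log_lam = params
    pi = 1.0 / (1.0 + np.exp(-logit_pi))
    k, lam = np.exp(log_k), np.exp(log_lam)
    s_u = np.exp(-(time / lam) ** k)                  # Weibull survival
    f_u = (k / lam) * (time / lam) ** (k - 1) * s_u   # Weibull density
    ll = np.where(event, np.log((1 - pi) * f_u),      # observed failures
                  np.log(pi + (1 - pi) * s_u))        # censored subjects
    return -ll.sum()

rng = np.random.default_rng(5)
n, pi_true = 1000, 0.3
cured = rng.random(n) < pi_true
t_lat = 2.0 * rng.weibull(1.5, n)                     # latency times
t_true = np.where(cured, np.inf, t_lat)
cens = rng.uniform(0.0, 8.0, n)
time, event = np.minimum(t_true, cens), t_true <= cens

fit = minimize(neg_loglik, [0.0, 0.0, 0.0], args=(time, event),
               method="Nelder-Mead")
print("estimated cured fraction:", 1.0 / (1.0 + np.exp(-fit.x[0])))
```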

  15. Permutational distribution of the log-rank statistic under random censorship with applications to carcinogenicity assays.

    PubMed

    Heimann, G; Neuhaus, G

    1998-03-01

    In the random censorship model, the log-rank test is often used for comparing a control group with different dose groups. If the number of tumors is small, so-called exact methods are often applied for computing critical values from a permutational distribution. Two of these exact methods are discussed and shown to be incorrect. The correct permutational distribution is derived and studied with respect to its behavior under unequal censoring in the light of recent results proving that the permutational version and the unconditional version of the log-rank test are asymptotically equivalent even under unequal censoring. The log-rank test is studied by simulations of a realistic scenario from a bioassay with small numbers of tumors.
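
    In the same spirit, a label-permutation version of the log-rank test can be sketched directly from the statistic's definition; this is a generic illustration on simulated data, not the paper's exact permutational construction.

```python
# A generic sketch of a label-permutation log-rank test on right-censored
# data (simulated). The unstandardized O - E statistic is implemented
# directly from its definition; any statistic yields a valid permutation
# test when the labels are exchangeable under the null.
import numpy as np

def logrank_stat(time, event, group):
    obs, exp = 0.0, 0.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        deaths = (time == t) & (event == 1)
        obs += (deaths & group).sum()
        exp += deaths.sum() * (at_risk & group).sum() / at_risk.sum()
    return obs - exp                       # O - E for the treated group

rng = np.random.default_rng(6)
n = 60
group = rng.permutation(np.arange(n) < 20)           # 20 treated, 40 control
t_true = rng.exponential(np.where(group, 3.0, 2.0))
cens = rng.uniform(0.0, 5.0, n)                      # random censorship
time, event = np.minimum(t_true, cens), (t_true <= cens).astype(int)

t_obs = logrank_stat(time, event, group)
perm = np.array([logrank_stat(time, event, rng.permutation(group))
                 for _ in range(2000)])
print("two-sided permutation p-value:", np.mean(np.abs(perm) >= abs(t_obs)))
```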

  16. Fast iterative censoring CFAR algorithm for ship detection from SAR images

    NASA Astrophysics Data System (ADS)

    Gu, Dandan; Yue, Hui; Zhang, Yuan; Gao, Pengcheng

    2017-11-01

    Ship detection is one of the essential techniques for ship recognition from synthetic aperture radar (SAR) images. This paper presents a fast iterative detection procedure to eliminate the influence of target returns on the estimation of local sea clutter distributions for constant false alarm rate (CFAR) detectors. A fast block detector is first employed to extract potential target sub-images; then, an iterative censoring CFAR algorithm is used to detect ship candidates from each target block adaptively and efficiently, where parallel detection is available and the statistical parameters of the G0 distribution, which fits local sea clutter well, can be estimated quickly via an integral image operator. Experimental results on TerraSAR-X images demonstrate the effectiveness of the proposed technique.
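
    A minimal sketch of the iterative-censoring idea follows, with two simplifying assumptions relative to the paper: a two-parameter Gaussian CFAR test stands in for the G0 clutter model, and the integral image (cumulative sums) supplies fast local means and variances.

```python
# A minimal sketch of iterative censoring for CFAR detection on a single
# intensity image. Simplifications relative to the paper: a two-parameter
# (mean/variance) Gaussian CFAR test replaces the G0 clutter model, and
# an integral image provides the fast local statistics.
import numpy as np

def local_mean(img, r):
    """Mean over a (2r+1)x(2r+1) window via an integral image."""
    ii = np.pad(img, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    H, W = img.shape
    y, x = np.mgrid[0:H, 0:W]
    y0, y1 = np.clip(y - r, 0, H), np.clip(y + r + 1, 0, H)
    x0, x1 = np.clip(x - r, 0, W), np.clip(x + r + 1, 0, W)
    window_sum = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
    return window_sum / ((y1 - y0) * (x1 - x0))

rng = np.random.default_rng(7)
img = rng.rayleigh(1.0, (256, 256))         # sea-clutter background
img[100:104, 100:104] += 12.0               # bright ship-like target

detected = np.zeros(img.shape, dtype=bool)
for _ in range(3):                          # iterative censoring loop
    work = img.copy()
    work[detected] = img[~detected].mean()  # censor pixels detected so far
    mu = local_mean(work, 8)
    var = local_mean(work ** 2, 8) - mu ** 2
    detected = img > mu + 5.0 * np.sqrt(np.maximum(var, 1e-12))
print("detected pixels:", detected.sum())
```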

  17. A search for anisotropy in the arrival directions of ultra high energy cosmic rays recorded at the Pierre Auger Observatory

    NASA Astrophysics Data System (ADS)

    Pierre Auger Collaboration; Abreu, P.; Aglietta, M.; Ahlers, M.; Ahn, E. J.; Albuquerque, I. F. M.; Allard, D.; Allekotte, I.; Allen, J.; Allison, P.; Almela, A.; Alvarez Castillo, J.; Alvarez-Muñiz, J.; Ambrosio, M.; Aminaei, A.; Anchordoqui, L.; Andringa, S.; Antici'c, T.; Aramo, C.; Arganda, E.; Arqueros, F.; Asorey, H.; Assis, P.; Aublin, J.; Ave, M.; Avenier, M.; Avila, G.; Bäcker, T.; Badescu, A. M.; Balzer, M.; Barber, K. B.; Barbosa, A. F.; Bardenet, R.; Barroso, S. L. C.; Baughman, B.; Bäuml, J.; Beatty, J. J.; Becker, B. R.; Becker, K. H.; Bellétoile, A.; Bellido, J. A.; BenZvi, S.; Berat, C.; Bertou, X.; Biermann, P. L.; Billoir, P.; Blanco, F.; Blanco, M.; Bleve, C.; Blümer, H.; Bohácová, M.; Boncioli, D.; Bonifazi, C.; Bonino, R.; Borodai, N.; Brack, J.; Brancus, I.; Brogueira, P.; Brown, W. C.; Bruijn, R.; Buchholz, P.; Bueno, A.; Burton, R. E.; Caballero-Mora, K. S.; Caccianiga, B.; Caramete, L.; Caruso, R.; Castellina, A.; Catalano, O.; Cataldi, G.; Cazon, L.; Cester, R.; Chauvin, J.; Cheng, S. H.; Chiavassa, A.; Chinellato, J. A.; Chirinos Diaz, J.; Chudoba, J.; Cilmo, M.; Clay, R. W.; Coluccia, M. R.; Conceição, R.; Contreras, F.; Cook, H.; Cooper, M. J.; Coppens, J.; Cordier, A.; Coutu, S.; Covault, C. E.; Creusot, A.; Criss, A.; Cronin, J.; Curutiu, A.; Dagoret-Campagne, S.; Dallier, R.; Dasso, S.; Daumiller, K.; Dawson, B. R.; de Almeida, R. M.; De Domenico, M.; De Donato, C.; de Jong, S. J.; De La Vega, G.; de Mello Junior, W. J. M.; de Mello Neto, J. R. T.; De Mitri, I.; de Souza, V.; de Vries, K. D.; del Peral, L.; del Río, M.; Deligny, O.; Dembinski, H.; Dhital, N.; Di Giulio, C.; Díaz Castro, M. L.; Diep, P. N.; Diogo, F.; Dobrigkeit, C.; Docters, W.; D'Olivo, J. C.; Dong, P. N.; Dorofeev, A.; dos Anjos, J. C.; Dova, M. T.; D'Urso, D.; Dutan, I.; Ebr, J.; Engel, R.; Erdmann, M.; Escobar, C. O.; Espadanal, J.; Etchegoyen, A.; Facal San Luis, P.; Fajardo Tapia, I.; Falcke, H.; Farrar, G.; Fauth, A. C.; Fazzini, N.; Ferguson, A. P.; Fick, B.; Filevich, A.; Filipcic, A.; Fliescher, S.; Fracchiolla, C. E.; Fraenkel, E. D.; Fratu, O.; Fröhlich, U.; Fuchs, B.; Gaior, R.; Gamarra, R. F.; Gambetta, S.; García, B.; Garcia Roca, S. T.; Garcia-Gamez, D.; Garcia-Pinto, D.; Gascon, A.; Gemmeke, H.; Ghia, P. L.; Giaccari, U.; Giller, M.; Glass, H.; Gold, M. S.; Golup, G.; Gomez Albarracin, F.; Gómez Berisso, M.; Gómez Vitale, P. F.; Gonçalves, P.; Gonzalez, D.; Gonzalez, J. G.; Gookin, B.; Gorgi, A.; Gouffon, P.; Grashorn, E.; Grebe, S.; Griffith, N.; Grigat, M.; Grillo, A. F.; Guardincerri, Y.; Guarino, F.; Guedes, G. P.; Guzman, A.; Hague, J. D.; Hansen, P.; Harari, D.; Harmsma, S.; Harrison, T. A.; Harton, J. L.; Haungs, A.; Hebbeker, T.; Heck, D.; Herve, A. E.; Hojvat, C.; Hollon, N.; Holmes, V. C.; Homola, P.; Hörandel, J. R.; Horneffer, A.; Horvath, P.; Hrabovský, M.; Huege, T.; Insolia, A.; Ionita, F.; Italiano, A.; Jarne, C.; Jiraskova, S.; Josebachuili, M.; Kadija, K.; Kampert, K. H.; Karhan, P.; Kasper, P.; Kégl, B.; Keilhauer, B.; Keivani, A.; Kelley, J. L.; Kemp, E.; Kieckhafer, R. M.; Klages, H. O.; Kleifges, M.; Kleinfeller, J.; Knapp, J.; Koang, D.-H.; Kotera, K.; Krohm, N.; Krömer, O.; Kruppke-Hansen, D.; Kuehn, F.; Kuempel, D.; Kulbartz, J. K.; Kunka, N.; La Rosa, G.; Lachaud, C.; Lauer, R.; Lautridou, P.; Le Coz, S.; Leão, M. S. A. B.; Lebrun, D.; Lebrun, P.; Leigui de Oliveira, M. 
A.; Letessier-Selvon, A.; Lhenry-Yvon, I.; Link, K.; López, R.; Lopez Agüera, A.; Louedec, K.; Lozano Bahilo, J.; Lu, L.; Lucero, A.; Ludwig, M.; Lyberis, H.; Macolino, C.; Maldera, S.; Mandat, D.; Mantsch, P.; Mariazzi, A. G.; Marin, J.; Marin, V.; Maris, I. C.; Marquez Falcon, H. R.; Marsella, G.; Martello, D.; Martin, L.; Martinez, H.; Martínez Bravo, O.; Mathes, H. J.; Matthews, J.; Matthews, J. A. J.; Matthiae, G.; Maurel, D.; Maurizio, D.; Mazur, P. O.; Medina-Tanco, G.; Melissas, M.; Melo, D.; Menichetti, E.; Menshikov, A.; Mertsch, P.; Meurer, C.; Mi'canovi'c, S.; Micheletti, M. I.; Minaya, I. A.; Miramonti, L.; Molina-Bueno, L.; Mollerach, S.; Monasor, M.; Monnier Ragaigne, D.; Montanet, F.; Morales, B.; Morello, C.; Moreno, E.; Moreno, J. C.; Mostafá, M.; Moura, C. A.; Muller, M. A.; Müller, G.; Münchmeyer, M.; Mussa, R.; Navarra, G.; Navarro, J. L.; Navas, S.; Necesal, P.; Nellen, L.; Nelles, A.; Neuser, J.; Nhung, P. T.; Niechciol, M.; Niemietz, L.; Nierstenhoefer, N.; Nitz, D.; Nosek, D.; Nožka, L.; Nyklicek, M.; Oehlschläger, J.; Olinto, A.; Ortiz, M.; Pacheco, N.; Pakk Selmi-Dei, D.; Palatka, M.; Pallotta, J.; Palmieri, N.; Parente, G.; Parizot, E.; Parra, A.; Pastor, S.; Paul, T.; Pech, M.; Pekala, J.; Pelayo, R.; Pepe, I. M.; Perrone, L.; Pesce, R.; Petermann, E.; Petrera, S.; Petrinca, P.; Petrolini, A.; Petrov, Y.; Petrovic, J.; Pfendner, C.; Piegaia, R.; Pierog, T.; Pieroni, P.; Pimenta, M.; Pirronello, V.; Platino, M.; Ponce, V. H.; Pontz, M.; Porcelli, A.; Privitera, P.; Prouza, M.; Quel, E. J.; Querchfeld, S.; Rautenberg, J.; Ravel, O.; Ravignani, D.; Revenu, B.; Ridky, J.; Riggi, S.; Risse, M.; Ristori, P.; Rivera, H.; Rizi, V.; Roberts, J.; Rodrigues de Carvalho, W.; Rodriguez, G.; Rodriguez Martino, J.; Rodriguez Rojo, J.; Rodriguez-Cabo, I.; Rodríguez-Frías, M. D.; Ros, G.; Rosado, J.; Rossler, T.; Roth, M.; Rouillé-d'Orfeuil, B.; Roulet, E.; Rovero, A. C.; Rühle, C.; Saftoiu, A.; Salamida, F.; Salazar, H.; Salesa Greus, F.; Salina, G.; Sánchez, F.; Santo, C. E.; Santos, E.; Santos, E. M.; Sarazin, F.; Sarkar, B.; Sarkar, S.; Sato, R.; Scharf, N.; Scherini, V.; Schieler, H.; Schiffer, P.; Schmidt, A.; Scholten, O.; Schoorlemmer, H.; Schovancova, J.; Schovánek, P.; Schröder, F.; Schulte, S.; Schuster, D.; Sciutto, S. J.; Scuderi, M.; Segreto, A.; Settimo, M.; Shadkam, A.; Shellard, R. C.; Sidelnik, I.; Sigl, G.; Silva Lopez, H. H.; Sima, O.; 'Smiałkowski, A.; Šmída, R.; Snow, G. R.; Sommers, P.; Sorokin, J.; Spinka, H.; Squartini, R.; Srivastava, Y. N.; Stanic, S.; Stapleton, J.; Stasielak, J.; Stephan, M.; Stutz, A.; Suarez, F.; Suomijärvi, T.; Supanitsky, A. D.; Šuša, T.; Sutherland, M. S.; Swain, J.; Szadkowski, Z.; Szuba, M.; Tapia, A.; Tartare, M.; Taşcău, O.; Tavera Ruiz, C. G.; Tcaciuc, R.; Tegolo, D.; Thao, N. T.; Thomas, D.; Tiffenberg, J.; Timmermans, C.; Tkaczyk, W.; Todero Peixoto, C. J.; Toma, G.; Tomé, B.; Tonachini, A.; Travnicek, P.; Tridapalli, D. B.; Tristram, G.; Trovato, E.; Tueros, M.; Ulrich, R.; Unger, M.; Urban, M.; Valdés Galicia, J. F.; Valiño, I.; Valore, L.; van den Berg, A. M.; Varela, E.; Vargas Cárdenas, B.; Vázquez, J. R.; Vázquez, R. A.; Veberic, D.; Verzi, V.; Vicha, J.; Videla, M.; Villaseñor, L.; Wahlberg, H.; Wahrlich, P.; Wainberg, O.; Walz, D.; Watson, A. A.; Weber, M.; Weidenhaupt, K.; Weindl, A.; Werner, F.; Westerhoff, S.; Whelan, B. 
J.; Widom, A.; Wieczorek, G.; Wiencke, L.; Wilczyńska, B.; Wilczyński, H.; Will, M.; Williams, C.; Winchen, T.; Wommer, M.; Wundheiler, B.; Yamamoto, T.; Yapici, T.; Younk, P.; Yuan, G.; Yushkov, A.; Zamorano, B.; Zas, E.; Zavrtanik, D.; Zavrtanik, M.; Zaw, I.; Zepeda, A.; Zhu, Y.; Zimbres Silva, M.; Ziolkowski, M.

    2012-04-01

    Observations of cosmic ray arrival directions made with the Pierre Auger Observatory have previously provided evidence of anisotropy at the 99% CL using the correlation of ultra high energy cosmic rays (UHECRs) with objects drawn from the Véron-Cetty Véron catalog. In this paper we report on the use of three catalog independent methods to search for anisotropy. The 2pt-L, 2pt+ and 3pt methods, each giving a different measure of self-clustering in arrival directions, were tested on mock cosmic ray data sets to study the impacts of sample size and magnetic smearing on their results, accounting for both angular and energy resolutions. If the sources of UHECRs follow the same large scale structure as ordinary galaxies in the local Universe and if UHECRs are deflected no more than a few degrees, a study of mock maps suggests that these three methods can efficiently respond to the resulting anisotropy with a P-value = 1.0% or smaller with data sets of as few as 100 events. Using data taken from January 1, 2004 to July 31, 2010 we examined the 20, 30, ..., 110 highest energy events with a corresponding minimum energy threshold of about 49.3 EeV. The minimum P-values found were 13.5% using the 2pt-L method, 1.0% using the 2pt+ method and 1.1% using the 3pt method for the highest 100 energy events. In view of the multiple (correlated) scans performed on the data set, these catalog-independent methods do not yield strong evidence of anisotropy in the highest energy cosmic rays.
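
    The logic of these pair-based statistics is easy to prototype: compute a self-clustering measure on the observed arrival directions, then calibrate its P-value against mock isotropic sets. A toy sketch of the simplest variant, a two-point pair count with a Monte Carlo P-value (the real analysis scans over angular and energy thresholds and, crucially, draws its isotropic mocks from the detector exposure rather than from the uniform sphere used here):

        import numpy as np

        def pairwise_angles(ra, dec):
            """All pairwise angular separations (radians) between
            directions given as right ascension / declination (radians)."""
            v = np.stack([np.cos(dec) * np.cos(ra),
                          np.cos(dec) * np.sin(ra),
                          np.sin(dec)], axis=1)
            cosang = np.clip(v @ v.T, -1.0, 1.0)
            iu = np.triu_indices(len(ra), k=1)
            return np.arccos(cosang[iu])

        def twopt(ra, dec, theta_max=np.radians(30.0)):
            """Self-clustering measure: number of pairs closer than theta_max."""
            return np.sum(pairwise_angles(ra, dec) < theta_max)

        def pvalue_isotropy(ra, dec, n_mc=1000, seed=0):
            """P-value of the observed pair count under isotropy."""
            rng = np.random.default_rng(seed)
            obs, n, hits = twopt(ra, dec), len(ra), 0
            for _ in range(n_mc):
                ra_mc = rng.uniform(0.0, 2.0 * np.pi, n)
                dec_mc = np.arcsin(rng.uniform(-1.0, 1.0, n))  # uniform on the sphere
                hits += twopt(ra_mc, dec_mc) >= obs
            return (hits + 1) / (n_mc + 1)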

  18. Recent Ultra High Energy neutrino bounds and multimessenger observations with the Pierre Auger Observatory

    NASA Astrophysics Data System (ADS)

    Zas, Enrique

    2018-01-01

    The overall picture of the highest energy particles produced in the Universe is changing because of measurements made with the Pierre Auger Observatory. Composition studies of cosmic rays point towards an unexpectedly mixed composition of intermediate-mass nuclei, with arrival directions more isotropic than anticipated, which is reshaping the future of the field and underlining the priority of understanding composition at the highest energies. The Observatory is competitive in the search for neutrinos of all flavors above about 100 PeV, looking for very inclined showers produced deep in the atmosphere by neutrinos interacting either in the atmosphere or in the Earth's crust. It covers a large field of view between -85° and 60° declination in equatorial coordinates. Neutrinos are expected because of the existence of ultra high energy cosmic rays. They provide valuable complementary information, their fluxes being sensitive to the primary cosmic ray masses and their directions reflecting the source positions. We report the results of the neutrino search, which provide competitive bounds on neutrino production and strong constraints on a number of production models, including cosmogenic neutrinos due to ultra high energy protons. We also report on two recent contributions of the Observatory to multimessenger studies, searching for correlations of neutrinos both with cosmic rays and with gravitational waves. The correlation of the directions of the highest energy astrophysical neutrinos discovered with IceCube with those of the highest energy cosmic rays detected with the Auger Observatory and the Telescope Array revealed an excess that is not statistically significant and is being monitored. The targeted search for neutrinos correlated with the discovery of the gravitational wave events GW150914 and GW151226 with Advanced LIGO has led to the first bounds on the energy emitted by black hole mergers in ultra-high-energy neutrinos.

  19. Exploring the making of a galactic wind in the starbursting dwarf irregular galaxy IC 10 with LOFAR

    NASA Astrophysics Data System (ADS)

    Heesen, V.; Rafferty, D. A.; Horneffer, A.; Beck, R.; Basu, A.; Westcott, J.; Hindson, L.; Brinks, E.; ChyŻy, K. T.; Scaife, A. M. M.; Brüggen, M.; Heald, G.; Fletcher, A.; Horellou, C.; Tabatabaei, F. S.; Paladino, R.; Nikiel-Wroczyński, B.; Hoeft, M.; Dettmar, R.-J.

    2018-05-01

    Low-mass galaxies are subject to strong galactic outflows in which cosmic rays may play an important role; they can be best traced with low-frequency radio continuum observations, which are less affected by spectral ageing. We present a study of the nearby starburst dwarf irregular galaxy IC 10 using observations at 140 MHz with the Low-Frequency Array (LOFAR), at 1580 MHz with the Very Large Array (VLA), and at 6200 MHz with the VLA and the 100-m Effelsberg telescope. We find that IC 10 has a low-frequency radio halo, which manifests itself as a second component (thick disc) in the minor axis profiles of the non-thermal radio continuum emission at 140 and 1580 MHz. These profiles are then fitted with 1D cosmic ray transport models for pure diffusion and advection. We find that a diffusion model fits best, with a diffusion coefficient of D = (0.4-0.8) × 10^26 (E/GeV)^0.5 cm^2 s^-1, which is at least an order of magnitude smaller than estimates from both anisotropic diffusion and the diffusion length. In contrast, advection models, which cannot be ruled out due to the mild inclination, provide poorer fits but result in advection speeds close to the escape velocity of ≈50 km s^-1, as expected for a cosmic ray-driven wind. Our favoured model with an accelerating wind provides a self-consistent solution, where the magnetic field is in energy equipartition with both the warm neutral and warm ionized medium, with an important contribution from cosmic rays. Consequently, cosmic rays can play a vital role in the launching of galactic winds at the disc-halo interface.
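
    The scale of these numbers can be checked with a back-of-the-envelope transport estimate: in a pure-diffusion picture, cosmic ray electrons travel a distance l_diff ≈ sqrt(D t_syn) before synchrotron losses drain them. A sketch using the standard synchrotron-lifetime scaling (the 1.06 × 10^9 yr prefactor is the commonly quoted value; the field strength and frequency in the example are illustrative assumptions, not values from the paper):

        import numpy as np

        def synchrotron_lifetime_yr(nu_ghz, b_mug):
            # Standard scaling for the radiative lifetime of electrons
            # emitting at frequency nu in a field B:
            # t_syn ~ 1.06e9 yr * (B/uG)^-1.5 * (nu/GHz)^-0.5
            return 1.06e9 * b_mug ** -1.5 * nu_ghz ** -0.5

        def diffusion_length_kpc(d_cm2_s, nu_ghz, b_mug):
            # l_diff = sqrt(D * t_syn), converted from cm to kpc
            t_s = synchrotron_lifetime_yr(nu_ghz, b_mug) * 3.156e7
            return np.sqrt(d_cm2_s * t_s) / 3.086e21

        # e.g. D = 0.6e26 cm^2/s at 140 MHz in an assumed ~10 uG field
        # gives l_diff of order 0.1 kpc, a thin-disc-to-halo scale:
        print(diffusion_length_kpc(0.6e26, 0.14, 10.0))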

  20. A search for anisotropy in the arrival directions of ultra high energy cosmic rays recorded at the Pierre Auger Observatory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abreu, P.

    2012-01-01

    Observations of cosmic ray arrival directions made with the Pierre Auger Observatory have previously provided evidence of anisotropy at the 99% CL using the correlation of ultra high energy cosmic rays (UHECRs) with objects drawn from the Veron-Cetty Veron catalog. In this paper we report on the use of three catalog independent methods to search for anisotropy. The 2pt-L, 2pt+ and 3pt methods, each giving a different measure of self-clustering in arrival directions, were tested on mock cosmic ray data sets to study the impacts of sample size and magnetic smearing on their results, accounting for both angular and energy resolutions. If the sources of UHECRs follow the same large scale structure as ordinary galaxies in the local Universe and if UHECRs are deflected no more than a few degrees, a study of mock maps suggests that these three methods can efficiently respond to the resulting anisotropy with a P-value = 1.0% or smaller with data sets of as few as 100 events. Using data taken from January 1, 2004 to July 31, 2010 we examined the 20, 30, ..., 110 highest energy events with a corresponding minimum energy threshold of about 51 EeV. The minimum P-values found were 13.5% using the 2pt-L method, 1.0% using the 2pt+ method and 1.1% using the 3pt method for the highest 100 energy events. In view of the multiple (correlated) scans performed on the data set, these catalog-independent methods do not yield strong evidence of anisotropy in the highest energy cosmic rays.

  1. Hazard Function Estimation with Cause-of-Death Data Missing at Random

    PubMed Central

    Wang, Qihua; Dinse, Gregg E.; Liu, Chunling

    2010-01-01

    Hazard function estimation is an important part of survival analysis. Interest often centers on estimating the hazard function associated with a particular cause of death. We propose three nonparametric kernel estimators for the hazard function, all of which are appropriate when death times are subject to random censorship and censoring indicators can be missing at random. Specifically, we present a regression surrogate estimator, an imputation estimator, and an inverse probability weighted estimator. All three estimators are uniformly strongly consistent and asymptotically normal. We derive asymptotic representations of the mean squared error and the mean integrated squared error for these estimators and we discuss a data-driven bandwidth selection method. A simulation study, conducted to assess finite sample behavior, demonstrates that the proposed hazard estimators perform relatively well. We illustrate our methods with an analysis of some vascular disease data. PMID:22267874
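
    Of the three estimators, the inverse-probability-weighted one is the most direct to write down: deaths whose cause indicator is observed are upweighted by the inverse probability of being observed, and the weighted Nelson-Aalen jumps are smoothed with a kernel. A minimal sketch (the function name, the Epanechnikov kernel, and a pre-estimated observation probability pi are illustrative choices, not the authors' code):

        import numpy as np

        def ipw_kernel_hazard(t_grid, time, death, cause1, observed, pi, h=1.0):
            """Smoothed Nelson-Aalen estimate of the cause-1 hazard with
            cause-of-death indicators missing at random.
            death:    1 if a death occurred (any cause), 0 if censored
            cause1:   1 if the death was from the cause of interest
                      (meaningful only when observed == 1)
            observed: 1 if the cause indicator is not missing
            pi:       P(observed | covariates), assumed pre-estimated"""
            def kern(u):                              # Epanechnikov kernel
                return 0.75 * np.clip(1.0 - u * u, 0.0, None)
            # size of the risk set at each subject's own time
            at_risk = np.array([(time >= ti).sum() for ti in time], dtype=float)
            # IPW: observed-cause deaths stand in for the missing ones
            jump = death * observed * cause1 / (pi * at_risk)
            return np.array([(kern((t - time) / h) / h * jump).sum()
                             for t in t_grid])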

  2. Detecting and describing preventive intervention effects in a universal school-based randomized trial targeting delinquent and violent behavior.

    PubMed

    Stoolmiller, M; Eddy, J M; Reid, J B

    2000-04-01

    This study examined theoretical, methodological, and statistical problems involved in evaluating the outcome of aggression on the playground for a universal preventive intervention for conduct disorder. Moderately aggressive children were hypothesized to be the most likely to benefit. Aggression was measured on the playground using observers blind to the group status of the children. Behavior was micro-coded in real time to minimize potential expectancy biases. The effectiveness of the intervention was strongly related to initial levels of aggressiveness: the most aggressive children improved the most. Models that incorporated corrections for low reliability (the ratio of variance due to true time-stable individual differences to total variance) and censoring (a floor effect in the rate data due to short periods of observation) obtained effect sizes 5 times larger than models without such corrections for children who were initially 2 SDs above the mean on aggressiveness.

  3. Estimation of a monotone percentile residual life function under random censorship.

    PubMed

    Franco-Pereira, Alba M; de Uña-Álvarez, Jacobo

    2013-01-01

    In this paper, we introduce a new estimator of a percentile residual life function with censored data under a monotonicity constraint. Specifically, it is assumed that the percentile residual life is a decreasing function, an assumption that is useful when estimating the percentile residual life of units that degenerate with age. We establish a law of the iterated logarithm for the proposed estimator and its √n-equivalence to the unrestricted estimator. The asymptotic normal distribution of the estimator and its strong approximation to a Gaussian process are also established. We investigate the finite sample performance of the monotone estimator in an extensive simulation study. Finally, data from a clinical trial in primary biliary cirrhosis of the liver are analyzed with the proposed methods. One of the conclusions of our work is that the restricted estimator may be much more efficient than the unrestricted one. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
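
    The unrestricted estimator is a plug-in on the Kaplan-Meier curve: the p-percentile residual life at t is q_p(t) = S^{-1}((1 - p) S(t)) - t. A toy sketch of that plug-in plus a crude running-minimum projection onto decreasing functions (the paper's restricted estimator is constructed differently; ties in the survival times are ignored here for brevity):

        import numpy as np

        def kaplan_meier(time, event):
            """KM survival estimates at the observed event times
            (assumes no tied times, purely for brevity)."""
            order = np.argsort(time)
            t, d = time[order], event[order]
            n = len(t)
            s, times, surv = 1.0, [], []
            for i in range(n):
                if d[i]:
                    s *= 1.0 - 1.0 / (n - i)
                    times.append(t[i])
                    surv.append(s)
            return np.array(times), np.array(surv)

        def percentile_residual_life(time, event, p, t_grid):
            """Plug-in q_p(t) = S^{-1}((1 - p) * S(t)) - t from KM, plus a
            running-minimum projection enforcing a decreasing function."""
            times, surv = kaplan_meier(time, event)
            def S(t):
                below = surv[times <= t]
                return below[-1] if below.size else 1.0
            def S_inv(u):   # smallest time where the KM curve reaches u
                idx = np.where(surv <= u)[0]
                return times[idx[0]] if idx.size else np.inf
            q = np.array([S_inv((1.0 - p) * S(t)) - t for t in t_grid])
            return q, np.minimum.accumulate(q)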

  4. A possible signature of annihilating dark matter

    NASA Astrophysics Data System (ADS)

    Chan, Man Ho

    2018-02-01

    In this article, we report a new signature of dark matter annihilation based on radio continuum data of the galaxy NGC 1569 collected over the past few decades. After eliminating the thermal contribution to the radio signal, an abrupt change in the spectral index appears in the radio spectrum. Previously, this signature was interpreted as evidence of a convective outflow of cosmic rays. However, we show that the cosmic ray contribution is not enough to account for the observed radio flux. We then find that if dark matter annihilates via the 4-e channel with the thermal relic cross-section, the electrons and positrons produced would emit a strong radio flux in excellent agreement with the observed signature. The best-fitting dark matter mass is 25 GeV.

  5. FIRST ULTRAVIOLET REFLECTANCE SPECTRA OF PLUTO AND CHARON BY THE HUBBLE SPACE TELESCOPE COSMIC ORIGINS SPECTROGRAPH: DETECTION OF ABSORPTION FEATURES AND EVIDENCE FOR TEMPORAL CHANGE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stern, S. A.; Spencer, J. R.; Shinn, A.

    We have observed the mid-UV spectra of both Pluto and its large satellite, Charon, at two rotational epochs using the Hubble Space Telescope (HST) Cosmic Origins Spectrograph (COS) in 2010. These are the first HST/COS measurements of Pluto and Charon. Here we describe the observations and our reduction of them, and present the albedo spectra, average mid-UV albedos, and albedo slopes we derive from these data. These data reveal evidence for a strong absorption feature in the mid-UV spectrum of Pluto; evidence for temporal change in Pluto's spectrum since the 1990s is reported, and indirect evidence for a near-UV spectral absorption on Charon is also reported.

  6. A realistic treatment of geomagnetic Cherenkov radiation from cosmic ray air showers

    NASA Astrophysics Data System (ADS)

    Werner, Klaus; de Vries, Krijn D.; Scholten, Olaf

    2012-09-01

    We present a macroscopic calculation of coherent electromagnetic radiation from air showers initiated by ultra-high energy cosmic rays, based on currents obtained from three-dimensional Monte Carlo simulations of air showers in a realistic geomagnetic field. We discuss the importance of a correct treatment of the index of refraction in air, given by the law of Gladstone and Dale, which affects the pulses enormously for certain configurations compared to a simplified treatment using a constant index. In particular, we predict geomagnetic Cherenkov radiation, which provides strong signals at high frequencies (GHz) for certain geometries, together with "normal radiation" from the shower maximum, leading to a double-peak structure in the frequency spectrum. We also provide some information about the numerical procedures, referred to as EVA 1.0.

  7. A consistent NPMLE of the joint distribution function with competing risks data under the dependent masking and right-censoring model.

    PubMed

    Li, Jiahui; Yu, Qiqing

    2016-01-01

    Dinse (Biometrics, 38:417-431, 1982) provides a special type of right-censored and masked competing risks data and proposes a non-parametric maximum likelihood estimator (NPMLE) and a pseudo MLE of the joint distribution function [Formula: see text] with such data. However, their asymptotic properties have not been studied so far. Under the extension of either the conditional masking probability (CMP) model or the random partition masking (RPM) model (Yu and Li, J Nonparametr Stat 24:753-764, 2012), we show that (1) Dinse's estimators are consistent if [Formula: see text] takes on finitely many values and each point in the support set of [Formula: see text] can be observed; (2) if the failure time is continuous, the NPMLE is not uniquely determined, and the standard approach (which puts weights only on one element in each observed set) leads to an inconsistent NPMLE; (3) in general, Dinse's estimators are not consistent even under the discrete assumption; (4) we construct a consistent NPMLE. Consistency is established under a new model, called the dependent masking and right-censoring model; the CMP and RPM models are indeed special cases of it. We compare our estimator to Dinse's estimators through simulation and real data. The simulation study indicates that the consistent NPMLE is a good approximation to the underlying distribution for moderate sample sizes.

  8. Comparing two correlated C indices with right-censored survival outcome: a one-shot nonparametric approach

    PubMed Central

    Kang, Le; Chen, Weijie; Petrick, Nicholas A.; Gallas, Brandon D.

    2014-01-01

    The area under the receiver operating characteristic (ROC) curve (AUC) is often used as a summary index of diagnostic ability in evaluating biomarkers when the clinical outcome (truth) is binary. When the clinical outcome is a right-censored survival time, the C index, motivated as an extension of the AUC, has been proposed by Harrell as a measure of concordance between a predictive biomarker and the right-censored survival outcome. In this work, we investigate methods for the statistical comparison of two diagnostic or predictive systems, which could be either two biomarkers or two fixed algorithms, in terms of their C indices. We adopt a U-statistics-based C estimator that is asymptotically normal and develop a nonparametric analytical approach to estimate the variance of the C estimator and the covariance of two C estimators. A z-score test is then constructed to compare the two C indices. We validate our one-shot nonparametric method via simulation studies in terms of the type I error rate and power. We also compare our one-shot method with resampling methods including the jackknife and the bootstrap. Simulation results show that the proposed one-shot method provides almost unbiased variance estimations and has satisfactory type I error control and power. Finally, we illustrate the use of the proposed method with an example from the Framingham Heart Study. PMID:25399736
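
    Harrell's C itself is a simple count over the usable pairs (those where the shorter follow-up time is an observed event). The paper derives an analytic U-statistic variance and covariance for its one-shot test; the sketch below substitutes a paired bootstrap for that variance, which preserves the correlation between the two indices but is not the authors' estimator (the marker is assumed to be a risk score, higher meaning shorter expected survival):

        import numpy as np

        def harrell_c(marker, time, event):
            """Harrell's C: fraction of usable pairs that the risk marker
            orders correctly (higher marker -> shorter survival)."""
            conc, usable = 0.0, 0
            n = len(time)
            for i in range(n):
                for j in range(n):
                    # usable pair: the earlier time is an observed event
                    if event[i] and time[i] < time[j]:
                        usable += 1
                        conc += (marker[i] > marker[j]) + 0.5 * (marker[i] == marker[j])
            return conc / usable

        def compare_c(m1, m2, time, event, n_boot=2000, seed=0):
            """Paired-bootstrap z-test for the difference of two correlated
            C indices computed on the same subjects."""
            rng = np.random.default_rng(seed)
            d_obs = harrell_c(m1, time, event) - harrell_c(m2, time, event)
            diffs = np.empty(n_boot)
            n = len(time)
            for b in range(n_boot):
                idx = rng.integers(0, n, n)     # resample subjects jointly
                diffs[b] = (harrell_c(m1[idx], time[idx], event[idx])
                            - harrell_c(m2[idx], time[idx], event[idx]))
            z = d_obs / diffs.std(ddof=1)
            return d_obs, z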

  9. Quantifying and comparing dynamic predictive accuracy of joint models for longitudinal marker and time-to-event in presence of censoring and competing risks.

    PubMed

    Blanche, Paul; Proust-Lima, Cécile; Loubère, Lucie; Berr, Claudine; Dartigues, Jean-François; Jacqmin-Gadda, Hélène

    2015-03-01

    Thanks to the growing interest in personalized medicine, joint modeling of longitudinal marker and time-to-event data has recently started to be used to derive dynamic individual risk predictions. Individual predictions are called dynamic because they are updated when information on the subject's health profile grows with time. We focus in this work on statistical methods for quantifying and comparing dynamic predictive accuracy of this kind of prognostic models, accounting for right censoring and possibly competing events. Dynamic area under the ROC curve (AUC) and Brier Score (BS) are used to quantify predictive accuracy. Nonparametric inverse probability of censoring weighting is used to estimate dynamic curves of AUC and BS as functions of the time at which predictions are made. Asymptotic results are established and both pointwise confidence intervals and simultaneous confidence bands are derived. Tests are also proposed to compare the dynamic prediction accuracy curves of two prognostic models. The finite sample behavior of the inference procedures is assessed via simulations. We apply the proposed methodology to compare various prediction models using repeated measures of two psychometric tests to predict dementia in the elderly, accounting for the competing risk of death. Models are estimated on the French Paquid cohort and predictive accuracies are evaluated and compared on the French Three-City cohort. © 2014, The International Biometric Society.
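
    For the simpler setting of plain right censoring (no competing risks), the IPCW Brier score at a horizon t weights subjects who failed before t by 1/G(T_i) and subjects still at risk by 1/G(t), where G is the Kaplan-Meier estimate of the censoring distribution. A minimal sketch under those assumptions (untied times, and G evaluated at rather than just before the failure times):

        import numpy as np

        def censoring_km(time, event):
            """Kaplan-Meier estimate G of the censoring distribution
            (roles of events and censorings swapped); assumes untied times."""
            order = np.argsort(time)
            t, cens = time[order], 1 - event[order]
            n = len(t)
            s, surv = 1.0, np.empty(n)
            for i in range(n):
                if cens[i]:
                    s *= 1.0 - 1.0 / (n - i)
                surv[i] = s
            def G(x):
                i = np.searchsorted(t, x, side='right') - 1
                return surv[i] if i >= 0 else 1.0
            return G

        def ipcw_brier(t, time, event, surv_pred):
            """IPCW Brier score at horizon t; surv_pred[i] is the model's
            predicted P(T_i > t)."""
            G = censoring_km(time, event)
            Gi = np.maximum([G(ti) for ti in time], 1e-12)
            died = ((time <= t) & (event == 1)).astype(float)
            alive = (time > t).astype(float)
            return np.mean(died / Gi * (0.0 - surv_pred) ** 2
                           + alive / max(G(t), 1e-12) * (1.0 - surv_pred) ** 2)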

  10. On prognostic models, artificial intelligence and censored observations.

    PubMed

    Anand, S S; Hamilton, P W; Hughes, J G; Bell, D A

    2001-03-01

    The development of prognostic models for assisting medical practitioners with decision making is not a trivial task. Models need to possess a number of desirable characteristics, and few, if any, current modelling approaches based on statistics or artificial intelligence can produce models that display all these characteristics. The inability of modelling techniques to provide truly useful models has meant that interest in these models has remained largely academic, which in turn has resulted in only a very small percentage of the models that have been developed being deployed in practice. On the other hand, new modelling paradigms are being proposed continuously within the machine learning and statistical communities, and claims, often based on inadequate evaluation, are being made about their superiority over traditional modelling methods. We believe that for new modelling approaches to deliver true net benefits over traditional techniques, an evaluation-centric approach to their development is essential. In this paper we present such an evaluation-centric approach to developing extensions to the basic k-nearest neighbour (k-NN) paradigm. We use standard statistical techniques to enhance the distance metric used, and a framework based on evidence theory to obtain a prediction for the target example from the outcomes of the retrieved exemplars. We refer to this new k-NN algorithm as Censored k-NN (Ck-NN), reflecting the enhancements made to k-NN that are aimed at handling censored observations.

  11. Likelihood inference for COM-Poisson cure rate model with interval-censored data and Weibull lifetimes.

    PubMed

    Pal, Suvra; Balakrishnan, N

    2017-10-01

    In this paper, we consider a competing cause scenario and assume the number of competing causes to follow a Conway-Maxwell Poisson distribution, which can capture both the over- and under-dispersion usually encountered in discrete data. Assuming that the population of interest has a cured component and that the data are interval-censored, as opposed to the usually considered right-censored data, the main contribution is in developing the steps of the expectation maximization algorithm for the determination of the maximum likelihood estimates of the model parameters of the flexible Conway-Maxwell Poisson cure rate model with Weibull lifetimes. An extensive Monte Carlo simulation study is carried out to demonstrate the performance of the proposed estimation method. Model discrimination within the Conway-Maxwell Poisson distribution is addressed using the likelihood ratio test and information-based criteria to select a suitable competing cause distribution that provides the best fit to the data. A simulation study is also carried out to demonstrate the loss in efficiency when selecting an improper competing cause distribution, which justifies the use of a flexible family of distributions for the number of competing causes. Finally, the proposed methodology and the flexibility of the Conway-Maxwell Poisson distribution are illustrated with two known data sets from the literature: smoking cessation data and breast cosmesis data.
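
    The flexibility referred to here comes from the COM-Poisson's second parameter: P(K = k) is proportional to λ^k / (k!)^ν, with ν < 1 over-dispersed, ν > 1 under-dispersed, and ν = 1 recovering the Poisson. A small sketch of the log-pmf with its truncated normalizing constant (the truncation point kmax is an illustrative choice):

        import numpy as np
        from scipy.special import gammaln, logsumexp

        def com_poisson_logpmf(k, lam, nu, kmax=500):
            """log P(K = k) for the Conway-Maxwell Poisson law,
            P(K = k) proportional to lam^k / (k!)^nu:
            nu < 1 over-dispersed, nu > 1 under-dispersed, nu = 1 Poisson."""
            j = np.arange(kmax + 1)
            log_z = logsumexp(j * np.log(lam) - nu * gammaln(j + 1))
            return k * np.log(lam) - nu * gammaln(k + 1) - log_z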

  12. Rethinking the advantage of zero-HLA mismatches in unrelated living donor kidney transplantation: implications on kidney paired donation.

    PubMed

    Casey, Michael Jin; Wen, Xuerong; Rehman, Shehzad; Santos, Alfonso H; Andreoni, Kenneth A

    2015-04-01

    The OPTN/UNOS Kidney Paired Donation (KPD) Pilot Program allocates priority to zero-HLA mismatches. However, in unrelated living donor kidney transplants (LDKT), the same donor source as in KPD, no study has shown whether zero-HLA mismatches provide any advantage over >0 HLA mismatches. We hypothesize that zero-HLA mismatches among unrelated LDKT do not benefit graft survival. This retrospective SRTR database study analyzed LDKT recipients from 1987 to 2012. Among unrelated LDKT, subjects with zero-HLA mismatches were compared to a 1:1-5 matched (by donor age ±1 year and year of transplantation) control cohort with >0 HLA mismatches. The primary endpoint was death-censored graft survival. Among 32,654 unrelated LDKT recipients, 83 had zero-HLA mismatches and were matched to 407 controls with >0 HLA mismatches. Kaplan-Meier analyses of death-censored graft and patient survival showed no difference between study and control cohorts. In multivariate marginal Cox models, zero-HLA mismatches showed no benefit in death-censored graft survival (HR = 1.46, 95% CI 0.78-2.73) or patient survival (HR = 1.43, 95% CI 0.68-3.01). Our data suggest that in unrelated LDKT, zero-HLA mismatches may not offer any survival advantage. Further study of zero-HLA mismatching is therefore needed to validate its place in the OPTN/UNOS KPD Pilot Program allocation algorithm. © 2014 Steunstichting ESOT.

  13. Estimating the effect of a rare time-dependent treatment on the recurrent event rate.

    PubMed

    Smith, Abigail R; Zhu, Danting; Goodrich, Nathan P; Merion, Robert M; Schaubel, Douglas E

    2018-05-30

    In many observational studies, the objective is to estimate the effect of treatment or state-change on the recurrent event rate. If treatment is assigned after the start of follow-up, traditional methods (eg, adjustment for baseline-only covariates or fully conditional adjustment for time-dependent covariates) may give biased results. We propose a two-stage modeling approach using the method of sequential stratification to accurately estimate the effect of a time-dependent treatment on the recurrent event rate. At the first stage, we estimate the pretreatment recurrent event trajectory using a proportional rates model censored at the time of treatment. Prognostic scores are estimated from the linear predictor of this model and used to match treated patients to as yet untreated controls based on prognostic score at the time of treatment for the index patient. The final model is stratified on matched sets and compares the posttreatment recurrent event rate to the recurrent event rate of the matched controls. We demonstrate through simulation that bias due to dependent censoring is negligible, provided the treatment frequency is low, and we investigate a threshold at which correction for dependent censoring is needed. The method is applied to liver transplant (LT), where we estimate the effect of development of post-LT End Stage Renal Disease (ESRD) on rate of days hospitalized. Copyright © 2018 John Wiley & Sons, Ltd.
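
    The heart of sequential stratification is the matching step: at each treatment time, the newly treated patient is grouped with patients who have a similar prognostic score and have not yet been treated. A schematic sketch of that step only (the prognostic score itself would come from the stage-one proportional rates model censored at treatment, which is not shown; k and the caliper are illustrative):

        import numpy as np

        def sequential_match(score, treat_time, k=5, caliper=0.2):
            """For each treated subject, select up to k controls that are
            still untreated at that subject's treatment time and closest
            in prognostic score (within the caliper). treat_time is NaN
            for the never-treated."""
            matches = {}
            for i in np.where(~np.isnan(treat_time))[0]:
                t0 = treat_time[i]
                # controls: never treated, or treated only after t0
                at_risk = np.where(np.isnan(treat_time) | (treat_time > t0))[0]
                d = np.abs(score[at_risk] - score[i])
                ok = at_risk[d <= caliper]
                order = np.argsort(np.abs(score[ok] - score[i]))
                matches[i] = ok[order[:k]]
            return matches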

  14. Random left censoring: a second look at bone lead concentration measurements

    NASA Astrophysics Data System (ADS)

    Popovic, M.; Nie, H.; Chettle, D. R.; McNeill, F. E.

    2007-09-01

    Bone lead concentrations measured in vivo by x-ray fluorescence (XRF) are subject to left censoring due to the limited precision of the technique at very low concentrations. In the analysis of bone lead measurements, inverse variance weighting (IVW) of measurements is commonly used to estimate the mean of a data set and its standard error, and Student's t-test is used to compare the IVW means of two sets, testing the hypothesis that the two sets are from the same population. This analysis was undertaken to assess the adequacy of IVW in the analysis of bone lead measurements, or to confirm the results of IVW using an independent approach. The rationale is provided for the use of methods of survival data analysis in the study of XRF bone lead measurements. A procedure is provided for bone lead data analysis using the Kaplan-Meier and Nelson-Aalen estimators, and the methodology is outlined for the rank tests that are used to determine whether two censored sets are from the same population. The methods are applied to six data sets acquired in epidemiological studies, and the estimated parameters and test statistics are compared with the results of the IVW approach. It is concluded that the proposed methods of statistical analysis can provide valid inference about bone lead concentrations, but the computed parameters do not differ substantially from those derived by the more widely used method of IVW.
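
    Survival methods apply to left-censored concentrations through a standard flipping trick: subtracting every value from a constant M larger than the maximum turns nondetects (values known only to lie below a detection limit) into right-censored observations, after which Kaplan-Meier machinery works unchanged. A self-contained sketch (untied values assumed; the mean is restricted to the observed range, as with any KM-based mean):

        import numpy as np

        def km_mean_left_censored(x, detected):
            """Mean of left-censored data via the flip y = M - x, which
            turns nondetects (coded at their detection limit, with
            detected == False) into right-censored observations."""
            M = x.max() + 1.0
            y = M - x
            order = np.argsort(y)
            y, d = y[order], detected[order].astype(int)
            n = len(y)
            s, times, surv = 1.0, [0.0], [1.0]
            for i in range(n):
                if d[i]:                      # an uncensored (detected) value
                    s *= 1.0 - 1.0 / (n - i)
                    times.append(y[i])
                    surv.append(s)
            times, surv = np.array(times), np.array(surv)
            # area under the KM step function = restricted mean of y
            mean_y = np.sum(surv[:-1] * np.diff(times))
            return M - mean_y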

  15. Cosmic ray modulation with a Fisk-type heliospheric magnetic field and a latitude-dependent solar wind speed

    NASA Astrophysics Data System (ADS)

    Hitge, M.; Burger, R. A.

    2010-01-01

    The effect of a latitude-dependent solar wind speed on a Fisk heliospheric magnetic field [Fisk, L.A. Motion of the footpoints of heliospheric magnetic field lines at the Sun: implications for recurrent energetic particle events at high heliographic latitudes. J. Geophys. Res. 101, 15547-15553, 1996] was first discussed by Schwadron [Schwadron, N.A. An explanation for strongly underwound magnetic field in co-rotating rarefaction regions and its relationship to footpoint motion on the sun. Geophys. Res. Lett. 29, 1-8, 2002] and by Schwadron and McComas [Schwadron, N.A., McComas, D.J. Heliospheric "FALTS": favored acceleration locations at the termination shock. Geophys. Res. Lett. 30, 41-1, 2003]. Burger and Sello [Burger, R.A., Sello, P.C. The effect on cosmic ray modulation of a Parker field modified by a latitudinal-dependent solar wind speed. Adv. Space Res. 35, 643-646, 2005] found a significant effect for a simplified 2D version of a latitude-dependent Fisk-type field, while Miyake and Yanagita [Miyake, S., Yanagita, S. The effect of a modified Parker field on the modulation of the galactic cosmic rays. In: Proceedings of the 30th International Cosmic Ray Conference, Merida, Mexico, vol. 1, pp. 445-448, 2007] found a smaller effect. The current report improves on a previous attempt by Hitge and Burger [Hitge, M., Burger, R.A. The effect of a latitude-dependent solar wind speed on cosmic-ray modulation in a Fisk-type heliospheric magnetic field. In: Proceedings of the 30th International Cosmic Ray Conference, Merida, Mexico, vol. 1, pp. 449-450, 2007], where the global change in the solar wind speed and not the local speed gradient was emphasized. The sheared Fisk field of Schwadron and McComas [2003] is similar to the current Schwadron-Parker hybrid field. Little difference is found between the effects of a Parker field and a Schwadron-Parker hybrid field on cosmic-ray modulation, in contrast to the results of Burger and Sello [2005] and Miyake and Yanagita [2007]. The two-dimensional approximation used by these authors is therefore inadequate to model the complexities of the actual three-dimensional field. We also show that a Fisk-type field with a latitude-dependent solar wind speed (the Schwadron-Parker hybrid field) decreases both the relative amplitude of recurrent cosmic ray intensity variations and the latitude gradients, and yields similar constants of proportionality for these quantities as in the constant solar wind speed case.

  16. Evaluation of methodology for the analysis of 'time-to-event' data in pharmacogenomic genome-wide association studies.

    PubMed

    Syed, Hamzah; Jorgensen, Andrea L; Morris, Andrew P

    2016-06-01

    The aim was to evaluate the power to detect associations between SNPs and time-to-event outcomes across a range of pharmacogenomic study designs, comparing alternative regression approaches. Simulations were conducted to compare Cox proportional hazards modelling, which accounts for censoring, with logistic regression modelling of a dichotomized outcome at the end of the study. The Cox proportional hazards model was demonstrated to be more powerful than the logistic regression analysis, with the difference in power highly dependent on the rate of censoring. Initial evaluation of single-nucleotide polymorphism association signals using computationally efficient software with dichotomized outcomes provides an effective screening tool for some design scenarios, with important implications for the development of analytical protocols in pharmacogenomic studies.
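
    A simulation of this comparison is straightforward to set up: generate SNP genotypes, draw proportional-hazards event times, impose random censoring, and record how often each test rejects. A sketch under those assumptions (lifelines and statsmodels are used for the two fits; all parameter values are illustrative, and a real GWAS would use a genome-wide alpha such as 5e-8 with far more replicates):

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        from lifelines import CoxPHFitter

        def simulate(n, maf, beta, cens_scale, rng):
            g = rng.binomial(2, maf, n)                # additive SNP coding 0/1/2
            t = rng.exponential(np.exp(-beta * g))     # PH event times
            c = rng.exponential(cens_scale, n)         # random censoring
            return g, np.minimum(t, c), (t <= c).astype(int)

        def power(n_sim=200, n=500, maf=0.3, beta=0.4, cens_scale=1.0,
                  alpha=0.05, seed=1):
            rng = np.random.default_rng(seed)
            hit_cox = hit_logit = 0
            for _ in range(n_sim):
                g, time, event = simulate(n, maf, beta, cens_scale, rng)
                df = pd.DataFrame({'g': g, 'time': time, 'event': event})
                cph = CoxPHFitter().fit(df, 'time', 'event')
                hit_cox += cph.summary.loc['g', 'p'] < alpha
                # dichotomized endpoint: event observed by end of study,
                # censoring ignored
                logit = sm.Logit(event, sm.add_constant(g.astype(float))).fit(disp=0)
                hit_logit += logit.pvalues[1] < alpha
            return hit_cox / n_sim, hit_logit / n_sim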

  17. Global Lightning Response to Forbush Decreases in Short-term

    NASA Astrophysics Data System (ADS)

    Li, H.; Wu, Q.; Wang, C.

    2017-12-01

    During the past three decades, particular scientific attention has been drawn to the potential link between solar activity and global climate change. How the sun modulates the climate has always been controversial. There are three relatively widely accepted mechanisms illustrating this process: total solar irradiance (TSI), solar ultraviolet radiation (SUR), and the space weather mechanism. In the space weather mechanism, the sun influences microphysical processes in clouds by modulating the cosmic ray flux and thus changes the cloud cover, which finally affects the Earth's radiation balance. Unfortunately, the lack of related observations and some contradictory research results make this mechanism rather debatable. In order to provide possible evidence for the space weather mechanism, we study the influence of Forbush decreases (FDs) of the galactic cosmic ray flux on global lightning activity, which to some extent represents the basic process of cosmic ray-atmospheric coupling. We use the daily lightning counts from 1998 to 2014 observed by the LIS sensor aboard the TRMM satellite. Considering the diurnal distribution (more lightning in the afternoon than in the morning) and the seasonal distribution (more in summer than in winter) of lightning activity, as well as the 49-day precession of the TRMM satellite, the daily lightning counts show an intricate periodic fluctuation. We propose a 3-step approach - latitude zone limitation, orbit branch selection, and local time normalization - to eliminate it. The FDs are selected by checking the hourly neutron counts of each month of the 17 years obtained from the Oulu Cosmic Ray Station. During the selection, we choose FDs that are "strong" (a decrease of more than 6%) and "standard" (a strong decrease over a few hours to one day followed by a gradual recovery over about one week) to diminish meteorological influences and other possible disturbances. For both individual case studies and the temporal superposition of several cases, the results illustrate a statistically significant positive correlation between FDs and the daily lightning count, with the lightning count reaching its minimum 2-3 days after the FD onset. In addition, this response is enhanced if we only choose the stronger and more standard FDs. This work has reached the 95% confidence level of a Monte Carlo test.
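
    The "temporal superposition of several cases" is a classic superposed-epoch (Chree) analysis: align the daily series on each FD onset, normalize each window to its pre-event level, and average. A minimal sketch (the window lengths are illustrative; onsets are day indices into the cleaned lightning series):

        import numpy as np

        def superposed_epoch(series, onsets, before=5, after=10):
            """Average a daily time series around event onsets, each
            window normalized to its mean pre-event level."""
            windows = []
            for t0 in onsets:
                if t0 - before >= 0 and t0 + after < len(series):
                    w = series[t0 - before: t0 + after + 1].astype(float)
                    windows.append(w / np.nanmean(w[:before]))
            lags = np.arange(-before, after + 1)   # day 0 = FD onset
            return lags, np.nanmean(windows, axis=0)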

  18. Cosmic variance of the galaxy cluster weak lensing signal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gruen, D.; Seitz, S.; Becker, M. R.

    Intrinsic variations of the projected density profiles of clusters of galaxies at fixed mass are a source of uncertainty for cluster weak lensing. We present a semi-analytical model to account for this effect, based on a combination of variations in halo concentration, ellipticity and orientation, and the presence of correlated haloes. We calibrate the parameters of our model at the 10 per cent level to match the empirical cosmic variance of cluster profiles at M_200m ≈ 10^14…10^15 h^-1 M_⊙, z = 0.25…0.5 in a cosmological simulation. We show that weak lensing measurements of clusters significantly underestimate mass uncertainties if intrinsic profile variations are ignored, and that our model can be used to provide correct mass likelihoods. Effects on the achievable accuracy of weak lensing cluster mass measurements are particularly strong for the most massive clusters and deep observations (with ≈20 per cent uncertainty from cosmic variance alone at M_200m ≈ 10^15 h^-1 M_⊙ and z = 0.25), but significant also under typical ground-based conditions. We show that neglecting intrinsic profile variations leads to biases in the mass-observable relation constrained with weak lensing, both for intrinsic scatter and overall scale (the latter at the 15 per cent level). Furthermore, these biases are in excess of the statistical errors of upcoming surveys and can be avoided if the cosmic variance of cluster profiles is accounted for.

  19. Dark Matter Equation of State through Cosmic History

    NASA Astrophysics Data System (ADS)

    Kopp, Michael; Skordis, Constantinos; Thomas, Daniel B.; Ilić, Stéphane

    2018-06-01

    Cold dark matter is a crucial constituent of the current concordance cosmological model. Having a vanishing equation of state (EOS), its energy density scales with the inverse cosmic volume and is thus uniquely described by a single number, its present abundance. We test the inverse cosmic volume law for dark matter (DM) by allowing its EOS to vary independently in eight redshift bins in the range z = 10^5 and z = 0. We use the latest measurements of the cosmic microwave background radiation from the Planck satellite and supplement them with baryon acoustic oscillation (BAO) data from the 6dF and SDSS-III BOSS surveys and with the Hubble Space Telescope (HST) key project data. We find no evidence for nonzero EOS in any of the eight redshift bins. With Planck data alone, the DM abundance is most strongly constrained around matter-radiation equality, ω_g^eq = 0.1193_{-0.0035}^{+0.0036} (95% C.L.), whereas its present-day value is more weakly constrained: ω_g^(0) = 0.16_{-0.10}^{+0.12} (95% C.L.). Adding BAO or HST data does not significantly change the ω_g^eq constraint, while ω_g^(0) tightens to 0.160_{-0.065}^{+0.069} (95% C.L.) and 0.124_{-0.067}^{+0.081} (95% C.L.), respectively. Our results constrain for the first time the level of "coldness" required of the DM across various cosmological epochs and show that the DM abundance is strictly positive at all times.

  20. Dark Matter Equation of State through Cosmic History.

    PubMed

    Kopp, Michael; Skordis, Constantinos; Thomas, Daniel B; Ilić, Stéphane

    2018-06-01

    Cold dark matter is a crucial constituent of the current concordance cosmological model. Having a vanishing equation of state (EOS), its energy density scales with the inverse cosmic volume and is thus uniquely described by a single number, its present abundance. We test the inverse cosmic volume law for dark matter (DM) by allowing its EOS to vary independently in eight redshift bins in the range z=10^{5} and z=0. We use the latest measurements of the cosmic microwave background radiation from the Planck satellite and supplement them with baryon acoustic oscillation (BAO) data from the 6dF and SDSS-III BOSS surveys and with the Hubble Space Telescope (HST) key project data. We find no evidence for nonzero EOS in any of the eight redshift bins. With Planck data alone, the DM abundance is most strongly constrained around matter-radiation equality ω_{g}^{eq}=0.1193_{-0.0035}^{+0.0036} (95% C.L.), whereas its present-day value is more weakly constrained: ω_{g}^{(0)}=0.16_{-0.10}^{+0.12} (95% C.L.). Adding BAO or HST data does not significantly change the ω_{g}^{eq} constraint, while ω_{g}^{(0)} tightens to 0.160_{-0.065}^{+0.069} (95% C.L.) and 0.124_{-0.067}^{+0.081} (95% C.L.), respectively. Our results constrain for the first time the level of "coldness" required of the DM across various cosmological epochs and show that the DM abundance is strictly positive at all times.

  1. Cosmic variance of the galaxy cluster weak lensing signal

    DOE PAGES

    Gruen, D.; Seitz, S.; Becker, M. R.; ...

    2015-04-13

    Intrinsic variations of the projected density profiles of clusters of galaxies at fixed mass are a source of uncertainty for cluster weak lensing. We present a semi-analytical model to account for this effect, based on a combination of variations in halo concentration, ellipticity and orientation, and the presence of correlated haloes. We calibrate the parameters of our model at the 10 per cent level to match the empirical cosmic variance of cluster profiles at M_200m ≈ 10^14…10^15 h^-1 M_⊙, z = 0.25…0.5 in a cosmological simulation. We show that weak lensing measurements of clusters significantly underestimate mass uncertainties if intrinsic profile variations are ignored, and that our model can be used to provide correct mass likelihoods. Effects on the achievable accuracy of weak lensing cluster mass measurements are particularly strong for the most massive clusters and deep observations (with ≈20 per cent uncertainty from cosmic variance alone at M_200m ≈ 10^15 h^-1 M_⊙ and z = 0.25), but significant also under typical ground-based conditions. We show that neglecting intrinsic profile variations leads to biases in the mass-observable relation constrained with weak lensing, both for intrinsic scatter and overall scale (the latter at the 15 per cent level). Furthermore, these biases are in excess of the statistical errors of upcoming surveys and can be avoided if the cosmic variance of cluster profiles is accounted for.

  2. Terrestrial Planet Finder Interferometer (TPF-1) Whitepaper for the AAAC Exoplanet Task Force

    DTIC Science & Technology

    2007-04-02

    very strong indication of a biological release (Lovelock 1980; Sagan et al. 1993). The three strongest bands in the Earth-analog spectrum, O3 band…

  3. Distance Probes of Dark Energy

    DOE PAGES

    Kim, A. G.; Padmanabhan, N.; Aldering, G.; ...

    2015-03-15

    We present the results from the Distances subgroup of the Cosmic Frontier Community Planning Study (Snowmass 2013). This document summarizes the current state of the field as well as future prospects and challenges. In addition to the established probes using Type Ia supernovae and baryon acoustic oscillations, we also consider prospective methods based on clusters, active galactic nuclei, gravitational wave sirens and strong lensing time delays.

  4. The role of the dark matter haloes on the cosmic star formation rate

    NASA Astrophysics Data System (ADS)

    Pereira, Eduardo S.; Miranda, Oswaldo D.

    2015-11-01

    The cosmic star formation rate (CSFR) represents the fraction of gas that is converted into stars within a certain comoving volume and at a given time t. However, the relationship between the evolution of dark matter haloes and the CSFR is not yet clear. In this context, we have investigated the role of the dark halo mass function (DHMF) in the process of gas conversion into stars. We observed a strong dependence between the fraction of baryons in structures, fb, and the specific mass function used for describing the dark matter haloes. In some cases, we obtained fb greater than one at redshift z = 0. This result indicates that the evolution of dark matter, described by the specific DHMF, cannot trace the baryonic matter without a bias parameter. We also observed that the characteristic time-scale for star formation, τ, is strongly dependent on the considered DHMF when the model is confronted with the observational data. As part of this work, a Python package called 'pycosmicstar' for studying the CSFR and its relationship with the DHMF was released under the GNU General Public License.

  5. The Origin Of Cosmic Rays And The Stars Of Berkeley 87

    NASA Astrophysics Data System (ADS)

    Turner, David G.; Majaess, D. J.; Lane, D. J.; Balam, D. D.

    2010-01-01

    Spectroscopic observations and the results of photometric monitoring are presented for members of the heavily-reddened, young, 1.2 kpc-distant open cluster Berkeley 87, which is spatially coincident with the strongest source of cosmic rays in the northern sky. Many cluster members exhibit evidence for extreme loss of mass over their lifetimes: the M3 Ia supergiant BC Cyg has an evolutionary mass half that of stars at the main-sequence turnoff; the B2 Iabe emission-line supergiant HDE 229059 also has an evolutionary mass smaller than that of the main-sequence turnoff; the WO2 star WR 142, the only example of an oxygen-sequence Wolf-Rayet star in an open cluster, displays evidence for variable, high-velocity winds in its spectrum; the curious object V439 Cyg (B0: Vnne) appears to be an example of a recent binary merger; and the Vatican Emission Star VES 203 (B0.5 Ve) displays a strong P Cygni signature in its Balmer line emission. Heavy mass loss thus appears to be a common factor among cluster stars. Could it be connected with the location of a cosmic ray production factory in the vicinity of Berkeley 87?

  6. Exploring the cosmic evolution of habitability with galaxy merger trees

    NASA Astrophysics Data System (ADS)

    Stanway, E. R.; Hoskin, M. J.; Lane, M. A.; Brown, G. C.; Childs, H. J. T.; Greis, S. M. L.; Levan, A. J.

    2018-04-01

    We combine inferred galaxy properties from a semi-analytic galaxy evolution model incorporating dark matter halo merger trees with new estimates of supernova and gamma-ray burst rates as a function of metallicity from stellar population synthesis models incorporating binary interactions. We use these to explore the stellar-mass fraction of galaxies irradiated by energetic astrophysical transients and its evolution over cosmic time, and thus the fraction which is potentially habitable by life like our own. We find that 18 per cent of the stellar mass in the Universe is likely to have been irradiated within the last 260 Myr, with GRBs dominating that fraction. We do not see a strong dependence of irradiated stellar-mass fraction on stellar mass or richness of the galaxy environment. We consider a representative merger tree as a Local Group analogue, and find that there are galaxies at all masses which have retained a high habitable fraction (>40 per cent) over the last 6 Gyr, but also that there are galaxies at all masses where the merger history and associated star formation have rendered galaxies effectively uninhabitable. This illustrates the need to consider detailed merger trees when evaluating the cosmic evolution of habitability.

  7. Characterization and Physical Explanation of Energetic Particles on Planck HFI Instrument

    NASA Astrophysics Data System (ADS)

    Catalano, A.; Ade, P.; Atik, Y.; Benoit, A.; Bréele, E.; Bock, J. J.; Camus, P.; Charra, M.; Crill, B. P.; Coron, N.; Coulais, A.; Désert, F.-X.; Fauvet, L.; Giraud-Héraud, Y.; Guillaudin, O.; Holmes, W.; Jones, W. C.; Lamarre, J.-M.; Macías-Pérez, J.; Martinez, M.; Miniussi, A.; Monfardini, A.; Pajot, F.; Patanchon, G.; Pelissier, A.; Piat, M.; Puget, J.-L.; Renault, C.; Rosset, C.; Santos, D.; Sauvé, A.; Spencer, L.; Sudiwala, R.

    2014-09-01

    The Planck High Frequency Instrument (HFI) surveyed the sky continuously from the second Lagrangian point (L2) between August 2009 and January 2012. It operates with 52 high-impedance bolometers cooled to 100 mK in a frequency range between 100 GHz and 1 THz with unprecedented sensitivity, but with strong coupling to cosmic radiation. At L2, the particle flux is about 5 and is dominated by protons incident on the spacecraft. Protons with an energy above 40 MeV can penetrate the focal plane unit box, causing two different effects: glitches in the raw data from the direct interaction of cosmic rays with the detectors (producing a data loss of about 15% by the end of the mission), and thermal drifts in the bolometer plate at 100 mK, adding non-Gaussian noise at frequencies below 0.1 Hz. The HFI consortium has made strong efforts to correct for these effects in the time-ordered data and the final Planck maps. This work aims to give a physical explanation of the glitches observed in the HFI instrument in flight. To reach this goal, we performed several ground-based experiments using protons and particles to test the impact of particles on the HFI spare bolometers, with better control of the environmental conditions than for the in-flight data. We have shown that the dominant part of the glitches observed in the data comes from the impact of cosmic rays on the silicon die frame supporting the micro-machined bolometric detectors, propagating energy mainly by ballistic phonons and by thermal diffusion. The implications of these results for future satellite missions are discussed.

  8. Exposure data from multi-application, multi-industry maintenance of surfaces and joints sealed with asbestos-containing gaskets and packing.

    PubMed

    Boelter, Fred; Simmons, Catherine; Hewett, Paul

    2011-04-01

    Fluid sealing devices (gaskets and packing) containing asbestos are manufactured and blended with binders such that the asbestos fibers are locked in a matrix that limits the potential for fiber release. Occasionally, fluid sealing devices fail and need to be replaced, or they are removed during preventive maintenance activities. This is the first study known to pool over a decade's worth of exposure assessments involving fluid sealing devices used in a variety of applications. Twenty-one assessments of work activities and air monitoring were performed under conditions with no mechanical ventilation and work scenarios described as "worst-case" conditions. Frequently, the work was conducted using aggressive techniques along with dry removal practices. Personal and area samples were collected and analyzed in accordance with the National Institute for Occupational Safety and Health Methods 7400 and 7402. A total of 782 samples were analyzed by phase contrast microscopy, and 499 samples were analyzed by transmission electron microscopy. The statistical analysis focused on three overall data sets: personal full-shift time-weighted average (TWA) exposures, personal 30-min exposures, and area full-shift TWA values. Each data set contains three estimates of exposure: (1) total fibers; (2) asbestos fibers only, substituting a value of 0.0035 f/cc for censored data; and (3) asbestos fibers only, substituting the limit of quantification value for censored data. Censored data in the various data sets ranged from 7% to just over 95%. Because all the data sets were censored, the geometric mean and geometric standard deviation were estimated using the maximum likelihood estimation method. Nonparametric, Kaplan-Meier, and lognormal statistics were applied and found to be consistent and mutually reinforcing. All three sets of statistics suggest that the mean and median exposures were less than 25% of 0.1 f/cc for 8-hr TWA samples or of 1.0 f/cc for 30-min samples, and that there is at least 95% confidence that the true 95th percentile exposures are less than 0.1 f/cc as an 8-hr TWA.
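
    The maximum likelihood step for censored exposure data is compact: detects contribute the lognormal density, nondetects contribute the distribution function at their reporting limit. A sketch for left-censored lognormal data (nondetects coded at their limit of quantification with detected == False; the log-scale parameterization is a standard numerical convenience, not this study's code):

        import numpy as np
        from scipy import optimize, stats

        def lognormal_mle_censored(x, detected):
            """MLE of (mu, sigma) on the log scale for exposure data with
            nondetects reported at their limit of quantification."""
            y = np.log(x)
            det = np.asarray(detected, dtype=bool)
            def negloglik(theta):
                mu, log_sigma = theta
                s = np.exp(log_sigma)          # keeps sigma positive
                return -(stats.norm.logpdf(y[det], mu, s).sum()
                         + stats.norm.logcdf(y[~det], mu, s).sum())
            res = optimize.minimize(negloglik, x0=[y.mean(), 0.0],
                                    method='Nelder-Mead')
            return res.x[0], np.exp(res.x[1])   # (mu, sigma) of log-exposure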

  9. Methodological comparison of marginal structural model, time-varying Cox regression, and propensity score methods: the example of antidepressant use and the risk of hip fracture.

    PubMed

    Ali, M Sanni; Groenwold, Rolf H H; Belitser, Svetlana V; Souverein, Patrick C; Martín, Elisa; Gatto, Nicolle M; Huerta, Consuelo; Gardarsdottir, Helga; Roes, Kit C B; Hoes, Arno W; de Boer, Antonius; Klungel, Olaf H

    2016-03-01

    Observational studies including time-varying treatments are prone to confounding. We compared time-varying Cox regression analysis, propensity score (PS) methods, and marginal structural models (MSMs) in a study of antidepressant [selective serotonin reuptake inhibitors (SSRIs)] use and the risk of hip fracture. A cohort of patients with a first prescription for antidepressants (SSRI or tricyclic antidepressants) was extracted from the Dutch Mondriaan and Spanish Base de datos para la Investigación Farmacoepidemiológica en Atención Primaria (BIFAP) general practice databases for the period 2001-2009. The net (total) effect of SSRI versus no SSRI on the risk of hip fracture was estimated using time-varying Cox regression, stratification and covariate adjustment using the PS, and MSM. In MSM, censoring was accounted for by inverse probability of censoring weights. The crude hazard ratio (HR) of SSRI use versus no SSRI use on hip fracture was 1.75 (95%CI: 1.12, 2.72) in Mondriaan and 2.09 (1.89, 2.32) in BIFAP. After confounding adjustment using time-varying Cox regression, stratification, and covariate adjustment using the PS, HRs increased in Mondriaan [2.59 (1.63, 4.12), 2.64 (1.63, 4.25), and 2.82 (1.63, 4.25), respectively] and decreased in BIFAP [1.56 (1.40, 1.73), 1.54 (1.39, 1.71), and 1.61 (1.45, 1.78), respectively]. MSMs with stabilized weights yielded HR 2.15 (1.30, 3.55) in Mondriaan and 1.63 (1.28, 2.07) in BIFAP when accounting for censoring and 2.13 (1.32, 3.45) in Mondriaan and 1.66 (1.30, 2.12) in BIFAP without accounting for censoring. In this empirical study, differences between the different methods to control for time-dependent confounding were small. The observed differences in treatment effect estimates between the databases are likely attributable to different confounding information in the datasets, illustrating that adequate information on (time-varying) confounding is crucial to prevent bias. Copyright © 2016 John Wiley & Sons, Ltd.
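
    In the MSM, the stabilized weight for each person-interval is a cumulative ratio of treatment probabilities, with the time-varying confounders entering the denominator model only; censoring weights are built the same way with "remains uncensored" as the outcome. A schematic sketch on long-format person-interval data (all column names are illustrative; pooled logistic models via scikit-learn, with A a 0/1 integer treatment column):

        import numpy as np
        import pandas as pd
        from sklearn.linear_model import LogisticRegression

        def stabilized_iptw(df, a='A', prev='A_prev', base=('V',), tv=('L',)):
            """Stabilized treatment weights from person-interval data:
            sw_t = prod_{k<=t} P(A_k | A_{k-1}, V) / P(A_k | A_{k-1}, V, L_k)."""
            df = df.sort_values(['id', 't']).reset_index(drop=True)
            Xn = df[[prev, *base]]              # numerator: history only
            Xd = df[[prev, *base, *tv]]         # denominator: + time-varying L
            num = LogisticRegression(max_iter=1000).fit(Xn, df[a])
            den = LogisticRegression(max_iter=1000).fit(Xd, df[a])
            rows = np.arange(len(df))
            p_num = num.predict_proba(Xn)[rows, df[a]]   # P of the action taken
            p_den = den.predict_proba(Xd)[rows, df[a]]
            df['sw'] = pd.Series(p_num / p_den).groupby(df['id']).cumprod()
            return df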

  10. Solutions to time variant problems of real-time expert systems

    NASA Technical Reports Server (NTRS)

    Yeh, Show-Way; Wu, Chuan-Lin; Hung, Chaw-Kwei

    1988-01-01

    Real-time expert systems for monitoring and control are driven by input data which changes with time. One of the subtle problems of this field is the propagation of time variant problems from rule to rule. This propagation problem is even more complicated in a multiprogramming environment, where the expert system may issue test commands to the system to get data and access time-consuming devices to retrieve data for concurrent reasoning. Two approaches are used to handle the flood of input data. Snapshots can be taken to freeze the system from time to time; the expert system then treats the system as a stationary one and traces changes by comparing consecutive snapshots. In the other approach, when an input is available, the rules associated with it are evaluated. For both approaches, if the premise condition of a fired rule changes to being false, the downstream rules should be deactivated. If the status change is due to the disappearance of a transient problem, actions taken by fired downstream rules which are no longer true may need to be undone, and a downstream rule that is still being evaluated should not be fired. Three mechanisms for solving this problem are discussed: tracing, backward checking, and censor setting. In the forward tracing mechanism, when the premise conditions of a fired rule become false, the premise conditions of downstream rules which have been fired or are being evaluated due to the firing of that rule are reevaluated; a tree with its root at the rule being deactivated is traversed. In the backward checking mechanism, when a rule is about to be fired, the expert system checks back on the premise conditions of the upstream rules that resulted in the evaluation of the rule to see whether it should be fired; the root of the tree being traversed is the rule being fired. In the censor setting mechanism, when a rule is to be evaluated, a censor is constructed based on the premise conditions of the upstream rules, and the censor is evaluated just before the rule is fired. Unlike the backward checking mechanism, this one does not search the upstream rules. This paper explores the details of the implementation of the three mechanisms.
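
    The forward tracing mechanism is essentially a tree walk over the fired-because-of links. A schematic sketch in Python (the Rule structure, premise callables, and undo hook are invented for illustration; a production system would maintain these links in its rule network):

        class Rule:
            def __init__(self, name, premises):
                self.name = name
                self.premises = premises      # callables over the fact base
                self.downstream = []          # rules fired because of this one
                self.fired = False

        def forward_trace_deactivate(rule, facts, undo):
            """Forward tracing: when a fired rule's premises turn false,
            walk the tree of rules fired downstream of it, deactivating
            (and undoing) every rule whose premises no longer hold."""
            stack = [rule]
            while stack:
                r = stack.pop()
                if r.fired and not all(p(facts) for p in r.premises):
                    r.fired = False
                    undo(r)                   # retract this rule's actions
                    stack.extend(r.downstream)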

  11. Shock waves raised by explosions in space as sources of ultra-high-energy cosmic rays

    NASA Astrophysics Data System (ADS)

    Kichigin, Gennadiy

    2015-03-01

    The paper discusses the possibility of particle acceleration up to ultrahigh energies in relativistic waves generated by various explosive processes in the interstellar medium. We propose the surfatron mechanism of acceleration (surfing) of charged particles trapped in the front of relativistic waves as a generator of high-energy cosmic rays (CRs). Conditions under which surfing in these waves can occur are studied thoroughly. Ultra-high-energy CRs (up to 10^20 eV) are shown to be attainable through surfing on relativistic plane and spherical waves. Surfing is expected to take place in nonlinear Langmuir waves excited by powerful electromagnetic radiation or relativistic beams of charged particles, as well as in strong shock waves generated by relativistic jets or rapidly expanding spherical formations (fireballs).

  12. Cosmic strings and the microwave sky. I - Anisotropy from moving strings

    NASA Technical Reports Server (NTRS)

    Stebbins, Albert

    1988-01-01

    A method is developed for calculating the component of the microwave anisotropy around cosmic string loops due to their rapidly changing gravitational fields. The method is valid only for impact parameters from the string much smaller than the horizon size at the time the photon passes the string. The method makes it possible to calculate the temperature pattern around arbitrary string configurations numerically in terms of one-dimensional integrals. This method is applied to the temperature jump across a string, confirming and extending previous work. It is also applied to cusps and kinks on strings, and to determining the temperature pattern far from a string loop. The temperature pattern around a few loop configurations is explicitly calculated. Comparison with the work of Brandenberger et al. (1986) indicates that they overestimated the MBR anisotropy from gravitational radiation emitted by loops.

  13. Historical Auroras in the 990s: Evidence of Great Magnetic Storms

    NASA Astrophysics Data System (ADS)

    Hayakawa, Hisashi; Tamazawa, Harufumi; Uchiyama, Yurina; Ebihara, Yusuke; Miyahara, Hiroko; Kosaka, Shunsuke; Iwahashi, Kiyomi; Isobe, Hiroaki

    2017-01-01

    A significant carbon-14 enhancement has recently been found in tree rings for the year 994, suggesting an extremely strong and brief cosmic ray flux event. The origin of this particular cosmic ray event has not been confirmed, but one possibility is that it might be of solar origin. Contemporary historical records of low-latitude auroras can be used as supporting evidence of intense solar activity around that time. We investigate previously reported as well as new records that have been found in contemporary observations from the 990s to determine potential auroras. Records of potential red auroras in late 992 and early 993 were found around the world, i.e. in the Korean Peninsula, Saxonian cities in modern Germany, and the Island of Ireland, suggesting the occurrence of an intense geomagnetic storm driven by solar activity.

  14. 74 MHz nonthermal emission from molecular clouds: evidence for a cosmic ray dominated region at the galactic center.

    PubMed

    Yusef-Zadeh, F; Wardle, M; Lis, D; Viti, S; Brogan, C; Chambers, E; Pound, M; Rickert, M

    2013-10-03

    We present 74 MHz radio continuum observations of the Galactic center region. These measurements show nonthermal radio emission arising from molecular clouds that is unaffected by free–free absorption along the line of sight. We focus on one cloud, G0.13-0.13, representative of the population of molecular clouds that are spatially correlated with steep-spectrum (α(74–327 MHz) = 1.3 ± 0.3) nonthermal emission from the Galactic center region. This cloud lies adjacent to the nonthermal radio filaments of the Arc near l ≈ 0.2° and is a strong source of 74 MHz continuum, SiO (2-1), and Fe I Kα 6.4 keV line emission. This three-way correlation provides the most compelling evidence yet that relativistic electrons, here traced by 74 MHz emission, are physically associated with the G0.13-0.13 molecular cloud and that low-energy cosmic ray electrons are responsible for the Fe I Kα line emission. The high cosmic ray ionization rate, ~10^-13 s^-1 H^-1, is responsible for heating the molecular gas to high temperatures and allows the disturbed gas to maintain a high velocity dispersion. Large velocity gradient (LVG) modeling of multitransition SiO observations of this cloud implies H2 densities of 10^(4-5) cm^-3 and high temperatures. The lower limit to the temperature of G0.13-0.13 is 100 K, whereas the upper limit is as high as 1000 K. Lastly, we used a time-dependent chemical model, in which cosmic rays drive the chemistry of the gas, to search for molecular line diagnostics of cosmic ray heating. When the cloud reaches chemical equilibrium, the abundance ratios of HCN/HNC and N2H+/HCO+ are consistent with measured values. In addition, a significant abundance of SiO is predicted in the cosmic ray dominated region of the Galactic center. We discuss different possibilities to account for the origin of the widespread SiO emission detected from Galactic center molecular clouds.

  15. The Paradoxical Role of the Research Administrator.

    ERIC Educational Resources Information Center

    White, Virginia P.

    1991-01-01

    This reprinted 1970 article examines the role of the university research administrator and finds that the role involves paradoxes between controller and entrepreneur, master and slave, censor and publicist, and traditionalist and innovator. (DB)

  16. Proportional hazards model with varying coefficients for length-biased data.

    PubMed

    Zhang, Feipeng; Chen, Xuerong; Zhou, Yong

    2014-01-01

    Length-biased data arise in many important applications including epidemiological cohort studies, cancer prevention trials, and studies of labor economics. Such data are also often subject to right censoring due to loss to follow-up or the end of the study. In this paper, we consider a proportional hazards model with varying coefficients for right-censored and length-biased data, which is used to study nonlinear interaction effects between covariates and an exposure variable. A local estimating equation method is proposed for the unknown coefficients and the intercept function in the model. The asymptotic properties of the proposed estimators are established using martingale theory and kernel smoothing techniques. Our simulation studies demonstrate that the proposed estimators have excellent finite-sample performance. The Channing House data are analyzed to demonstrate an application of the proposed method.
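
    A rough sense of a varying-coefficient fit can be conveyed by local kernel weighting: the coefficient β(w0) at an exposure value w0 is estimated from a Cox partial likelihood in which subjects receive kernel weights. The sketch below is an approximation under simplifying assumptions: it substitutes a weighted lifelines Cox fit for the paper's local estimating equations and omits the length-bias correction; the columns 'time', 'event', 'x', 'w' and the bandwidth are hypothetical.

    ```python
    # Sketch: local-constant estimate of beta(w0) via kernel-weighted Cox fits.
    import numpy as np
    from lifelines import CoxPHFitter

    def local_beta(df, w0, h):
        u = (df["w"].to_numpy() - w0) / h
        k = np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)  # Epanechnikov kernel
        sub = df.assign(kw=k)
        sub = sub[sub["kw"] > 0]
        cph = CoxPHFitter()
        cph.fit(sub[["time", "event", "x", "kw"]], duration_col="time",
                event_col="event", weights_col="kw", robust=True)
        return cph.params_["x"]

    grid = np.linspace(df["w"].quantile(0.1), df["w"].quantile(0.9), 25)
    beta_curve = [local_beta(df, w0, h=0.5) for w0 in grid]   # h chosen ad hoc
    ```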

  17. Pointwise nonparametric maximum likelihood estimator of stochastically ordered survivor functions

    PubMed Central

    Park, Yongseok; Taylor, Jeremy M. G.; Kalbfleisch, John D.

    2012-01-01

    In this paper, we consider estimation of survivor functions from groups of observations with right-censored data when the groups are subject to a stochastic ordering constraint. Many methods and algorithms have been proposed to estimate distribution functions under such restrictions, but none have completely satisfactory properties when the observations are censored. We propose a pointwise constrained nonparametric maximum likelihood estimator, which is defined at each time t by the estimates of the survivor functions subject to constraints applied at time t only. We also propose an efficient method to obtain the estimator. The estimator of each constrained survivor function is shown to be nonincreasing in t, and its consistency and asymptotic distribution are established. A simulation study suggests better small- and large-sample properties than those of alternative estimators. An example using prostate cancer data illustrates the method. PMID:23843661
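
    For intuition only, the following crude sketch imposes a pointwise ordering on two Kaplan-Meier curves by falling back to the pooled-sample estimate wherever the estimated curves violate the constraint. This is not the constrained NPMLE proposed in the paper, merely an illustration of applying the constraint at each time t separately; the data arrays are hypothetical.

    ```python
    # Crude pointwise-ordering illustration on Kaplan-Meier estimates.
    import numpy as np
    from lifelines import KaplanMeierFitter

    km1, km2, kmp = KaplanMeierFitter(), KaplanMeierFitter(), KaplanMeierFitter()
    km1.fit(t1, event_observed=e1)    # group assumed stochastically larger
    km2.fit(t2, event_observed=e2)
    kmp.fit(np.concatenate([t1, t2]), np.concatenate([e1, e2]))  # pooled sample

    times = np.union1d(km1.timeline, km2.timeline)
    s1 = km1.survival_function_at_times(times).to_numpy()
    s2 = km2.survival_function_at_times(times).to_numpy()
    sp = kmp.survival_function_at_times(times).to_numpy()

    violated = s1 < s2                # constraint: S1(t) >= S2(t) at each t
    s1_hat = np.where(violated, sp, s1)
    s2_hat = np.where(violated, sp, s2)
    ```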

  18. Atrazine concentrations in near-surface aquifers: A censored regression approach

    USGS Publications Warehouse

    Liu, S.; Yen, S.T.; Kolpin, D.W.

    1996-01-01

    In 1991, the U.S. Geological Survey (USGS) conducted a study to investigate the occurrence of atrazine (2-chloro-4-ethylamino-6-isopropylamino-s-triazine) and other agricultural chemicals in near-surface aquifers in the midcontinental USA. Because about 83% of the atrazine concentrations from the USGS study were censored, standard statistical estimation procedures could not be used. To determine factors that affect atrazine concentrations in groundwater while accommodating the high degree of data censoring, four Tobit models were used (normal homoscedastic, normal heteroscedastic, lognormal homoscedastic, and lognormal heteroscedastic). Empirical results suggest that the lognormal heteroscedastic Tobit model is the model of choice for this type of study. This model determined the following factors to have the strongest effect on atrazine concentrations in groundwater: percent of pasture within 3.2 km, percent of forest within 3.2 km (2 mi), mean open interval of the well, primary water use of the well, aquifer class (unconsolidated or bedrock), aquifer type (unconfined or confined), existence of a stream within 30 m (100 ft), existence of a stream within 30 m to 0.4 km (0.25 mi), and existence of a stream within 0.4 to 3.2 km. Examining the elasticities of the continuous explanatory factors provides further insight into their effects on atrazine concentrations in groundwater. This study documents a viable statistical method that can accommodate the complicating presence of censored data, a feature that commonly occurs in environmental datasets.
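
    The Tobit likelihood itself is compact. Below is a minimal sketch of the lognormal homoscedastic variant: uncensored records contribute the normal density of the log concentration, and censored records contribute the normal CDF at the log detection limit. The heteroscedastic variants would additionally model the scale as a function of covariates. The arrays (covariates `X`, concentrations `y`, detection limits `dl`, boolean `censored`) are hypothetical.

    ```python
    # Sketch: lognormal homoscedastic Tobit fit for left-censored data.
    import numpy as np
    from scipy import optimize, stats

    def negloglik(theta, X, y, dl, censored):
        beta, log_sigma = theta[:-1], theta[-1]
        sigma = np.exp(log_sigma)                 # keeps the scale positive
        mu = X @ beta
        ll_obs = stats.norm.logpdf(np.log(y[~censored]), mu[~censored], sigma)
        ll_cen = stats.norm.logcdf(np.log(dl[censored]), mu[censored], sigma)
        return -(ll_obs.sum() + ll_cen.sum())

    X1 = np.column_stack([np.ones(len(y)), X])    # prepend an intercept
    theta0 = np.zeros(X1.shape[1] + 1)            # crude starting values
    fit = optimize.minimize(negloglik, theta0, args=(X1, y, dl, censored),
                            method="BFGS")
    beta_hat, sigma_hat = fit.x[:-1], np.exp(fit.x[-1])
    ```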

  19. Course of serological tests in treated subjects with chronic Trypanosoma cruzi infection: a systematic review and meta-analysis of individual participant data.

    PubMed

    Sguassero, Yanina; Roberts, Karen N; Harvey, Guillermina B; Comandé, Daniel; Ciapponi, Agustín; Cuesta, Cristina B; Danesi, Emmaría; Aguiar, Camila; Andrade, Ana L; Castro, Ana Mde; Lana, Marta de; Escribà, Josep M; Fabbro, Diana L; Fernandes, Cloé D; Meira, Wendell Sf; Flores-Chávez, María; Hasslocher-Moreno, Alejandro M; Jackson, Yves; Lacunza, Carlos D; Machado-de-Assis, Girley F; Maldonado, Marisel; Monje-Rumi, María M; Molina, Israel; Martín, Catalina Muñoz-San; Murcia, Laura; Castro, Cleudson Nery de; Silveira, Celeste An; Negrette, Olga Sánchez; Segovia, Manuel; Solari, Aldo; Steindel, Mário; Streiger, Mirtha L; Bilbao, Ninfa Vera de; Zulantay, Inés; Sosa-Estani, Sergio

    2018-06-04

    To determine the course of serological tests in subjects with chronic T. cruzi infection treated with antitrypanosomal drugs, we conducted a systematic review and meta-analysis using individual participant data. Survival analysis and a Cox proportional hazards regression model with a random effect to adjust for covariates were applied. The protocol was registered at www.crd.york.ac.uk/PROSPERO (CRD42012002162). We included 27 studies (1296 subjects) conducted in eight countries. The risk of bias was low for all domains in 17 studies (63.0%). We assessed 913 subjects (149 seroreversion events, 83.7% censored data) for ELISA, 670 subjects (134 events, 80.0% censored) for IIF, and 548 subjects (99 events, 82.0% censored) for IHA. A higher probability of seroreversion, reached over a shorter time span, was observed in subjects aged 1-19 years compared with adults. The chance of seroreversion also varied according to the country where the infection might have been acquired. For instance, the pooled adjusted hazard ratio between children/adolescents and adults for the IIF test was 1.54 (95% CI 0.64-3.71) in some countries of South America and 9.37 (3.44-25.50) in Brazil. The disappearance of anti-T. cruzi antibodies was demonstrated over the course of follow-up. An interaction between age at treatment and country setting was shown. Copyright © 2018. Published by Elsevier Ltd.

  20. Comparing two correlated C indices with right-censored survival outcome: a one-shot nonparametric approach.

    PubMed

    Kang, Le; Chen, Weijie; Petrick, Nicholas A; Gallas, Brandon D

    2015-02-20

    The area under the receiver operating characteristic (ROC) curve is often used as a summary index of diagnostic ability in evaluating biomarkers when the clinical outcome (truth) is binary. When the clinical outcome is right-censored survival time, the C index, motivated as an extension of the area under the ROC curve, has been proposed by Harrell as a measure of concordance between a predictive biomarker and the right-censored survival outcome. In this work, we investigate methods for statistical comparison of two diagnostic or predictive systems, which could be either two biomarkers or two fixed algorithms, in terms of their C indices. We adopt a U-statistics-based C estimator that is asymptotically normal and develop a nonparametric analytical approach to estimate the variance of the C estimator and the covariance of two C estimators. A z-score test is then constructed to compare the two C indices. We validate our one-shot nonparametric method via simulation studies in terms of the type I error rate and power. We also compare our one-shot method with resampling methods including the jackknife and the bootstrap. Simulation results show that the proposed one-shot method provides almost unbiased variance estimates and has satisfactory type I error control and power. Finally, we illustrate the use of the proposed method with an example from the Framingham Heart Study. Copyright © 2014 John Wiley & Sons, Ltd.
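
    To make the comparison concrete, the sketch below computes Harrell's C for two markers as a U-statistic over usable pairs and forms a z-score for their difference. For brevity it substitutes a jackknife covariance for the paper's analytical (one-shot) variance estimator; the arrays `time`, `event`, `m1`, `m2` are hypothetical.

    ```python
    # Sketch: compare two C indices with a z-test (jackknife covariance).
    import numpy as np
    from scipy import stats

    def harrell_c(time, event, marker):
        conc = ties = usable = 0
        n = len(time)
        for i in range(n):                        # O(n^2); fine for a sketch
            for j in range(n):
                if event[i] and time[i] < time[j]:   # usable ordered pair
                    usable += 1
                    if marker[i] > marker[j]:
                        conc += 1
                    elif marker[i] == marker[j]:
                        ties += 1
        return (conc + 0.5 * ties) / usable

    c1, c2 = harrell_c(time, event, m1), harrell_c(time, event, m2)

    n = len(time)
    jack = np.array([[harrell_c(np.delete(time, i), np.delete(event, i),
                                np.delete(m, i)) for m in (m1, m2)]
                     for i in range(n)])
    d = jack - jack.mean(axis=0)
    cov = (n - 1) / n * d.T @ d                   # jackknife (co)variances
    se = np.sqrt(cov[0, 0] + cov[1, 1] - 2 * cov[0, 1])
    z = (c1 - c2) / se
    p_value = 2 * stats.norm.sf(abs(z))           # two-sided test
    ```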

  1. The Effective Dynamic Ranges for Glaucomatous Visual Field Progression With Standard Automated Perimetry and Stimulus Sizes III and V.

    PubMed

    Wall, Michael; Zamba, Gideon K D; Artes, Paul H

    2018-01-01

    It has been shown that threshold estimates below approximately 20 dB have little effect on the ability to detect visual field progression in glaucoma. We aimed to compare stimulus size V to stimulus size III, in areas of visual damage, to confirm these findings by using (1) a different dataset, (2) different techniques of progression analysis, and (3) an analysis to evaluate the effect of censoring on mean deviation (MD). In the Iowa Variability in Perimetry Study, 120 glaucoma subjects were tested every 6 months for 4 years with size III SITA Standard and size V Full Threshold. Progression was determined with three complementary techniques: pointwise linear regression (PLR), permutation of PLR, and linear regression of the MD index. All analyses were repeated on "censored" datasets in which threshold estimates below a given criterion value were set to equal the criterion value. Our analyses confirmed previous observations that threshold estimates below 20 dB contribute much less to visual field progression than estimates above this range. These findings were broadly similar with stimulus sizes III and V. Censoring of threshold values < 20 dB has relatively little impact on the rates of visual field progression in patients with mild to moderate glaucoma. Size V, which has lower retest variability, performs at least as well as size III for longitudinal glaucoma progression analysis and appears to have a larger useful dynamic range owing to the upper sensitivity limit being higher.
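
    The censoring analysis is easy to reproduce in outline: floor every threshold estimate below the criterion at the criterion, recompute a mean-deviation-like index per visit, and regress it on follow-up time. The sketch below is schematic (true MD uses weighted deviations from age-corrected normals); `fields` is a hypothetical (visits × locations) array of threshold estimates in dB, `normals` a reference field, and `years` the visit times.

    ```python
    # Sketch: effect of censoring thresholds below 20 dB on progression slopes.
    import numpy as np
    from scipy import stats

    def md_like(fields, normals, criterion=None):
        if criterion is not None:
            fields = np.maximum(fields, criterion)   # floor values at the criterion
            normals = np.maximum(normals, criterion)
        return (fields - normals).mean(axis=1)       # unweighted mean deviation

    slope_raw = stats.linregress(years, md_like(fields, normals)).slope
    slope_cen = stats.linregress(years, md_like(fields, normals, 20.0)).slope
    # Similar slopes imply estimates below 20 dB add little progression signal.
    ```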

  2. Modeling time-to-event (survival) data using classification tree analysis.

    PubMed

    Linden, Ariel; Yarnold, Paul R

    2017-12-01

    Time to the occurrence of an event is often studied in health research. Survival analysis differs from other designs in that follow-up times for individuals who do not experience the event by the end of the study (called censored) are accounted for in the analysis. Cox regression is the standard method for analysing censored data, but the assumptions required of these models are easily violated. In this paper, we introduce classification tree analysis (CTA) as a flexible alternative for modelling censored data. Classification tree analysis is a "decision-tree"-like classification model that provides parsimonious, transparent (i.e., easy to visually display and interpret) decision rules that maximize predictive accuracy, derives exact P values via permutation tests, and evaluates model cross-generalizability. Using empirical data, we identify all statistically valid, reproducible, longitudinally consistent, and cross-generalizable CTA survival models and then compare their predictive accuracy to estimates derived via Cox regression and an unadjusted naïve model. Model performance is assessed using integrated Brier scores and a comparison between estimated survival curves. The Cox regression model best predicts average incidence of the outcome over time, whereas CTA survival models best predict either relatively high, or low, incidence of the outcome over time. Classification tree analysis survival models offer many advantages over Cox regression, such as explicit maximization of predictive accuracy, parsimony, statistical robustness, and transparency. Therefore, researchers interested in accurate prognoses and clear decision rules should consider developing models using the CTA-survival framework. © 2017 John Wiley & Sons, Ltd.

  3. Maps & minds : mapping through the ages

    USGS Publications Warehouse

    ,

    1984-01-01

    Throughout time, maps have expressed our understanding of our world. Human affairs have been influenced strongly by the quality of maps available to us at the major turning points in our history. "Maps & Minds" traces the ebb and flow of a few central ideas in the mainstream of mapping. Our expanding knowledge of our cosmic neighborhood stems largely from a small number of simple but grand ideas, vigorously pursued.

  4. Polycyclic aromatic hydrocarbon ions and the diffuse interstellar bands

    NASA Technical Reports Server (NTRS)

    Salama, F.; Allamandola, L. J.

    1995-01-01

    Neutral naphthalene (C10H8), phenanthrene (C14H10), and pyrene (C16H10) absorb strongly in the ultraviolet and may contribute to the extinction curve. High abundances are required to produce detectable structures. The cations of these Polycyclic Aromatic Hydrocarbons (PAHs) absorb in the visible. C10H8(+) has 12 discrete absorption bands which fall between 6800 and 5000 A. The strongest band at 6741 A falls close to the weak 6742 A diffuse interstellar band (DIB). Five other weaker bands also match DIBs. The possibility that C10H8(+) is responsible for some of the DIBs can be tested by searching for new DIBs at 6520, 6151, and 5965 A, other moderately strong naphthalene cation band positions. If C10H8(+) is indeed responsible for the 6742 A feature, it accounts for 0.3% of the cosmic carbon. The spectrum of C16H10(+) is dominated by a strong band at 4435 A in an Ar matrix and 4395 A in a Ne matrix, a position which falls very close to the strongest DIB, that at 4430 A. If C16H10(+), or a closely related pyrene-like ion is indeed responsible for the 4430 A feature, it accounts for 0.2% of the cosmic carbon. We also report an intense, very broad UV-to-visible continuum which is associated with both ions and could explain how PAHs convert interstellar UV and visible radiation into IR.

  5. Neutral and ionized polycyclic aromatic hydrocarbons, diffuse interstellar bands and the ultraviolet extinction curve

    NASA Technical Reports Server (NTRS)

    Salama, Farid; Allamandola, Louis John

    1993-01-01

    Neutral naphthalene C10H8, phenanthrene C14H10 and pyrene C16H10 absorb strongly in the ultraviolet region and may contribute to the extinction curve. High abundances are required to produce detectable structures. The cations of these polycyclic aromatic hydrocarbons (PAHs) absorb in the visible. C10H8(+) has 13 discrete absorption bands which fall between 6800 and 4500 A. The strongest band at 6741 A falls close to the weak 6742 A diffuse interstellar band (DIB). Five other weaker bands also match DIBs. The possibility that C10H8(+) is responsible for some of the DIBs can be tested by searching for new DIBs at 6520 and 6151 A, other strong naphthalene cation band positions. If C10H8(+) is indeed responsible for the 6742 A feature, it accounts for 0.3% of the cosmic carbon. The spectrum of C16H10(+) is dominated by a strong band at 4435 A in an Ar matrix and 4395 A in a Ne matrix, wavelengths which fall very close to the strongest DIB at 4430 A. If C16H10(+), or a closely related pyrene-like ion, is indeed responsible for the 4430 A feature, it accounts for 0.2% of the cosmic carbon. An intense, very broad UV-to-visible continuum is reported which is associated with both ions and could explain how PAHs convert interstellar UV and visible radiation into IR radiation.

  6. Mildly obscured active galaxies and the cosmic X-ray background

    NASA Astrophysics Data System (ADS)

    Esposito, V.; Walter, R.

    2016-05-01

    Context. The diffuse cosmic X-ray background (CXB) is the sum of the emission of discrete sources, mostly massive black holes accreting matter in active galactic nuclei (AGN). The CXB spectrum differs from the integration of the spectra of individual sources, calling for a large population, undetected so far, of strongly obscured Compton-thick AGN. Such objects are predicted by unified models, which attribute most of the AGN diversity to their inclination on the line of sight, and they play an important role in the understanding of the growth of black holes in the early Universe. Aims: The percentage of strongly obscured Compton-thick AGN at low redshift can be derived from the observed CXB spectrum, if we assume AGN spectral templates and luminosity functions. Methods: We show that high signal-to-noise stacked hard X-ray spectra, derived from more than a billion seconds of effective exposure time with the Swift/BAT instrument, imply that mildly obscured Compton-thin AGN feature a strong reflection and contribute massively to the CXB. Results: A population of Compton-thick AGN larger than that which is effectively detected is not required to reproduce the CXB spectrum, since no more than 6% of the CXB flux can be attributed to them. The stronger reflection observed in mildly obscured AGN suggests that the covering factor of the gas and dust surrounding their central engines is a key factor in shaping their appearance. These mildly obscured AGN are easier to study at high redshift than Compton-thick sources are.

  7. Industry-University Collaborations in Canada, Japan, the UK and USA – With Emphasis on Publication Freedom and Managing the Intellectual Property Lock-Up Problem

    PubMed Central

    Kneller, Robert; Mongeon, Marcel; Cope, Jeff; Garner, Cathy; Ternouth, Philip

    2014-01-01

    As industry-university collaborations are promoted to commercialize university research and foster economic growth, it is important to understand how companies benefit from these collaborations, and to ensure that resulting academic discoveries are developed for the benefit of all stakeholders: companies, universities and public. Lock up of inventions, and censoring of academic publications, should be avoided if feasible. This case-study analysis of interviews with 90 companies in Canada, Japan, the UK and USA assesses the scope of this challenge and suggests possible resolutions. The participating companies were asked to describe an important interaction with universities, and most described collaborative research. The most frequently cited tensions concerned intellectual property management and publication freedom. IP disagreements were most frequent in the context of narrowly-focused collaborations with American universities. However, in the case of exploratory research, companies accepted the IP management practices of US universities. It might make sense to let companies have an automatic exclusive license to IP from narrowly defined collaborations, but to encourage universities to manage inventions from exploratory collaborations to ensure development incentives. Although Canada, the UK and US have strong publication freedom guarantees, tensions over this issue arose frequently in focused collaborations, though were rare in exploratory collaborations. The UK Lambert Agreements give sponsors the option to control publications in return for paying the full economic cost of a project. This may offer a model for the other three countries. Uniquely among the four countries, Japan enables companies to control exclusively most collaborative inventions and to censor academic publications. Despite this high degree of control, the interviews suggest many companies do not develop university discoveries to their full potential. The steps suggested above may rebalance the situation in Japan. Overall, the interviews reveal the complexity of these issues and the need for flexibility on the part of universities and companies. PMID:24632805

  8. Associations of renal function at 1-year after kidney transplantation with subsequent return to dialysis, mortality, and healthcare costs.

    PubMed

    Schnitzler, Mark A; Johnston, Karissa; Axelrod, David; Gheorghian, Adrian; Lentine, Krista L

    2011-06-27

    Improved early kidney transplant outcomes limit the contemporary utility of standard clinical endpoints. Quantifying the relationship of renal function at 1 year after transplant with subsequent clinical outcomes and healthcare costs may facilitate cost-benefit evaluations among transplant recipients. Data for Medicare-insured kidney-only transplant recipients (1995-2003) were drawn from the United States Renal Data System. Associations of estimated glomerular filtration rate (eGFR) level at the first transplant anniversary with subsequent death-censored graft failure and patient death in posttransplant years 1 to 3 and 4 to 7 were examined by parametric survival analysis. Associations of eGFR with total health care costs defined by Medicare payments were assessed with multivariate linear regression. Among 38,015 participants, first anniversary eGFR level demonstrated graded associations with subsequent outcomes. Compared with patients with 12-month eGFR more than or equal to 60 mL/min/1.73 m², the adjusted relative risk of death-censored graft failure in years 1 to 3 was 31% greater for eGFR 45 to 59 mL/min/1.73 m² (P<0.0001) and 622% greater for eGFR 15 to 30 mL/min/1.73 m² (P<0.0001). Associations of first anniversary eGFR level with graft failure and mortality remained significant in years 4 to 7. The proportions of recipients expected to return to dialysis or die attributable to eGFR less than 60 mL/min/1.73 m² over 10 years were 23.1% and 9.4%, respectively, and were significantly higher than the proportions attributable to delayed graft function or acute rejection. Reduced eGFR was associated with graded and significant increases in health care spending during years 2 and 3 after transplant (P<0.0001). eGFR is strongly associated with clinical and economic outcomes after kidney transplantation.

  9. Cancer survival analysis using semi-supervised learning method based on Cox and AFT models with L1/2 regularization.

    PubMed

    Liang, Yong; Chai, Hua; Liu, Xiao-Ying; Xu, Zong-Ben; Zhang, Hai; Leung, Kwong-Sak

    2016-03-01

    One of the most important objectives of clinical cancer research is to diagnose cancer more accurately based on the patients' gene expression profiles. Both the Cox proportional hazards model (Cox) and the accelerated failure time (AFT) model have been widely adopted for high-risk and low-risk classification and for survival time prediction in patients' clinical treatment. Nevertheless, two main dilemmas limit the accuracy of these prediction methods. One is that small sample sizes and censored data remain a bottleneck for training robust and accurate Cox classification models. In addition, tumours with similar phenotypes and prognoses may actually be completely different diseases at the genotype and molecular level. Thus, the utility of the AFT model for survival time prediction is limited when such biological differences between the diseases have not been previously identified. To overcome these two main dilemmas, we propose a novel semi-supervised learning method based on the Cox and AFT models to accurately predict the treatment risk and survival time of patients. Moreover, we adopt an efficient L1/2 regularization approach in the semi-supervised learning method to select the relevant genes, which are significantly associated with the disease. The results of the simulation experiments show that the semi-supervised learning model can significantly improve the predictive performance of the Cox and AFT models in survival analysis. The proposed procedures have been successfully applied to four real microarray gene expression and artificial evaluation datasets. The advantages of our proposed semi-supervised learning method include: 1) significantly increasing the available training samples from censored data; 2) high capability for identifying the survival risk classes of patients in the Cox model; 3) high predictive accuracy for patients' survival times in the AFT model; 4) strong capability for relevant biomarker selection. Consequently, our proposed semi-supervised learning model is a more appropriate tool for survival analysis in clinical cancer research.

  10. Habitability in the Local Universe

    NASA Astrophysics Data System (ADS)

    Mason, Paul A.

    2017-01-01

    Long-term habitability on the surface of planets has as a prerequisite a minimum availability of elements to build rocky planets and their atmospheres and to supply life-sustaining water. Planets must lie within the habitable zone and avoid circumstances that cause them to lose their atmospheres and water. However, many astrophysical sources are hazardous to life on the surfaces of planets. Planets in harsh environments may require strong magnetic fields to protect their biospheres from high-energy particles from the host star(s). They may additionally require a strong astrosphere able to deflect galactic cosmic rays. Supernovae (SNe) play a central role in the habitability of planets in the disks of star-forming galaxies. Currently, the SN rate maintains a relativistic galactic wind shielding planets in the disk from extragalactic cosmic rays. However, if the density of SNe in the disk of the galaxy were significantly higher, as it was 6-8 Gyr ago, the frequency of nearby catastrophic events and the often prolonged harsh environment may have strongly constrained life in the early history of the Milky Way. Active galactic nuclei (AGN) may remain quiescent for hundreds of millions of years, only to activate for some time due to an extraordinary accretion episode caused by, for instance, a galactic merger. The starburst galaxy M82 is currently undergoing a merger, probably strongly compromising habitability within that galaxy. The giant elliptical M87 resides in the center of the Virgo supercluster and has probably consumed many such spiral galaxies. We show that super-Eddington accretion onto the supermassive black hole in M87, even for a short while, could compromise habitability for a large portion of the central supercluster. We discuss environments where these effects may be mitigated.

  11. Cosmic shear as a probe of galaxy formation physics

    DOE PAGES

    Foreman, Simon; Becker, Matthew R.; Wechsler, Risa H.

    2016-09-01

    Here, we evaluate the potential for current and future cosmic shear measurements from large galaxy surveys to constrain the impact of baryonic physics on the matter power spectrum. We do so using a model-independent parametrization that describes deviations of the matter power spectrum from the dark-matter-only case as a set of principal components that are localized in wavenumber and redshift. We perform forecasts for a variety of current and future data sets, and find that at least ~90 per cent of the constraining power of these data sets is contained in no more than nine principal components. The constraining power of different surveys can be quantified using a figure of merit defined relative to currently available surveys. With this metric, we find that the final Dark Energy Survey data set (DES Y5) and the Hyper Suprime-Cam Survey will be roughly an order of magnitude more powerful than existing data in constraining baryonic effects. Upcoming Stage IV surveys (Large Synoptic Survey Telescope, Euclid, and Wide Field Infrared Survey Telescope) will improve upon this by a further factor of a few. We show that this conclusion is robust to marginalization over several key systematics. The ultimate power of cosmic shear to constrain galaxy formation is dependent on understanding systematics in the shear measurements at small (sub-arcminute) scales. Lastly, if these systematics can be sufficiently controlled, cosmic shear measurements from DES Y5 and other future surveys have the potential to provide a very clean probe of galaxy formation and to strongly constrain a wide range of predictions from modern hydrodynamical simulations.
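
    The principal-component machinery can be illustrated generically. In the paper the components are tied to the surveys' forecast constraining power; the sketch below merely shows a PCA of fractional power spectrum deviations R(k, z) = P_baryon/P_dmo - 1 collected on a (k, z) grid, with `ratios` a hypothetical array of shape (n_sims, n_k * n_z).

    ```python
    # Sketch: principal components of matter power spectrum deviations.
    import numpy as np

    dev = ratios - ratios.mean(axis=0)     # center across simulations
    u, s, vt = np.linalg.svd(dev, full_matrices=False)
    explained = s**2 / np.sum(s**2)        # variance fraction per component
    n_pc = np.searchsorted(np.cumsum(explained), 0.90) + 1
    pcs = vt[:n_pc]                        # leading components on the (k, z) grid
    ```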

  12. Understanding uncertainties in modeling the galactic diffuse gamma-ray emission

    NASA Astrophysics Data System (ADS)

    Storm, Emma; Calore, Francesca; Weniger, Christoph

    2017-01-01

    The nature of the Galactic diffuse gamma-ray emission as measured by the Fermi Gamma-ray Space Telescope has remained an active area of research for the last several years. A standard technique to disentangle the origins of the diffuse emission is the template fitting approach, where predictions for various diffuse components, such as emission from cosmic rays derived from Galprop or Dragon, are compared to the data. However, this method always results in an overall bad fit to the data, with strong residuals that are difficult to interpret. Additionally, there are intrinsic uncertainties in the predicted templates that are not accounted for naturally with this method. We therefore introduce a new template fitting approach to study the various components of the Galactic diffuse gamma-ray emission, and their correlations and uncertainties. We call this approach Sky Factorization with Adaptive Constrained Templates (SkyFACT). Rather than using fixed predictions from cosmic-ray propagation codes and examining the residuals to evaluate the quality of fits and the presence of excesses, we introduce additional fine-grained variations in the templates that account for uncertainties in the predictions, such as uncertainties in the gas tracers and from small scale variations in the density of cosmic rays. We show that fits to the gamma-ray diffuse emission can be dramatically improved by including an appropriate level of uncertainty in the initial spatial templates from cosmic-ray propagation codes. We further show that we can recover the morphology of the Fermi Bubbles from its spectrum alone with SkyFACT.

  13. New Cosmic Scales as a Cornerstone for the Evolutionary Processes, Energetic Resources and Activity Phenomena of the Non-Stable Universe

    NASA Astrophysics Data System (ADS)

    Avetissian, A. K.

    2017-07-01

    New cosmic scales, completely different from the Planck scales, have been disclosed in the framework of the so-called “Non-Inflationary Cosmology” (NIC), created by the author during the last decade. The proposed new ideas shed light on some hidden inaccuracies within the essence of the Planck scales in modern cosmology, so the new scales have been named “NAIRI (New Alternative Ideas Regenerating Irregularities) Cosmic Scales” (NCS). The NCS are believed to be realistic due to qualitative and quantitative correspondences with observational and experimental data. The basic concept of the NCS rests on two hypotheses: the cosmological time-evolution of Planck's constant and multi-photon processes. Together with the hypothesis about the domination of Bose statistics in the early Universe and the possibility of a large-scale Bose condensate, these predictions have been converted into phenomena on which the foundations of an alternative theory of cosmology have been built. The “Cosmic Small (Local) Bang” (CSB) phenomenon predicted by the author has been investigated in a model of a galaxy, and as a consequence of the CSB the possibility of a Super-Strong Shock Wave (SSW) has been postulated. Thus, based on the CSB and SSW phenomena, NIC provides a non-accretion mechanism for the generation of galaxies and the super-massive black holes in their cores, as well as for the creation of supernovas and massive stars (including super-massive stars exceeding 100 M⊙). The possibility of gravitational radiation by the central black hole of the galaxy, or even by the disk (or the whole galaxy!), has been investigated.

  14. Cosmic Rays: "A Thin Rain of Charged Particles."

    ERIC Educational Resources Information Center

    Friedlander, Michael

    1990-01-01

    Discussed are balloons and electroscopes, understanding cosmic rays, cosmic ray paths, isotopes and cosmic-ray travel, sources of cosmic rays, and accelerating cosmic rays. Some of the history of the discovery and study of cosmic rays is presented. (CW)

  15. de Sitter space as a tensor network: Cosmic no-hair, complementarity, and complexity

    NASA Astrophysics Data System (ADS)

    Bao, Ning; Cao, ChunJun; Carroll, Sean M.; Chatwin-Davies, Aidan

    2017-12-01

    We investigate the proposed connection between de Sitter spacetime and the multiscale entanglement renormalization ansatz (MERA) tensor network, and ask what can be learned via such a construction. We show that the quantum state obeys a cosmic no-hair theorem: the reduced density operator describing a causal patch of the MERA asymptotes to a fixed point of a quantum channel, just as spacetimes with a positive cosmological constant asymptote to de Sitter space. The MERA is potentially compatible with a weak form of complementarity (local physics only describes single patches at a time, but the overall Hilbert space is infinite dimensional) or, with certain specific modifications to the tensor structure, a strong form (the entire theory describes only a single patch plus its horizon, in a finite-dimensional Hilbert space). We also suggest that de Sitter evolution has an interpretation in terms of circuit complexity, as has been conjectured for anti-de Sitter space.

  16. Wide-field LOFAR-LBA power-spectra analyses: Impact of calibration, polarization leakage and ionosphere

    NASA Astrophysics Data System (ADS)

    Gehlot, Bharat K.; Koopmans, Léon V. E.

    2018-05-01

    Contamination due to foregrounds, calibration errors, and ionospheric effects poses major challenges for the detection of the cosmic 21 cm signal in various Epoch of Reionization (EoR) experiments. We present the results of a study of a field centered on 3C196 using LOFAR Low Band observations, in which we quantify various wide-field and calibration effects such as gain errors, polarized foregrounds, and ionospheric effects. We observe a `pitchfork' structure in the power spectrum of the polarized intensity in delay-baseline space, which leaks into the modes beyond the instrumental horizon. We show that this structure arises due to strong instrumental polarization leakage (~30%) towards Cas A, which is far away from the primary field of view. We measure a small ionospheric diffractive scale towards Cas A, resembling pure Kolmogorov turbulence. Our work provides insights into the nature of the aforementioned effects and into mitigating them in future Cosmic Dawn observations.

  17. Effect of a magnetic field on Schwinger mechanism in de Sitter spacetime

    NASA Astrophysics Data System (ADS)

    Bavarsad, Ehsan; Kim, Sang Pyo; Stahl, Clément; Xue, She-Sheng

    2018-01-01

    We investigate the effect of a uniform magnetic field background on scalar QED pair production in a four-dimensional de Sitter spacetime (dS4). We obtain a pair production rate which agrees with the known Schwinger result in the limit of Minkowski spacetime and with Hawking radiation in dS spacetime in the zero electric field limit. Our results describe how the cosmic magnetic field affects the pair production rate in cosmological setups. In addition, using the zeta function regularization scheme we calculate the induced current and examine the effect of a magnetic field on the vacuum expectation value of the current operator. We find that, in the case of a strong electromagnetic background, the current responds as E·B, while in the infrared regime, it responds as B/E, which leads to a phenomenon of infrared hyperconductivity. These results for the induced current have important applications for the cosmic magnetic field evolution.

  18. Understanding the impact of Light cone effect on the EoR/CD 21-cm power spectrum

    NASA Astrophysics Data System (ADS)

    Datta, Kanan K.; Mondal, Rajesh; Ghara, Raghunath; Bharadwaj, Somnath; Choudhury, T. Roy

    2018-05-01

    The redshifted HI 21-cm signal from the cosmic dawn and the epoch of reionization evolves considerably along the line of sight. We study the impact of this evolution (the so-called light cone effect) on the HI 21-cm power spectrum. It is found that the LC effect has a significant impact on the 3D power spectrum, and the change could be up to a factor of a few. The LC effect is particularly strong during the cosmic dawn near the `peaks' and `dips' in the power spectrum when plotted against redshift. We also show that the 3D power spectrum, which can fully describe an ergodic and periodic signal, loses some information regarding the second-order statistics of the signal, as the EoR/CD 21-cm signal is non-ergodic and non-periodic along the line of sight. We show that the multi-frequency angular power spectrum (MAPS), C_ℓ(ν1, ν2), does not rely on these assumptions and can capture the full second-order statistics of the evolving signal.
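
    Assuming the signal is available as HEALPix maps at a set of frequencies, MAPS can be estimated as the cross angular power spectrum between frequency slices; the generic healpy sketch below is an assumption-laden illustration, not the authors' estimator.

    ```python
    # Sketch: estimating MAPS C_l(nu1, nu2) from 21-cm maps at two frequencies.
    import healpy as hp

    def maps_cl(maps, nu1, nu2, lmax=512):
        # anafast with two maps returns the cross angular power spectrum;
        # it reduces to the ordinary C_l when nu1 == nu2.
        return hp.anafast(maps[nu1], maps[nu2], lmax=lmax)

    cl_auto = maps_cl(maps, 150.0, 150.0)   # equal-frequency power spectrum
    cl_cross = maps_cl(maps, 150.0, 151.0)  # decorrelation along the line of sight
    ```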

  19. Cosmic curvature tested directly from observations

    NASA Astrophysics Data System (ADS)

    Denissenya, Mikhail; Linder, Eric V.; Shafieloo, Arman

    2018-03-01

    Cosmic spatial curvature is a fundamental geometric quantity of the Universe. We investigate a model independent, geometric approach to measure spatial curvature directly from observations, without any derivatives of data. This employs strong lensing time delays and supernova distance measurements to measure the curvature itself, rather than just testing consistency with flatness. We define two curvature estimators, with differing error propagation characteristics, that can crosscheck each other, and also show how they can be used to map the curvature in redshift slices, to test constancy of curvature as required by the Robertson-Walker metric. Simulating realizations of redshift distributions and distance measurements of lenses and sources, we estimate uncertainties on the curvature enabled by next generation measurements. The results indicate that the model independent methods, using only geometry without assuming forms for the energy density constituents, can determine the curvature at the ~6×10^-3 level.
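
    One standard way to build such an estimator (not necessarily either of the two defined in the paper) uses the distance sum rule: with dimensionless comoving distances d = (H0/c) D_C to the lens (d_l), to the source (d_s), and between them (d_ls), the Robertson-Walker metric requires d_ls = d_s √(1 + Ωk d_l²) − d_l √(1 + Ωk d_s²), so Ωk can be solved for directly from one lens-source pair. The numbers below are illustrative, not data.

    ```python
    # Sketch: solve the distance sum rule for the curvature Omega_k.
    import numpy as np
    from scipy.optimize import brentq

    def omega_k(d_l, d_s, d_ls):
        f = lambda ok: (d_s * np.sqrt(1 + ok * d_l**2)
                        - d_l * np.sqrt(1 + ok * d_s**2) - d_ls)
        return brentq(f, -1.0, 1.0)      # bracket assumed to contain the root

    print(omega_k(d_l=0.35, d_s=0.78, d_ls=0.44))  # illustrative distances
    ```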

  20. Dark matter (energy) may be indistinguishable from modified gravity (MOND)

    NASA Astrophysics Data System (ADS)

    Sivaram, C.

    For Newtonian dynamics to hold over galactic scales, large amounts of dark matter (DM) are required which would dominate cosmic structures. Accounting for the strong observational evidence that the universe is accelerating requires the presence of an unknown dark energy (DE) component constituting about 70% of the matter. Several ingenious ongoing experiments to detect the DM particles have so far led to negative results. Moreover, the comparable proportions of the DM and DE at the present epoch appear unnatural and not predicted by any theory. For these reasons, alternative ideas like MOND and modification of gravity or general relativity over cosmic scales have been proposed. It is shown in this paper that these alternate ideas may not be easily distinguishable from the usual DM or DE hypotheses. Specific examples are given to illustrate this point that the modified theories are special cases of a generalized DM paradigm.
