Exotica and the status of the strong cosmic censor conjecture in four dimensions
NASA Astrophysics Data System (ADS)
Etesi, Gábor
2017-12-01
An immense class of physical counterexamples to the four dimensional strong cosmic censor conjecture—in its usual broad formulation—is exhibited. More precisely, out of any closed and simply connected 4-manifold an open Ricci-flat Lorentzian 4-manifold is constructed which is not globally hyperbolic, and no perturbation of which, in any sense, can be globally hyperbolic. This very stable non-global-hyperbolicity is the consequence of our open spaces having a ‘creased end’—i.e. an end diffeomorphic to an exotic ℝ4 …
Testing the weak gravity-cosmic censorship connection
NASA Astrophysics Data System (ADS)
Crisford, Toby; Horowitz, Gary T.; Santos, Jorge E.
2018-03-01
A surprising connection between the weak gravity conjecture and cosmic censorship has recently been proposed. In particular, it was argued that a promising class of counterexamples to cosmic censorship in four-dimensional Einstein-Maxwell-Λ theory would be removed if charged particles (with sufficient charge) were present. We test this idea and find that indeed if the weak gravity conjecture is true, one cannot violate cosmic censorship this way. Remarkably, the minimum value of charge required to preserve cosmic censorship appears to agree precisely with that proposed by the weak gravity conjecture.
Gauge-flation and cosmic no-hair conjecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maleknejad, A.; Sheikh-Jabbari, M.M.; Soda, Jiro, E-mail: azade@ipm.ir, E-mail: jabbari@theory.ipm.ac.ir, E-mail: jiro@tap.scphys.kyoto-u.ac.jp
2012-01-01
Gauge-flation, inflation from non-Abelian gauge fields, was introduced in [1, 2]. In this work, we study the cosmic no-hair conjecture in gauge-flation. Starting from Bianchi-type I cosmology and through analytic and numeric studies we demonstrate that the isotropic FLRW inflation is an attractor of the dynamics of the theory and that the anisotropies are damped within a few e-folds, in accord with the cosmic no-hair conjecture.
On the validity of cosmic no-hair conjecture in an anisotropic inflationary model
NASA Astrophysics Data System (ADS)
Do, Tuan Q.
2018-05-01
We present the main results of our recent investigations of the validity of the cosmic no-hair conjecture, proposed by Hawking and his colleagues long ago, in the framework of an anisotropic inflationary model proposed by Kanno, Soda, and Watanabe. We show that the cosmic no-hair conjecture seems to be generally violated in the Kanno-Soda-Watanabe model for both canonical and non-canonical scalar fields, due to the existence of a non-trivial coupling term between the scalar and electromagnetic fields. However, we also show that the validity of the cosmic no-hair conjecture is ensured once an unusual scalar field called the phantom field, whose kinetic energy term is negative definite, is introduced into the Kanno-Soda-Watanabe model.
Violating the Weak Cosmic Censorship Conjecture in Four-Dimensional Anti-de Sitter Space
NASA Astrophysics Data System (ADS)
Crisford, Toby; Santos, Jorge E.
2017-05-01
We consider time-dependent solutions of the Einstein-Maxwell equations using anti-de Sitter (AdS) boundary conditions, and provide the first counterexample to the weak cosmic censorship conjecture in four spacetime dimensions. Our counterexample is entirely formulated in the Poincaré patch of AdS. We claim that our results have important consequences for quantum gravity, most notably to the weak gravity conjecture.
Cosmic censorship and Weak Gravity Conjecture in the Einstein-Maxwell-dilaton theory
NASA Astrophysics Data System (ADS)
Yu, Ten-Yeh; Wen, Wen-Yu
2018-06-01
We explore cosmic censorship in the Einstein-Maxwell-dilaton theory following Wald's thought experiment of destroying a black hole by throwing in a test particle. We find that, at the probe limit, the extremal charged dilaton black hole could be destroyed by a test particle with a specific energy. Nevertheless, the censorship is well protected once backreaction or self-force is included. Finally, we discuss an interesting connection between the Hoop Conjecture and the Weak Gravity Conjecture.
Numerical Tests of the Cosmic Censorship Conjecture via Event-Horizon Finding
NASA Astrophysics Data System (ADS)
Okounkova, Maria; Ott, Christian; Scheel, Mark; Szilagyi, Bela
2015-04-01
We present the current state of our research on the possibility of naked singularity formation in gravitational collapse, numerically testing both the cosmic censorship conjecture and the hoop conjecture. The former posits that all singularities lie behind an event horizon, while the latter conjectures that this is true if collapse occurs from an initial configuration with all circumferences C ≤ 4πM. We reconsider the classical Shapiro & Teukolsky (1991) prolate spheroid naked singularity scenario. Using the exponentially error-convergent Spectral Einstein Code (SpEC) we simulate the collapse of collisionless matter and probe for apparent horizons. We propose a new method to probe for the existence of an event horizon by following characteristics from regions near the singularity, using methods commonly employed in Cauchy characteristic extraction. This research was partially supported by NSF under Award No. PHY-1404569.
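For orientation, the 4πM bound in the hoop criterion quoted above is simply the horizon circumference of a Schwarzschild black hole. A short worked equation, in geometrized units G = c = 1 (my own addition, not part of the abstract):

```latex
% Schwarzschild horizon radius and equatorial circumference (G = c = 1)
r_{\rm h} = 2M, \qquad \mathcal{C}_{\rm h} = 2\pi r_{\rm h} = 4\pi M .
% Hoop conjecture: a horizon forms when and only when every circumference
% of the collapsing configuration satisfies
\mathcal{C} \;\lesssim\; 4\pi M .
```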
NASA Astrophysics Data System (ADS)
Isenberg, James
2017-01-01
The Hawking-Penrose theorems tell us that solutions of Einstein's equations are generally singular, in the sense of the incompleteness of causal geodesics (the paths of physical observers). These singularities might be marked by the blowup of curvature and therefore crushing tidal forces, or by the breakdown of physical determinism. Penrose has conjectured (in his 'Strong Cosmic Censorship Conjecture') that it is generically unbounded curvature that causes singularities, rather than causal breakdown. The verification that 'AVTD behavior' (marked by the domination of time derivatives over space derivatives) is generically present in a family of solutions has proven to be a useful tool for studying model versions of Strong Cosmic Censorship in that family. I discuss some of the history of Strong Cosmic Censorship, then review what is known about AVTD behavior and Strong Cosmic Censorship in families of solutions defined by varying degrees of isometry, and describe recent results which we believe will extend this knowledge and provide new support for Strong Cosmic Censorship. I also comment on some of the recent work on 'Weak Null Singularities', and how this relates to Strong Cosmic Censorship.
End Point of Black Ring Instabilities and the Weak Cosmic Censorship Conjecture.
Figueras, Pau; Kunesch, Markus; Tunyasuvunakool, Saran
2016-02-19
We produce the first concrete evidence that violation of the weak cosmic censorship conjecture can occur in asymptotically flat spaces of five dimensions by numerically evolving perturbed black rings. For certain thin rings, we identify a new, elastic-type instability dominating the evolution, causing the system to settle to a spherical black hole. However, for sufficiently thin rings the Gregory-Laflamme mode is dominant, and the instability unfolds similarly to that of black strings, where the horizon develops a structure of bulges connected by necks which become ever thinner over time.
Numerical Tests of the Cosmic Censorship Conjecture with Collisionless Matter Collapse
NASA Astrophysics Data System (ADS)
Okounkova, Maria; Hemberger, Daniel; Scheel, Mark
2016-03-01
We present our results of numerical tests of the weak cosmic censorship conjecture (CCC), which states that, generically, singularities of gravitational collapse are hidden within black holes, and the hoop conjecture, which states that black holes form when and only when a mass M gets compacted into a region whose circumference in every direction is C ≤ 4πM. We built a smooth particle methods module in SpEC, the Spectral Einstein Code, to simultaneously evolve spacetime and collisionless matter configurations. We monitor the Kretschmann scalar R_{abcd}R^{abcd} for singularity formation, and probe for the existence of apparent horizons. We include in our simulations the prolate spheroid configurations considered in Shapiro and Teukolsky's 1991 numerical study of the CCC. This research was partially supported by the Dominic Orr Fellowship at Caltech.
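As context for the curvature diagnostic monitored above, the standard textbook value of the Kretschmann scalar for the Schwarzschild solution (quoted here only as an illustration, in G = c = 1 units) is

```latex
K \;=\; R_{abcd}R^{abcd} \;=\; \frac{48\,M^{2}}{r^{6}} ,
```

which remains finite at the horizon r = 2M but diverges as r → 0, illustrating why unbounded growth of this invariant in a simulation signals an approaching curvature singularity rather than a coordinate effect.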
Weak cosmic censorship: as strong as ever.
Hod, Shahar
2008-03-28
Spacetime singularities that arise in gravitational collapse are always hidden inside black holes. This is the essence of the weak cosmic censorship conjecture. The hypothesis, put forward by Penrose 40 years ago, is still one of the most important open questions in general relativity. In this Letter, we reanalyze extreme situations which have been considered as counterexamples to the weak cosmic censorship conjecture. In particular, we consider the absorption of scalar particles with large angular momentum by a black hole. Ignoring back-reaction effects may lead one to conclude that the incident wave may overspin the black hole, thereby exposing its inner singularity to distant observers. However, we show that when back-reaction effects are properly taken into account, the stability of the black-hole event horizon is irrefutable. We therefore conclude that cosmic censorship is actually respected in this type of gedanken experiment.
Cosmic censorship conjecture in Kerr-Sen black hole
NASA Astrophysics Data System (ADS)
Gwak, Bogeun
2017-06-01
The validity of the cosmic censorship conjecture for the Kerr-Sen black hole, which is a solution to the low-energy effective field theory of four-dimensional heterotic string theory, is investigated using charged particle absorption. When the black hole absorbs a particle, its parameters change according to the conserved quantities carried by the particle. These changes are constrained by the equation of motion of the particle and are consistent with the laws of thermodynamics. Particle absorption increases the mass of the Kerr-Sen black hole by more than the contributions of the absorbed angular momentum and electric charge; hence, the black hole cannot be overcharged. For the near-extremal black hole, we observe a violation of the cosmic censorship conjecture for the angular momentum at first order in the expansion and for the electric charge at second order. However, considering an adiabatic process in which the conserved quantities are carried into the black hole, we prove the stability of the black hole horizon and thereby resolve the violation. This is consistent with the third law of thermodynamics.
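A schematic version of the overcharging test used in this kind of analysis (standard Wald-style bookkeeping, written here for orientation and not taken verbatim from the paper) compares the first-order changes of the black-hole parameters against the extremality bound:

```latex
% First law for absorption of a particle with energy E, angular momentum L
% and charge q, identified with (\delta M, \delta J, \delta Q):
\delta M \;=\; T\,\delta S \;+\; \Omega_{\rm H}\,\delta J \;+\; \Phi_{\rm H}\,\delta Q ,
% A particle that actually crosses the future horizon must satisfy
\delta M \;\ge\; \Omega_{\rm H}\,\delta J \;+\; \Phi_{\rm H}\,\delta Q ,
% so overcharging requires the final parameters to violate the
% extremality condition of the black hole in question.
```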
Inflation in a renormalizable cosmological model and the cosmic no hair conjecture
NASA Technical Reports Server (NTRS)
Maeda, Kei-Ichi; Stein-Schabes, Jaime A.; Futamase, Toshifumi
1988-01-01
The possibility of having inflation in a renormalizable cosmological model is investigated. The cosmic no-hair conjecture is proved to hold for all Bianchi types except Bianchi IX. By the use of a conformal transformation on the metric, it is shown that these models are equivalent to ones described by the Einstein-Hilbert action for gravity minimally coupled to a set of scalar fields with inflationary potentials. Consequently, it is proven that inflationary solutions behave as attractors in solution space, making inflation a natural event in the evolution of such models.
Ultra High Energy Cosmic Rays: Strangelets?
NASA Astrophysics Data System (ADS)
Xu, Ren-Xin; Wu, Fei
2003-06-01
The conjecture that ultra-high-energy cosmic rays (UHECRs) are actually strangelets is discussed. Besides the fact that strangelets can propagate as cosmic rays beyond the Greisen-Zatsepin-Kuzmin cutoff, another argument supporting the conjecture comes from the study of the formation of TeV-scale microscopic black holes when UHECRs bombard bare strange stars. It is proposed that the exotic quark surface of a bare strange star could be an effective astro-laboratory for investigating extra dimensions and for detecting ultra-high-energy neutrino fluxes. The flux of neutrinos (and other point-like particles) with energy larger than 2.3×10^20 eV is expected to be smaller than 10^-26 cm^-2 s^-1 if there are two extra spatial dimensions.
Cosmic Censorship for Gowdy Spacetimes.
Ringström, Hans
2010-01-01
Due to the complexity of Einstein's equations, it is often natural to study a question of interest in the framework of a restricted class of solutions. One way to impose a restriction is to consider solutions satisfying a given symmetry condition. There are many possible choices, but the present article is concerned with one particular choice, which we shall refer to as Gowdy symmetry. We begin by explaining the origin and meaning of this symmetry type, which has been used as a simplifying assumption in various contexts, some of which we shall mention. Nevertheless, the subject of interest here is strong cosmic censorship. Consequently, after having described what the Gowdy class of spacetimes is, we describe, as seen from the perspective of a mathematician, what is meant by strong cosmic censorship. The existing results on cosmic censorship are based on a detailed analysis of the asymptotic behavior of solutions. This analysis is in part motivated by conjectures, such as the BKL conjecture, which we shall therefore briefly describe. However, the emphasis of the article is on the mathematical analysis of the asymptotics, due to its central importance in the proof and in the hope that it might be of relevance more generally. The article ends with a description of the results that have been obtained concerning strong cosmic censorship in the class of Gowdy spacetimes.
On cosmic censor in high-energy particle collisions
NASA Astrophysics Data System (ADS)
Miyamoto, Umpei
2011-09-01
In the context of large extra-dimension or TeV-scale gravity scenarios, miniature black holes might be produced in collider experiments. Many works have assumed the validity of the cosmic censorship hypothesis, which means that there is no chance to observe trans-Planckian phenomena in such experiments since these phenomena are veiled behind horizons. Here, we argue that "visible borders of spacetime" (effective naked singularities) would be produced, even dominantly over black holes, in collider experiments. Such phenomena would provide us with an arena for quantum gravity.
Rapidly moving cosmic strings and chronology protection
NASA Astrophysics Data System (ADS)
Ori, Amos
1991-10-01
Recently, Gott has provided a family of solutions of the Einstein equations describing pairs of parallel cosmic strings in motion. He has shown that if the strings' relative velocity is sufficiently high, there exist closed timelike curves (CTC's) in the spacetime. Here we show that if there are CTC's in such a solution, then every t=const hypersurface in the spacetime intersects CTC's. Therefore, these solutions do not contradict the chronology protection conjecture of Hawking.
End Point of the Ultraspinning Instability and Violation of Cosmic Censorship
NASA Astrophysics Data System (ADS)
Figueras, Pau; Kunesch, Markus; Lehner, Luis; Tunyasuvunakool, Saran
2017-04-01
We determine the end point of the axisymmetric ultraspinning instability of asymptotically flat Myers-Perry black holes in D =6 spacetime dimensions. In the nonlinear regime, this instability gives rise to a sequence of concentric rings connected by segments of black membrane on the rotation plane. The latter become thinner over time, resulting in the formation of a naked singularity in finite asymptotic time and hence a violation of the weak cosmic censorship conjecture in asymptotically flat higher-dimensional spaces.
Quasinormal Modes and Strong Cosmic Censorship
NASA Astrophysics Data System (ADS)
Cardoso, Vitor; Costa, João L.; Destounis, Kyriakos; Hintz, Peter; Jansen, Aron
2018-01-01
The fate of Cauchy horizons, such as those found inside charged black holes, is intrinsically connected to the decay of small perturbations exterior to the event horizon. As such, the validity of the strong cosmic censorship (SCC) conjecture is tied to how effectively the exterior damps fluctuations. Here, we study massless scalar fields in the exterior of Reissner-Nordström-de Sitter black holes. Their decay rates are governed by quasinormal modes of the black hole. We identify three families of modes in these spacetimes: one directly linked to the photon sphere, well described by standard WKB-type tools; another family whose existence and time scale is closely related to the de Sitter horizon; finally, a third family which dominates for near-extremally charged black holes and which is also present in asymptotically flat spacetimes. The last two families of modes seem to have gone unnoticed in the literature. We give a detailed description of linear scalar perturbations of such black holes, and conjecture that SCC is violated in the near extremal regime.
Is it really naked? On cosmic censorship in string theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frolov, Andrei V.
We investigate the possibility of cosmic censorship violation in string theory using a characteristic double-null code, which penetrates horizons and is capable of resolving the spacetime all the way to the singularity. We perform high-resolution numerical simulations of the evolution of negative-mass initial scalar field profiles, which were argued to provide a counterexample to the cosmic censorship conjecture for AdS-asymptotic spacetimes in five-dimensional supergravity. In no instance is the formation of a naked singularity seen. Instead, numerical evidence indicates that black holes form in the collapse. Our results are consistent with earlier numerical studies, and explicitly show where the 'no black hole' argument breaks down.
Natural inflation and quantum gravity.
de la Fuente, Anton; Saraswat, Prashant; Sundrum, Raman
2015-04-17
Cosmic inflation provides an attractive framework for understanding the early Universe and the cosmic microwave background. It can readily involve energies close to the scale at which quantum gravity effects become important. General considerations of black hole quantum mechanics suggest nontrivial constraints on any effective field theory model of inflation that emerges as a low-energy limit of quantum gravity, in particular, the constraint of the weak gravity conjecture. We show that higher-dimensional gauge and gravitational dynamics can elegantly satisfy these constraints and lead to a viable, theoretically controlled and predictive class of natural inflation models.
A numerical study of Penrose-like inequalities in a family of axially symmetric initial data
NASA Astrophysics Data System (ADS)
Jaramillo, J. L.; Vasset, N.; Ansorg, M.
Our current picture of black hole gravitational collapse relies on two assumptions: (i) the resulting singularity is hidden behind an event horizon (weak cosmic censorship conjecture) and (ii) spacetime eventually settles down to a stationary state. In this setting, it follows that the minimal area enclosing an apparent horizon is bounded by the square of the total ADM mass (Penrose inequality conjecture). Following Dain et al. (2002), we construct numerically a family of axisymmetric initial data with one or several marginally trapped surfaces. Penrose and related geometric inequalities are discussed for these data. As a by-product, it is shown how the Penrose inequality can be used as a diagnostic for an apparent horizon finder numerical routine.
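For reference, the Penrose inequality tested above can be written in its standard form (G = c = 1; A the minimal area enclosing the outermost apparent horizon; this statement of the bound is my own addition for context):

```latex
M_{\rm ADM} \;\ge\; \sqrt{\frac{A}{16\pi}}
\qquad\Longleftrightarrow\qquad
A \;\le\; 16\pi\,M_{\rm ADM}^{2} .
```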
Cosmic censorship in Lovelock theory
NASA Astrophysics Data System (ADS)
Camanho, Xián O.; Edelstein, José D.
2013-11-01
In analyzing maximally symmetric Lovelock black holes with non-planar horizon topologies, many novel features have been observed. The existence of finite-radius singularities, a mass gap in the black hole spectrum, and solutions displaying multiple horizons are noteworthy examples. Naively, in all these cases, the appearance of naked singularities seems unavoidable, leading to the question of whether these theories are consistent gravity theories. We address this question and show that whenever the cosmic censorship conjecture is threatened, an instability generically shows up, driving the system to a new configuration with presumably no naked singularities. The same kind of instability also appears in the process of spherical black hole evaporation in these theories, suggesting a new phase for their decay. We find circumstantial evidence indicating that, contrary to many claims in the literature, the cosmic censorship hypothesis holds in Lovelock theory.
Thermodynamics with pressure and volume under charged particle absorption
NASA Astrophysics Data System (ADS)
Gwak, Bogeun
2017-11-01
We investigate the variation of the charged anti-de Sitter black hole under charged particle absorption when thermodynamic volume is taken into account. When the energy of the particle is considered to contribute to the internal energy of the black hole, the variation exactly matches the prediction of the first law of thermodynamics. Nevertheless, we find that the Bekenstein-Hawking entropy of extremal and near-extremal black holes decreases under the absorption, which is an irreversible process. This violation of the second law of thermodynamics is found only when thermodynamic volume is considered. We test whether the weak cosmic censorship conjecture is affected by this violation. Fortunately, the conjecture remains valid, but extremal and near-extremal black holes do not change their configurations when a particle enters the black hole. This result is quite different from the case in which thermodynamic volume is not considered.
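The "thermodynamic volume" bookkeeping referred to above is the extended (black-hole chemistry) first law, quoted here in its standard textbook form as background rather than as the paper's own equations (G = c = 1):

```latex
% Cosmological constant treated as a pressure, mass as enthalpy:
P \;=\; -\frac{\Lambda}{8\pi}, \qquad
dM \;=\; T\,dS \;+\; \Phi\,dQ \;+\; V\,dP ,
% with V the thermodynamic volume conjugate to P
% (for Reissner-Nordstrom-AdS, V = \tfrac{4}{3}\pi r_+^{3}).
```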
NASA Astrophysics Data System (ADS)
Connell, P. H.
2017-12-01
The University of Valencia has developed a software simulator, LEPTRACK, to simulate lepton and photon scattering in any kind of medium with variable density, permeated by electric/magnetic fields of any geometry, and capable of handling an exponential runaway avalanche. Here we show results of simulating the interaction of electrons/positrons/photons in an incoming TeV cosmic ray shower with the kind of electric fields expected in a storm cloud after a CG discharge which removes much of the positive charge build-up at the centre of the cloud. The point is to show not just a relativistic runaway electron avalanche (RREA) above the upper negative shielding layer at 12 km but also other gamma-ray emission due to electron/positron interactions in the remaining positive charge around 9 km and the lower negative charge at 6 km altitude. We present images, light curves, altitude profiles, spectra and videos showing the different ionization, excitation and photon density fields produced, their time evolution, and how they depend critically on where the cosmic ray shower beam intercepts the electric field geometry. We also show a new effect of incoming positrons, which make up a significant fraction of the shower: they appear to "orbit" within the high-altitude negative shielding layer, which has been conjectured to produce significant microwave emission as well as a short-range 511 keV annihilation line. The interesting question is whether this conjectured emission can be observed and correlated with TGF orbital observations to prove that a TGF originates in the macro-fields of storm clouds or the micro-fields of lightning leaders and streamers, where this positron "orbiting" is not likely to occur.
Linear regression in astronomy. II
NASA Technical Reports Server (NTRS)
Feigelson, Eric D.; Babu, Gutti J.
1992-01-01
A wide variety of least-squares linear regression procedures used in observational astronomy, particularly investigations of the cosmic distance scale, are presented and discussed. The classes of linear models considered are (1) unweighted regression lines, with bootstrap and jackknife resampling; (2) regression solutions when measurement error, in one or both variables, dominates the scatter; (3) methods to apply a calibration line to new data; (4) truncated regression models, which apply to flux-limited data sets; and (5) censored regression models, which apply when nondetections are present. For the calibration problem we develop two new procedures: a formula for the intercept offset between two parallel data sets, which propagates slope errors from one regression to the other; and a generalization of the Working-Hotelling confidence bands to nonstandard least-squares lines. They can provide improved error analysis for Faber-Jackson, Tully-Fisher, and similar cosmic distance scale relations.
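As an illustration of the first class of methods listed above (an unweighted regression line with bootstrap resampling of the data pairs), the sketch below shows the basic recipe with NumPy only; the variable names and the synthetic calibration-style data are hypothetical and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibration sample: x could be a log line width,
# y a log luminosity (synthetic data, for illustration only).
x = rng.uniform(2.0, 3.0, size=60)
y = 3.5 * x - 2.0 + rng.normal(scale=0.15, size=x.size)

def fit_line(x, y):
    """Ordinary least-squares slope and intercept."""
    slope, intercept = np.polyfit(x, y, deg=1)
    return slope, intercept

# Bootstrap: resample (x, y) pairs with replacement and refit each time.
n_boot = 2000
boot = np.empty((n_boot, 2))
for k in range(n_boot):
    idx = rng.integers(0, x.size, size=x.size)
    boot[k] = fit_line(x[idx], y[idx])

slope, intercept = fit_line(x, y)
slope_err, intercept_err = boot.std(axis=0)   # bootstrap standard errors
print(f"slope = {slope:.3f} +/- {slope_err:.3f}, "
      f"intercept = {intercept:.3f} +/- {intercept_err:.3f}")
```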
Do supernovae of Type I play a role in cosmic-ray production?
NASA Technical Reports Server (NTRS)
Shapiro, M. M.
1985-01-01
A model of cosmic-ray origin is suggested which aims to account for some salient features of the composition. Relative to solar abundances, the Galactic cosmic rays (GCR) are deficient in hydrogen and helium (H and He) by an order of magnitude when the two compositions are normalized at iron. Our conjectural model implicates supernovae of Type I (SN-I) as sources of some of the GCR. SN-I occur approximately as often as SN-II, though their genesis is thought to be different. Recent studies of nucleosynthesis in SN-I, based on accreting white dwarfs, find that the elements from Si to Fe are produced copiously. On the other hand, SN-I are virtually devoid of hydrogen, and the upper limits deduced for He are low. If SN-I contribute significantly to the pool of GCR by injecting energetic particles into the interstellar medium (ISM), this could explain why the resulting GCR are relatively deficient in H and He. A test of the model is proposed, and difficulties are discussed.
NASA Astrophysics Data System (ADS)
Hod, Shahar
2018-05-01
The quasinormal resonant modes of massless neutral fields in near-extremal Kerr-Newman-de Sitter black-hole spacetimes are calculated in the eikonal regime. It is explicitly proved that, in the angular momentum regime ā > √(1 − 2Λ̄/4 + Λ̄/3), the black-hole spacetimes are characterized by slowly decaying resonant modes which are described by the compact formula ℑω^(n) = κ₊ · (n + 1/2) [here the physical parameters {ā, κ₊, Λ̄, n} are, respectively, the dimensionless angular momentum of the black hole, its characteristic surface gravity, the dimensionless cosmological constant of the spacetime, and the integer resonance parameter]. Our results support the validity of the Penrose strong cosmic censorship conjecture in these black-hole spacetimes.
Naked singularities in higher dimensional Vaidya space-times
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghosh, S. G.; Dadhich, Naresh
We investigate the end state of the gravitational collapse of a null fluid in higher-dimensional space-times. Both naked singularities and black holes are shown to be developing as the final outcome of the collapse. The naked singularity spectrum in a collapsing Vaidya region (4D) gets covered with the increase in dimensions and hence higher dimensions favor a black hole in comparison to a naked singularity. The cosmic censorship conjecture will be fully respected for a space of infinite dimension.
Can accretion disk properties observationally distinguish black holes from naked singularities?
NASA Astrophysics Data System (ADS)
Kovács, Z.; Harko, T.
2010-12-01
Naked singularities are hypothetical astrophysical objects, characterized by a gravitational singularity without an event horizon. Penrose has proposed a conjecture according to which there exists a cosmic censor who forbids the occurrence of naked singularities. Distinguishing between astrophysical black holes and naked singularities is a major challenge for present-day observational astronomy. In the context of stationary and axially symmetric geometries, one possibility of differentiating naked singularities from black holes is through the comparative study of the properties of thin accretion disks around rotating naked singularities and Kerr-type black holes, respectively. In the present paper, we consider accretion disks around axially symmetric rotating naked singularities, obtained as solutions of the field equations in the Einstein-massless scalar field theory. A first major difference between rotating naked singularities and Kerr black holes is in the frame dragging effect, the angular velocity of a rotating naked singularity being inversely proportional to its spin parameter. Because of the differences in the exterior geometry, the thermodynamic and electromagnetic properties of the disks (energy flux, temperature distribution and equilibrium radiation spectrum) are different for these two classes of compact objects, consequently giving clear observational signatures that could discriminate between black holes and naked singularities. For specific values of the spin parameter and of the scalar charge, the energy flux from the disk around a rotating naked singularity can exceed by several orders of magnitude the flux from the disk of a Kerr black hole. In addition, it is also shown that the conversion efficiency of the accreting mass into radiation by rotating naked singularities is always higher than the conversion efficiency for black holes, i.e., naked singularities provide a much more efficient mechanism for converting mass into radiation than black holes. Thus, these observational signatures may provide the necessary tools for clearly distinguishing rotating naked singularities from Kerr-type black holes.
Simulating Terrestrial Gamma Ray Flashes due to cosmic ray shower electrons and positrons
NASA Astrophysics Data System (ADS)
Connell, Paul
2017-04-01
The University of Valencia has developed a software simulator LEPTRACK to simulate the relativistic runaway electron avalanches, RREA, that are presumed to be the cause of Terrestrial Gamma Ray Flashes and their powerful accompanying Ionization/Excitation Flashes. We show here results of LEPTRACK simulations of RREA by the interaction of MeV energy electrons/positrons and photons in cosmic ray showers traversing plausible electric field geometries expected in storm clouds. The input beams of MeV shower products were created using the CORSIKA software package from the Karlsruhe Institute of Technology. We present images, videos and plots showing the different Ionization, Excitation and gamma-ray photon density fields produced, along with their time and spatial profile evolution, which depend critically on where the line of shower particles intercept the electric field geometry. We also show a new effect of incoming positrons in the shower, which make up a significant fraction of shower products, in particular their apparent "orbiting" within a high altitude negative induced shielding charge layer, which has been conjectured to produce a signature microwave emission, as well as a short range 511 keV annihilation line. The interesting question posed is if this conjectured positron emission can be observed and correlated with TGF orbital observations to show if a TGF originates in the macro E-fields of storm clouds or the micro E-fields of lightning leaders where this positron "orbiting" is not likely to occur.
No time machine construction in open 2+1 gravity with timelike total energy-momentum
NASA Astrophysics Data System (ADS)
Tiglio, Manuel H.
1998-09-01
It is shown that in (2+1)-dimensional gravity an open spacetime with timelike sources and timelike total energy-momentum cannot have a stable compactly generated Cauchy horizon. This constitutes a proof of a version of Kabat's conjecture and shows, in particular, not only that a Gott time machine cannot be formed from processes such as the decay of a single cosmic string, as has been shown by Carroll et al., but that, in a precise sense, a time machine cannot be constructed at all.
Five-dimensional Myers-Perry black holes cannot be overspun in gedanken experiments
NASA Astrophysics Data System (ADS)
An, Jincheng; Shan, Jieru; Zhang, Hongbao; Zhao, Suting
2018-05-01
We apply the new version of a gedanken experiment designed recently by Sorce and Wald to overspin the five-dimensional Myers-Perry black holes. As a result, the extremal black holes cannot be overspun at the linear order. On the other hand, although the nearly extremal black holes could be overspun at the linear order, this process is shown to be prohibited by the quadratic order correction. Thus, no violation of the weak cosmic censorship conjecture occurs around the five-dimensional Myers-Perry black holes.
de Sitter space as a tensor network: Cosmic no-hair, complementarity, and complexity
NASA Astrophysics Data System (ADS)
Bao, Ning; Cao, ChunJun; Carroll, Sean M.; Chatwin-Davies, Aidan
2017-12-01
We investigate the proposed connection between de Sitter spacetime and the multiscale entanglement renormalization ansatz (MERA) tensor network, and ask what can be learned via such a construction. We show that the quantum state obeys a cosmic no-hair theorem: the reduced density operator describing a causal patch of the MERA asymptotes to a fixed point of a quantum channel, just as spacetimes with a positive cosmological constant asymptote to de Sitter space. The MERA is potentially compatible with a weak form of complementarity (local physics only describes single patches at a time, but the overall Hilbert space is infinite dimensional) or, with certain specific modifications to the tensor structure, a strong form (the entire theory describes only a single patch plus its horizon, in a finite-dimensional Hilbert space). We also suggest that de Sitter evolution has an interpretation in terms of circuit complexity, as has been conjectured for anti-de Sitter space.
Strange quark matter fragmentation in astrophysical events
NASA Astrophysics Data System (ADS)
Paulucci, L.; Horvath, J. E.
2014-06-01
The conjecture of Bodmer-Witten-Terazawa suggesting a form of quark matter (strange quark matter) as the ground state of hadronic interactions has been studied in laboratory and astrophysical contexts by a large number of authors. If strange stars exist, some violent events involving these compact objects, such as mergers and even their formation process, might eject some strange matter into the interstellar medium that could be detected as a trace signal in the cosmic ray flux. To evaluate this possibility, it is necessary to understand how this matter in bulk would fragment in the form of strangelets (small lumps of strange quark matter in which finite-size effects become important). We calculate the mass distribution outcome using the statistical multifragmentation model and point out several caveats affecting it. In particular, the possibility that strangelet fragmentation will produce a tiny fraction of contamination in the cosmic ray flux is discussed.
Is patience a virtue? Cosmic censorship of infrared effects in de Sitter
NASA Astrophysics Data System (ADS)
Ferreira, Ricardo Z.; Sandora, Mccullen; Sloth, Martin S.
While the accumulation of long-wavelength modes during inflation wreaks havoc on the large-scale structure of spacetime, the question of whether their presence is even observable by any local observer has led to considerable confusion. Though it is commonly agreed that infrared effects are not visible to a single sub-horizon observer at late times, we argue that the question is less trivial for a patient observer who has lived long enough to have a record of the state before the soft mode was created. Although classically there is no obstruction to measuring this effect locally, we give several indications that quantum mechanical uncertainties censor the effect, rendering the observation of long modes ultimately forbidden.
Willems, Sjw; Schat, A; van Noorden, M S; Fiocco, M
2018-02-01
Censored data make survival analysis more complicated because exact event times are not observed. Statistical methodology developed to account for censored observations assumes that patients' withdrawal from a study is independent of the event of interest. However, in practice, some covariates might be associated with both the lifetime and the censoring mechanism, inducing dependent censoring. In this case, standard survival techniques, like the Kaplan-Meier estimator, give biased results. The inverse probability censoring weighted estimator was developed to correct for bias due to dependent censoring. In this article, we explore the use of inverse probability censoring weighting methodology and describe why it is effective in removing the bias. Since implementing this method is highly time-consuming and requires programming and mathematical skills, we propose a user-friendly algorithm in R. Applications to a toy example and to a medical data set illustrate how the algorithm works. A simulation study was carried out to investigate the performance of the inverse probability censoring weighted estimators in situations where dependent censoring is present in the data. In the simulation process, different sample sizes, strengths of the censoring model, and percentages of censored individuals were chosen. Results show that in each scenario inverse probability censoring weighting reduces the bias induced in the traditional Kaplan-Meier approach where dependent censoring is ignored.
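A minimal sketch of the inverse probability of censoring weighting (IPCW) idea described above, using NumPy only (the function names and toy data are hypothetical; the paper's own R implementation is not reproduced here): each observed event is up-weighted by the inverse of the estimated probability of still being uncensored at its event time. For genuinely dependent censoring, the censoring distribution G would be modeled conditional on covariates (e.g. with a Cox model); a marginal Kaplan-Meier estimate of G is used below for brevity.

```python
import numpy as np

def km_censoring_survival(time, event):
    """Kaplan-Meier estimate of the censoring survival function G(t):
    censorings (event == 0) are treated as the 'events' here."""
    order = np.argsort(time)
    t, e = time[order], event[order]
    at_risk = np.arange(len(t), 0, -1)       # number still at risk at each time
    factors = 1.0 - (e == 0) / at_risk       # G drops only at censoring times
    return t, np.cumprod(factors)            # non-increasing step function

def ipcw_survival(time, event, grid):
    """IPCW estimate of the event survival function S(t) on a grid of times."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    t_sorted, g = km_censoring_survival(time, event)

    # G evaluated just before each subject's own time (simple left-limit version)
    g_minus = np.ones_like(time)
    for i, ti in enumerate(time):
        mask = t_sorted < ti
        if mask.any():
            g_minus[i] = g[mask][-1]

    n = len(time)
    surv = np.empty(len(grid))
    for j, t0 in enumerate(grid):
        weights = ((event == 1) & (time <= t0)) / np.maximum(g_minus, 1e-12)
        surv[j] = 1.0 - weights.sum() / n    # IPCW-corrected survival
    return surv

# Toy example: exponential event times with independent censoring
rng = np.random.default_rng(1)
t_event = rng.exponential(10.0, size=500)
t_cens = rng.exponential(15.0, size=500)
time = np.minimum(t_event, t_cens)
event = (t_event <= t_cens).astype(int)
print(ipcw_survival(time, event, grid=[2.0, 5.0, 10.0]))
```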
Void asymmetries in the cosmic web: a mechanism for bulk flows
NASA Astrophysics Data System (ADS)
Bland-Hawthorn, J.; Sharma, S.
2016-10-01
Bulk flows of galaxies moving with respect to the cosmic microwave background are well established observationally and seen in the most recent ΛCDM simulations. With the aid of an idealised Gadget-2 simulation, we show that void asymmetries in the cosmic web can exacerbate local bulk flows of galaxies. The Cosmicflows-2 survey, which has mapped in detail the 3D structure of the Local Universe, reveals that the Local Group resides in a "local sheet" of galaxies that borders a "local void" with a diameter of about 40 Mpc. The void is emptying out at a rate of 16 km s^-1 Mpc^-1. In a co-moving frame, the Local Sheet is found to be moving away from the Local Void at ~260 km s^-1. Our model shows how asymmetric collapse due to unbalanced voids on either side of a developing sheet or wall can lead to a systematic movement of the sheet. We conjectured that asymmetries could lead to a large-scale separation of dark matter and baryons, thereby driving a dependence of galaxy properties on environment, but we do not find any evidence for this effect.
NASA Astrophysics Data System (ADS)
Van de Moortel, Maxime
2018-05-01
We show non-linear stability and instability results in spherical symmetry for the interior of a charged black hole—approaching a sub-extremal Reissner-Nordström background fast enough—in the presence of a massive and charged scalar field, motivated by the strong cosmic censorship conjecture in that setting: 1. Stability. We prove that spherically symmetric characteristic initial data for the Einstein-Maxwell-Klein-Gordon equations approaching a Reissner-Nordström background with a sufficiently fast polynomial decay rate on the event horizon give rise to a space-time possessing a Cauchy horizon in a neighbourhood of time-like infinity. Moreover, if the decay is even stronger, we prove that the space-time metric admits a continuous extension to the Cauchy horizon. This generalizes the celebrated stability result of Dafermos for the Einstein-Maxwell-real-scalar-field system in spherical symmetry. 2. Instability. We prove that for the class of space-times considered in the stability part, whose scalar field in addition obeys a polynomial averaged-L² (consistent) lower bound on the event horizon, the scalar field obeys an integrated lower bound transversally to the Cauchy horizon. As a consequence we prove that the non-degenerate energy is infinite on any null surface crossing the Cauchy horizon and the curvature of a geodesic vector field blows up at the Cauchy horizon near time-like infinity. This generalizes an instability result due to Luk and Oh for the Einstein-Maxwell-real-scalar-field system in spherical symmetry. This instability of the black hole interior can also be viewed as a step towards the resolution of the C² strong cosmic censorship conjecture for one-ended asymptotically flat initial data.
Cosmic strings and chronology protection
NASA Astrophysics Data System (ADS)
Grant, James D. E.
1993-03-01
A space consisting of two rapidly moving cosmic strings has recently been constructed by Gott that contains closed timelike curves. The global structure of this space is analyzed and it is found that, away from the strings, the space is identical to a generalized Misner space. The vacuum expectation value of the energy-momentum tensor for a conformally coupled scalar field is calculated on this generalized Misner space. It is found to diverge very weakly on the chronology horizon, but more strongly on the polarized hypersurfaces. The divergence on the polarized hypersurfaces is strong enough that when the proper geodesic interval around any polarized hypersurface is of the order of the Planck length squared, the perturbation to the metric caused by the back reaction will be of the order one. Thus we expect the structure of the space will be radically altered by the back reaction before quantum gravitational effects become important. This suggests that Hawking's ``chronology protection conjecture'' holds for spaces with a noncompactly generated chronology horizon.
Cosmic equilibration: A holographic no-hair theorem from the generalized second law
NASA Astrophysics Data System (ADS)
Carroll, Sean M.; Chatwin-Davies, Aidan
2018-02-01
In a wide class of cosmological models, a positive cosmological constant drives cosmological evolution toward an asymptotically de Sitter phase. Here we connect this behavior to the increase of entropy over time, based on the idea that de Sitter spacetime is a maximum-entropy state. We prove a cosmic no-hair theorem for Robertson-Walker and Bianchi I spacetimes that admit a Q-screen ("quantum" holographic screen) with certain entropic properties: If generalized entropy, in the sense of the cosmological version of the generalized second law conjectured by Bousso and Engelhardt, increases up to a finite maximum value along the screen, then the spacetime is asymptotically de Sitter in the future. Moreover, the limiting value of generalized entropy coincides with the de Sitter horizon entropy. We do not use the Einstein field equations in our proof, nor do we assume the existence of a positive cosmological constant. As such, asymptotic relaxation to a de Sitter phase can, in a precise sense, be thought of as cosmological equilibration.
Fenwick, Elisabeth; Marshall, Deborah A; Blackhouse, Gordon; Vidaillet, Humberto; Slee, April; Shemanski, Lynn; Levy, Adrian R
2008-01-01
Losses to follow-up and administrative censoring can cloud the interpretation of trial-based economic evaluations. A number of investigators have examined the impact of different levels of adjustment for censoring, including nonadjustment, adjustment of effects only, and adjustment for both costs and effects. Nevertheless, there is a lack of research on the impact of censoring on decision-making. The objective of this study was to estimate the impact of adjustment for censoring on the interpretation of cost-effectiveness results and the expected value of perfect information (EVPI), using a trial-based analysis that compared rate- and rhythm-control treatments for persons with atrial fibrillation. Three different levels of adjustment for censoring were examined: no censoring of costs and effects, censoring of effects only, and censoring of both costs and effects. In each case, bootstrapping was used to estimate the uncertainty in costs and effects, and the EVPI was calculated to determine the potential worth of further research. Censoring did not impact the adoption decision. Nevertheless, this was not the case for the decision uncertainty or the EVPI. For a threshold of $50,000 per life-year, the EVPI varied between $626,000 (partial censoring) and $117 million (full censoring) for the eligible US population. The level of adjustment for censoring in trial-based cost-effectiveness analyses can impact the decisions to fund a new technology and to devote resources to further research. Only when censoring is taken into account for both costs and effects are these decisions appropriately addressed.
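A compact sketch of the per-person EVPI calculation described above, assuming bootstrapped costs and effects for each strategy are already available as arrays (the names and numbers below are hypothetical; this is the generic net-benefit formula, not the study's own code):

```python
import numpy as np

def evpi_per_person(costs, effects, wtp):
    """Per-person expected value of perfect information.

    costs, effects : arrays of shape (n_bootstrap, n_strategies)
    wtp            : willingness-to-pay threshold (e.g. 50000 $/life-year)
    """
    nb = wtp * effects - costs                  # net benefit per bootstrap draw
    best_given_info = nb.max(axis=1).mean()     # choose per draw (perfect information)
    best_on_average = nb.mean(axis=0).max()     # choose once, under current uncertainty
    return best_given_info - best_on_average

# Toy example with two strategies (say, rate control vs rhythm control)
rng = np.random.default_rng(2)
costs = rng.normal([20000.0, 24000.0], 2000.0, size=(5000, 2))
effects = rng.normal([4.00, 4.05], 0.10, size=(5000, 2))
print(f"per-person EVPI: ${evpi_per_person(costs, effects, wtp=50000.0):,.0f}")
# Population EVPI = per-person EVPI x (discounted) size of the eligible population.
```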
Xu, Yonghong; Gao, Xiaohuan; Wang, Zhengxi
2014-04-01
Missing data represent a general problem in many scientific fields, especially in medical survival analysis. When dealing with censored data, interpolation is one of the important methods. However, most interpolation methods replace the censored data with exact data, which distorts the real distribution of the censored data and reduces the probability of the real data falling within the interpolated values. In order to solve this problem, we propose in this paper a nonparametric method for estimating the survival function of right-censored and interval-censored data and compare its performance to the SC (self-consistent) algorithm. Compared to the average interpolation and the nearest neighbor interpolation methods, the proposed method replaces the right-censored data with interval-censored data, and greatly improves the probability of the real data falling into the imputation interval. It then uses empirical distribution theory to estimate the survival function of right-censored and interval-censored data. The results of numerical examples and a real breast cancer data set demonstrate that the proposed method has higher accuracy and better robustness for different proportions of censored data. This paper provides a good method for comparing the performance of clinical treatments by estimating the survival data of the patients, and thus provides some help to medical survival data analysis.
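For orientation, the SC (self-consistent) algorithm used as the comparison above is essentially Turnbull's EM iteration for interval-censored data. The sketch below is a simplified version under stated assumptions: probability mass is placed on a grid of finite interval endpoints rather than on the Turnbull intervals of the exact NPMLE, and the data are hypothetical.

```python
import numpy as np

def sc_survival(left, right, n_iter=500, tol=1e-8):
    """Simplified Turnbull/self-consistent estimate for interval-censored data.

    Each observation is an interval (left_i, right_i]; right-censored
    observations use right_i = np.inf.  Mass is placed on the finite endpoints.
    """
    left, right = np.asarray(left, float), np.asarray(right, float)
    support = np.unique(np.concatenate([left, right[np.isfinite(right)]]))
    # alpha[i, j] = True if support point j is compatible with observation i
    alpha = (support[None, :] > left[:, None]) & (support[None, :] <= right[:, None])
    # right-censored intervals beyond the largest endpoint get the last point
    alpha[:, -1] |= ~alpha.any(axis=1)

    p = np.full(len(support), 1.0 / len(support))
    for _ in range(n_iter):
        denom = np.maximum(alpha @ p, 1e-300)               # P(obs i | current p)
        p_new = (alpha * p / denom[:, None]).mean(axis=0)   # EM / self-consistency update
        if np.max(np.abs(p_new - p)) < tol:
            p = p_new
            break
        p = p_new
    return support, 1.0 - np.cumsum(p)   # survival just after each support point

# Toy data: narrow intervals plus right-censored observations (right = inf)
left = [1.0, 2.0, 2.5, 3.0, 4.0, 5.0]
right = [1.5, 2.5, np.inf, 3.5, np.inf, 6.0]
pts, S = sc_survival(left, right)
print(np.round(pts, 2), np.round(S, 3))
```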
Implications for the missing low-mass galaxies (satellites) problem from cosmic shear
NASA Astrophysics Data System (ADS)
Jimenez, Raul; Verde, Licia; Kitching, Thomas D.
2018-06-01
The number of observed dwarf galaxies with dark matter mass ≲ 10^11 M⊙ in the Milky Way or the Andromeda galaxy does not agree with predictions from the successful ΛCDM paradigm. To alleviate this problem a suppression of dark matter clustering power on very small scales has been conjectured. However, the abundance of dark matter halos outside our immediate neighbourhood (the Local Group) seems to agree with the ΛCDM-expected abundance. Here we connect these problems to observations of weak lensing cosmic shear, pointing out that cosmic shear can make significant statements about the missing satellites problem in a statistical way. As an example and pedagogical application we use recent constraints on small-scale power suppression from measurements of the CFHTLenS data. We find that, on average, in a region of ~Gpc^3 there is no significant small-scale power suppression. This implies that suppression of small-scale power is not a viable solution to the 'missing satellites problem' or, alternatively, that on average in this volume there is no 'missing satellites problem' for dark matter masses ≳ 5 × 10^9 M⊙. Further analysis of current and future weak lensing surveys will probe much smaller scales, k > 10 h Mpc^-1, corresponding roughly to masses M < 10^9 M⊙.
Black Hole Spin Evolution and Cosmic Censorship
NASA Astrophysics Data System (ADS)
Chen, W.; Cui, W.; Zhang, S. N.
1999-04-01
We show that the accretion process in X-ray binaries is not likely to spin up or spin down the accreting black holes, owing to the short lifetime of the system or the lack of sufficient mass supply from the donor star. Therefore, the black hole mass and spin distribution we observe today also reflects that at birth and places interesting constraints on supernova explosion models across the mass spectrum. On the other hand, it has long been a puzzle that accretion from a Keplerian accretion disk with a large enough mass supply might spin up the black hole to extremality, thus violating Penrose's cosmic censorship conjecture and the third law of black hole dynamics. This prompted Thorne to propose an astrophysical solution which caps the maximum attainable black hole spin at a value slightly below unity. We show that the black hole will never reach the extreme Kerr state under any circumstances by accreting Keplerian angular momentum from the last stable orbit, and that cosmic censorship will always be upheld. The maximum black hole spin which can be reached for a fixed, astrophysically meaningful accretion rate is, however, very close to unity; thus the peak spin rate of black holes one can hope to observe in Nature is still 0.998, the Thorne limit.
Orientation of cosmic web filaments with respect to the underlying velocity field
NASA Astrophysics Data System (ADS)
Tempel, E.; Libeskind, N. I.; Hoffman, Y.; Liivamägi, L. J.; Tamm, A.
2014-01-01
The large-scale structure of the Universe is characterized by a web-like structure made of voids, sheets, filaments and knots. The structure of this so-called cosmic web is dictated by the local velocity shear tensor. In particular, the local direction of a filament should be strongly aligned with ê₃, the eigenvector associated with the smallest eigenvalue of the tensor. That conjecture is tested here on the basis of a cosmological simulation. The cosmic web delineated by the halo distribution is probed by a marked point process with interactions (the Bisous model), detecting filaments directly from the halo distribution (P-web). The detected P-web filaments are found to be strongly aligned with the local ê₃: the alignment is within 30° for ~80 per cent of the elements. This indicates that large-scale filaments defined purely from the distribution of haloes carry more than just morphological information, although the Bisous model does not make any prior assumption on the underlying shear tensor. The P-web filaments are also compared to the structure revealed from the velocity shear tensor itself (V-web). In the densest regions, the P- and V-web filaments overlap well (90 per cent), whereas in lower density regions, the P-web filaments preferentially mark sheets in the V-web.
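A minimal sketch of the alignment measurement described above: diagonalize a symmetric velocity shear tensor at a filament's location, take the eigenvector of the smallest eigenvalue as ê₃, and compute its angle to the filament spine direction. The tensor entries and the filament direction below are made up for illustration; the 30° figure is the alignment criterion quoted in the abstract.

```python
import numpy as np

def e3_alignment(shear, filament_dir):
    """Angle (degrees) between the filament direction and e3,
    the eigenvector of the smallest eigenvalue of the shear tensor."""
    shear = 0.5 * (shear + shear.T)            # symmetrize, just in case
    eigvals, eigvecs = np.linalg.eigh(shear)   # eigh: eigenvalues in ascending order
    e3 = eigvecs[:, 0]                         # smallest eigenvalue -> e3
    d = filament_dir / np.linalg.norm(filament_dir)
    cos_theta = abs(np.dot(e3, d))             # axial directions: sign is irrelevant
    return np.degrees(np.arccos(np.clip(cos_theta, 0.0, 1.0)))

# Hypothetical shear tensor and filament spine direction
shear = np.array([[0.8, 0.1, 0.0],
                  [0.1, 0.3, 0.1],
                  [0.0, 0.1, -0.5]])
filament = np.array([0.1, 0.2, 1.0])
print(f"alignment angle: {e3_alignment(shear, filament):.1f} deg  "
      f"(aligned if < 30 deg)")
```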
NASA Astrophysics Data System (ADS)
Magnon, Anne
2005-04-01
A non geometric cosmology is presented, based on logic of observability, where logical categories of our perception set frontiers to comprehensibility. The Big-Bang singularity finds here a substitute (comparable to a "quantum jump"): a logical process (tied to self-referent and divisible totality) by which information emerges, focalizes on events and recycles, providing a transition from incoherence to causal coherence. This jump manufactures causal order and space-time localization, as exact solutions to Einstein's equation, where the last step of the process disentangles complex Riemann spheres into real null-cones (a geometric overturning imposed by self-reference, reminding us of our ability to project the cosmos within our mental sphere). Concepts such as antimatter and dark energy (dual entities tied to bifurcations or broken symmetries, and their compensation), are presented as hidden in the virtual potentialities, while irreversible time appears with the recycling of information and related flow. Logical bifurcations (such as the "part-totality" category, a quantum of information which owes its recycling to non localizable logical separations, as anticipated by unstability or horizon dependence of the quantum vacuum) induce broken symmetries, at the (complex or real) geometric level [eg. the antiselfdual complex non linear graviton solutions, which break duality symmetry, provide a model for (hidden) anti-matter, itself compensated with dark-energy, and providing, with space-time localization, the radiative gravitational energy (Bondi flux and related bifurcations of the peeling off type), as well as mass of isolated bodies]. These bifurcations are compensated by inertial effects (non geometric precursors of the Coriolis forces) able to explain (on logical grounds) the cosmic expansion (a repulsion?) and critical equilibrium of the cosmic tissue. Space-time environment, itself, emerges through the jump, as a censor to totality, a screen to incoherence (as anticipated by black-hole event horizons, cosmic censors able to shelter causal geometry). In analogy with black-hole singularities, the Big-Bang can be viewed as a geometric hint that a transition from incoherence to (causal space-time) localization and related coherence (comprehensibility), is taking place (space-time demolition, a reverse process towards incoherence or information recycling, is expected in the vicinity of singularities, as hinted by black-holes and related "time-machines"). A theory of the emergence of perception (and life?), in connection with observability and the function of partition (able to screen totality), is on its way [interface incoherence-coherence, sleeping and awaking states of localization, horizons of perception etc, are anticipated by black-hole event horizons, beyond which a non causal, dimensionless incoherent regime or memorization process, presents itself with the loss of localization, suggesting a unifying regime (ultimate energies?) hidden in cosmic potentialities]. The decoherence process presented here, suggests an ultimate interaction, expression of the logical relation of subsystems to totality, and to be identified to the flow of information or its recycling through cosmic jump (this is anticipated by the dissipation of distance or hierarchies on null-cones, themselves recycled with information and events). The geometric projection of this unified irreversible dynamics is expressed by unified Yang-Mills field equations (coupled to Einsteinian gravity). 
An ultimate form of action ("set"-volumes of information) presents itself, whose extrema can be achieved through extremal transfer of information and related partition of cells of information (thus anticipating the mitosis of living cells, possibly triggered at the non localizable level, as imposed by the logical regime of cosmic decoherence: participating subsystems ?). The matching of the objective and subjective facets of (information and) decoherences is perceived as contact with a reality.
Effect of censoring trace-level water-quality data on trend-detection capability
Gilliom, R.J.; Hirsch, R.M.; Gilroy, E.J.
1984-01-01
Monte Carlo experiments were used to evaluate whether trace-level water-quality data that are routinely censored (not reported) contain valuable information for trend detection. Measurements are commonly censored if they fall below a level associated with some minimum acceptable level of reliability (detection limit). Trace-level organic data were simulated with best- and worst-case estimates of measurement uncertainty, various concentrations and degrees of linear trend, and different censoring rules. The resulting classes of data were subjected to a nonparametric statistical test for trend. For all classes of data evaluated, trends were detected more effectively in uncensored data than in censored data, even when the censored data were highly unreliable. Thus, censoring data at any concentration level may eliminate valuable information. Whether or not valuable information for trend analysis is, in fact, eliminated by censoring of actual rather than simulated data depends on whether the analytical process is in statistical control and bias is predictable for a particular type of chemical analysis.
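The abstract does not name the specific nonparametric trend test; the sketch below assumes a Mann-Kendall-type test and a simple censoring rule in which non-detects are reported at the detection limit. The sample size, trend magnitude, error level and detection limit are illustrative values, not taken from the study.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall_s(y):
    """Mann-Kendall S statistic: sum of pairwise signs over all ordered pairs."""
    s = 0
    for i in range(len(y) - 1):
        s += np.sign(y[i + 1:] - y[i]).sum()
    return s

def mk_test(y):
    """Normal approximation to the Mann-Kendall trend test (no tie correction)."""
    n = len(y)
    s = mann_kendall_s(y)
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    return 2 * norm.sf(abs(z))  # two-sided p-value

rng = np.random.default_rng(0)
n, trend, detection_limit = 40, 0.02, 1.0
true = 0.8 + trend * np.arange(n)                 # weak upward trend near the detection limit
measured = true * rng.lognormal(0.0, 0.25, n)     # multiplicative measurement error

p_uncensored = mk_test(measured)                  # report every instrument-generated value
censored = np.where(measured < detection_limit, detection_limit, measured)
p_censored = mk_test(censored)                    # non-detects collapse to a single value
print(f"p (uncensored) = {p_uncensored:.3f}, p (censored at DL) = {p_censored:.3f}")
```

Running the two calls side by side illustrates the paper's point: collapsing sub-detection values to a single reported number discards ordering information that a nonparametric trend test could otherwise exploit.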
The Censored Mean-Level Detector for Multiple Target Environments.
1984-03-01
...constant false alarm rate (CFAR) detectors known as censored mean-level detectors (CMLD). The censored mean-level detector (CMLD) is a generalization of the traditional mean-level detector (MLD), or cell-averaged CFAR detector, which is a special case of the CMLD...
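A minimal sketch of the detector family described in this (truncated) entry, assuming the usual formulation: a cell-averaging CFAR threshold computed after discarding the largest reference cells, so that interfering targets in the reference window do not inflate the threshold. The window sizes, censoring depth and threshold multiplier are illustrative placeholders, not values from the report.

```python
import numpy as np

def cmld_detect(x, cell, n_ref=16, n_guard=2, n_censor=4, scale=8.0):
    """Censored mean-level detection for a single cell under test.

    The reference window around `cell` (excluding guard cells) is sorted and the
    `n_censor` largest samples are discarded before averaging; with n_censor = 0
    this reduces to the ordinary cell-averaged (mean-level) CFAR detector.
    """
    half = n_ref // 2
    lead = x[cell - n_guard - half: cell - n_guard]
    lag = x[cell + n_guard + 1: cell + n_guard + 1 + half]
    ref = np.sort(np.concatenate([lead, lag]))
    if n_censor > 0:
        ref = ref[:-n_censor]                  # censor the largest reference samples
    threshold = scale * ref.mean()             # adaptive threshold
    return x[cell] > threshold, threshold

# Illustrative data: exponential clutter plus two closely spaced targets.
rng = np.random.default_rng(1)
x = rng.exponential(1.0, 200)
x[100] += 12.0          # weaker target in the cell under test
x[104] += 60.0          # strong interfering target inside the reference window
for nc in (0, 4):
    hit, thr = cmld_detect(x, 100, n_censor=nc)
    print(f"n_censor={nc}: threshold={thr:.2f}, detected={hit}")
```

In a typical run, the interfering target raises the plain cell-averaged threshold enough to mask the weaker target, while the censored version removes it from the average and keeps the detection.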
Antweiler, Ronald C.; Taylor, Howard E.
2008-01-01
The main classes of statistical treatment of below-detection limit (left-censored) environmental data for the determination of basic statistics that have been used in the literature are substitution methods, maximum likelihood, regression on order statistics (ROS), and nonparametric techniques. These treatments, along with using all instrument-generated data (even those below detection), were evaluated by examining data sets in which the true values of the censored data were known. It was found that for data sets with less than 70% censored data, the best technique overall for determination of summary statistics was the nonparametric Kaplan-Meier technique. ROS and the two substitution methods of assigning one-half the detection limit value to censored data or assigning a random number between zero and the detection limit to censored data were adequate alternatives. The use of these two substitution methods, however, requires a thorough understanding of how the laboratory censored the data. The technique of employing all instrument-generated data - including numbers below the detection limit - was found to be less adequate than the above techniques. At high degrees of censoring (greater than 70% censored data), no technique provided good estimates of summary statistics. Maximum likelihood techniques were found to be far inferior to all other treatments except substituting zero or the detection limit value to censored data.
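A minimal sketch of the two substitution rules discussed above, applied to a simulated data set whose uncensored values are known (the Kaplan-Meier and ROS computations are omitted); the lognormal distribution and the roughly 40% censoring level are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)
true = rng.lognormal(mean=0.0, sigma=1.0, size=500)   # "true" concentrations (known here)
dl = np.quantile(true, 0.4)                           # detection limit giving ~40% censoring
below = true < dl

def summarize(values, label):
    print(f"{label:>22}: mean={values.mean():.3f}, median={np.median(values):.3f}")

summarize(true, "true values")                                     # benchmark
summarize(np.where(below, 0.5 * dl, true), "substitute DL/2")      # half the detection limit
summarize(np.where(below, rng.uniform(0, dl, true.size), true),
          "substitute U(0, DL)")                                   # random value below the limit
```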
Anisotropic power-law inflation for a conformal-violating Maxwell model
NASA Astrophysics Data System (ADS)
Do, Tuan Q.; Kao, W. F.
2018-05-01
A set of power-law solutions of a conformal-violating Maxwell model with a non-standard scalar-vector coupling will be shown in this paper. In particular, we are interested in a coupling term of the form X^{2n} F^{μ ν }F_{μ ν } with X denoting the kinetic term of the scalar field. Stability analysis indicates that the new set of anisotropic power-law solutions is unstable during the inflationary phase. The result is consistent with the cosmic no-hair conjecture. We show, however, that a set of stable slowly expanding solutions does exist for a small range of parameters λ and n. Hence a small anisotropy can survive during the slowly expanding phase.
On holographic entanglement entropy with second order excitations
NASA Astrophysics Data System (ADS)
He, Song; Sun, Jia-Rui; Zhang, Hai-Qing
2018-03-01
We study the low-energy corrections to the holographic entanglement entropy (HEE) in the boundary CFT by perturbing the bulk geometry up to second order excitations. Focusing on the case that the boundary subsystem is a strip, we show that the area of the bulk minimal surface can be expanded in terms of the conserved charges, such as mass, angular momentum and electric charge of the AdS black brane. We also calculate the variation of the energy in the subsystem and verify the validity of the first law-like relation of thermodynamics at second order. Moreover, the HEE is naturally bounded at second order perturbations if the cosmic censorship conjecture for the dual black hole still holds.
Thermodynamic properties of Kehagias-Sfetsos black hole and KS/CFT correspondence
NASA Astrophysics Data System (ADS)
Pradhan, Parthapratim
2017-11-01
We speculate on various thermodynamic features of the inner horizon ({\\mathcal H}-) and outer horizon ({\\mathcal H}+) of the Kehagias-Sfetsos (KS) black hole (BH) in the background of Hořava-Lifshitz gravity. In particular, we compute the area product, area sum, area minus and area division of the BH horizons. We find that, apart from the area product, which is a universal quantity (PRADHAN P., Phys. Lett. B, 747 (2015) 64), these combinations do not exhibit universal behavior. Based on these relations, we derive the area bound of all horizons. From the area bound we derive the entropy bound and irreducible mass bound for all the horizons ({\\mathcal H}+/-). We also observe that the first law of BH thermodynamics and the Smarr-Gibbs-Duhem relations do not hold for this BH. The underlying reason for this failure is the scale invariance of the coupling constant. Moreover, we compute the Cosmic-Censorship-Inequality for this BH, which gives the lower bound for the total mass of the spacetime and is supported by the cosmic censorship conjecture. Finally, we discuss the KS/CFT correspondence via a thermodynamic procedure.
Estimating and Testing Mediation Effects with Censored Data
ERIC Educational Resources Information Center
Wang, Lijuan; Zhang, Zhiyong
2011-01-01
This study investigated influences of censored data on mediation analysis. Mediation effect estimates can be biased and inefficient with censoring on any one of the input, mediation, and output variables. A Bayesian Tobit approach was introduced to estimate and test mediation effects with censored data. Simulation results showed that the Bayesian…
Griffiths, Robert I; Gleeson, Michelle L; Danese, Mark D; O'Hagan, Anthony
2012-01-01
To assess the accuracy and precision of inverse probability weighted (IPW) least squares regression analysis for censored cost data. By using Surveillance, Epidemiology, and End Results-Medicare, we identified 1500 breast cancer patients who died and had complete cost information within the database. Patients were followed for up to 48 months (partitions) after diagnosis, and their actual total cost was calculated in each partition. We then simulated patterns of administrative and dropout censoring and also added censoring to patients receiving chemotherapy to simulate comparing a newer to older intervention. For each censoring simulation, we performed 1000 IPW regression analyses (bootstrap, sampling with replacement), calculated the average value of each coefficient in each partition, and summed the coefficients for each regression parameter to obtain the cumulative values from 1 to 48 months. The cumulative, 48-month, average cost was $67,796 (95% confidence interval [CI] $58,454-$78,291) with no censoring, $66,313 (95% CI $54,975-$80,074) with administrative censoring, and $66,765 (95% CI $54,510-$81,843) with administrative plus dropout censoring. In multivariate analysis, chemotherapy was associated with increased cost of $25,325 (95% CI $17,549-$32,827) compared with $28,937 (95% CI $20,510-$37,088) with administrative censoring and $29,593 ($20,564-$39,399) with administrative plus dropout censoring. Adding censoring to the chemotherapy group resulted in less accurate IPW estimates. This was ameliorated, however, by applying IPW within treatment groups. IPW is a consistent estimator of population mean costs if the weight is correctly specified. If the censoring distribution depends on some covariates, a model that accommodates this dependency must be correctly specified in IPW to obtain accurate estimates. Copyright © 2012 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
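A minimal sketch of the core inverse-probability-weighting idea for censored costs, assuming independent censoring, a single 48-month cost horizon, and a hand-rolled Kaplan-Meier estimate of the censoring distribution; the partitioned regression and bootstrap steps of the study are not reproduced, and all rates and costs are illustrative.

```python
import numpy as np

def km_survival(times, events):
    """Kaplan-Meier survivor function; returns a step-function evaluator S(t)."""
    times = np.asarray(times, float)
    events = np.asarray(events, int)
    steps, s = [], 1.0
    for u in np.unique(times[events == 1]):
        at_risk = np.sum(times >= u)
        d = np.sum((times == u) & (events == 1))
        s *= 1.0 - d / at_risk
        steps.append((u, s))
    def S(t):
        out = 1.0
        for u, sv in steps:
            if u <= t:
                out = sv
            else:
                break
        return out
    return S

def ipw_mean_cost(follow_up, cost_observed, total_cost):
    """Inverse-probability-weighted mean cost (normalized weights).

    `cost_observed` is 1 when the full restricted cost is known. Each such
    subject is weighted by 1 / K(T-), where K is the Kaplan-Meier survivor
    function of the censoring time.
    """
    K = km_survival(follow_up, 1 - cost_observed)      # censoring distribution
    k_vals = np.array([K(t - 1e-9) for t in follow_up])
    w = cost_observed / np.maximum(k_vals, 1e-12)
    return float(np.sum(w * total_cost) / np.sum(w))

# Illustrative data: costs accrue at a constant rate up to death, restricted to 48 months.
rng = np.random.default_rng(3)
n, horizon = 2000, 48.0
accrual = np.minimum(rng.exponential(24.0, n), horizon)   # months of cost accrual
censor = rng.uniform(12.0, 60.0, n)                       # dropout / administrative censoring
follow_up = np.minimum(accrual, censor)
cost_observed = (accrual <= censor).astype(int)           # full restricted cost known?
full_cost = 1500.0 * accrual

print("true mean cost:", round(float(full_cost.mean()), 0))
print("IPW estimate  :", round(ipw_mean_cost(follow_up, cost_observed, full_cost * cost_observed), 0))
```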
Impact of censoring on learning Bayesian networks in survival modelling.
Stajduhar, Ivan; Dalbelo-Basić, Bojana; Bogunović, Nikola
2009-11-01
Bayesian networks are commonly used for presenting uncertainty and covariate interactions in an easily interpretable way. Because of their efficient inference and ability to represent causal relationships, they are an excellent choice for medical decision support systems in diagnosis, treatment, and prognosis. Although good procedures for learning Bayesian networks from data have been defined, their performance in learning from censored survival data has not been widely studied. In this paper, we explore how to use these procedures to learn about possible interactions between prognostic factors and their influence on the variate of interest. We study how censoring affects the probability of learning correct Bayesian network structures. Additionally, we analyse the potential usefulness of the learnt models for predicting the time-independent probability of an event of interest. We analysed the influence of censoring with a simulation on synthetic data sampled from randomly generated Bayesian networks. We used two well-known methods for learning Bayesian networks from data: a constraint-based method and a score-based method. We compared the performance of each method under different levels of censoring to those of the naive Bayes classifier and the proportional hazards model. We did additional experiments on several datasets from real-world medical domains. The machine-learning methods treated censored cases in the data as event-free. We report and compare results for several commonly used model evaluation metrics. On average, the proportional hazards method outperformed other methods in most censoring setups. As part of the simulation study, we also analysed structural similarities of the learnt networks. Heavy censoring, as opposed to no censoring, produces up to a 5% surplus and up to 10% missing total arcs. It also produces up to 50% missing arcs that should originally be connected to the variate of interest. Presented methods for learning Bayesian networks from data can be used to learn from censored survival data in the presence of light censoring (up to 20%) by treating censored cases as event-free. Given intermediate or heavy censoring, the learnt models become tuned to the majority class and would thus require a different approach.
Howe, Chanelle J.; Cole, Stephen R.; Chmiel, Joan S.; Muñoz, Alvaro
2011-01-01
In time-to-event analyses, artificial censoring with correction for induced selection bias using inverse probability-of-censoring weights can be used to 1) examine the natural history of a disease after effective interventions are widely available, 2) correct bias due to noncompliance with fixed or dynamic treatment regimens, and 3) estimate survival in the presence of competing risks. Artificial censoring entails censoring participants when they meet a predefined study criterion, such as exposure to an intervention, failure to comply, or the occurrence of a competing outcome. Inverse probability-of-censoring weights use measured common predictors of the artificial censoring mechanism and the outcome of interest to determine what the survival experience of the artificially censored participants would be had they never been exposed to the intervention, complied with their treatment regimen, or not developed the competing outcome. Even if all common predictors are appropriately measured and taken into account, in the context of small sample size and strong selection bias, inverse probability-of-censoring weights could fail because of violations in assumptions necessary to correct selection bias. The authors used an example from the Multicenter AIDS Cohort Study, 1984–2008, regarding estimation of long-term acquired immunodeficiency syndrome-free survival to demonstrate the impact of violations in necessary assumptions. Approaches to improve correction methods are discussed. PMID:21289029
Antweiler, Ronald C.
2015-01-01
The main classes of statistical treatments that have been used to determine if two groups of censored environmental data arise from the same distribution are substitution methods, maximum likelihood (MLE) techniques, and nonparametric methods. These treatments along with using all instrument-generated data (IN), even those less than the detection limit, were evaluated by examining 550 data sets in which the true values of the censored data were known, and therefore “true” probabilities could be calculated and used as a yardstick for comparison. It was found that technique “quality” was strongly dependent on the degree of censoring present in the groups. For low degrees of censoring (<25% in each group), the Generalized Wilcoxon (GW) technique and substitution of √2/2 times the detection limit gave overall the best results. For moderate degrees of censoring, MLE worked best, but only if the distribution could be estimated to be normal or log-normal prior to its application; otherwise, GW was a suitable alternative. For higher degrees of censoring (each group >40% censoring), no technique provided reliable estimates of the true probability. Group size did not appear to influence the quality of the result, and no technique appeared to become better or worse than other techniques relative to group size. Finally, IN appeared to do very well relative to the other techniques regardless of censoring or group size.
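The Generalized Wilcoxon procedure referred to above is commonly formulated with Gehan scoring; the sketch below is one such formulation for left-censored data, with a permutation p-value used in place of the asymptotic variance. The detection limit, distributions and group sizes are illustrative assumptions.

```python
import numpy as np

def gehan_statistic(v1, c1, v2, c2):
    """Gehan scores for left-censored data: +1 when a first-group value is
    unambiguously larger than a second-group value, -1 when unambiguously
    smaller, 0 when censoring makes the comparison ambiguous. Censored values
    are stored at their detection limit with the censoring flag set."""
    x, cx = v1[:, None], c1[:, None]
    y, cy = v2[None, :], c2[None, :]
    greater = (~cx & ~cy & (x > y)) | (~cx & cy & (x >= y))
    less = (~cx & ~cy & (x < y)) | (cx & ~cy & (y >= x))
    return int(greater.sum()) - int(less.sum())

def gehan_permutation_test(v1, c1, v2, c2, n_perm=2000, seed=0):
    """Two-sided permutation p-value for the Gehan (generalized Wilcoxon) statistic."""
    rng = np.random.default_rng(seed)
    obs = gehan_statistic(v1, c1, v2, c2)
    v, c, n1 = np.concatenate([v1, v2]), np.concatenate([c1, c2]), len(v1)
    hits = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(v))
        stat = gehan_statistic(v[idx[:n1]], c[idx[:n1]], v[idx[n1:]], c[idx[n1:]])
        hits += abs(stat) >= abs(obs)
    return obs, (hits + 1) / (n_perm + 1)

# Illustrative example: two lognormal groups censored at a common detection limit.
rng = np.random.default_rng(4)
dl = 1.0
g1, g2 = rng.lognormal(0.0, 1.0, 60), rng.lognormal(0.5, 1.0, 60)
v1, c1 = np.maximum(g1, dl), g1 < dl      # censored values reported at the DL
v2, c2 = np.maximum(g2, dl), g2 < dl
stat, p = gehan_permutation_test(v1, c1, v2, c2)
print(f"Gehan statistic = {stat}, permutation p-value = {p:.3f}")
```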
Small values in big data: The continuing need for appropriate metadata
Stow, Craig A.; Webster, Katherine E.; Wagner, Tyler; Lottig, Noah R.; Soranno, Patricia A.; Cha, YoonKyung
2018-01-01
Compiling data from disparate sources to address pressing ecological issues is increasingly common. Many ecological datasets contain left-censored data – observations below an analytical detection limit. Studies from single and typically small datasets show that common approaches for handling censored data — e.g., deletion or substituting fixed values — result in systematic biases. However, no studies have explored the degree to which the documentation and presence of censored data influence outcomes from large, multi-sourced datasets. We describe left-censored data in a lake water quality database assembled from 74 sources and illustrate the challenges of dealing with small values in big data, including detection limits that are absent, range widely, and show trends over time. We show that substitutions of censored data can also bias analyses using ‘big data’ datasets, that censored data can be effectively handled with modern quantitative approaches, but that such approaches rely on accurate metadata that describe treatment of censored data from each source.
Simulation of parametric model towards the fixed covariate of right censored lung cancer data
NASA Astrophysics Data System (ADS)
Afiqah Muhamad Jamil, Siti; Asrul Affendi Abdullah, M.; Kek, Sie Long; Ridwan Olaniran, Oyebayo; Enera Amran, Syahila
2017-09-01
In this study, a simulation procedure was applied to assess the effect of a fixed covariate on right-censored data by using a parametric survival model. The scale and shape parameters were varied to differentiate the analysis of the parametric regression survival model. Statistically, the biases, mean biases and the coverage probability were used in this analysis. Different sample sizes (50, 100, 150 and 200) were employed to distinguish the impact of the parametric regression model on right-censored data. The R statistical software was used to develop the simulation code for right-censored data. In addition, the final model of the right-censored simulation was compared with right-censored lung cancer data from Malaysia. It was found that different values of the shape and scale parameters with different sample sizes help to improve the simulation strategy for right-censored data, and that the Weibull regression survival model provides a suitable fit for the survival data of lung cancer patients in Malaysia.
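A minimal sketch of this kind of simulation-and-fit exercise for a single replicate, assuming a Weibull model with one binary fixed covariate and a hand-coded censored likelihood rather than the authors' R code; all parameter values, the sample size and the censoring distribution are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, t, event, x):
    """Negative log-likelihood of a Weibull regression with right censoring.

    shape k = exp(params[0]); scale lambda_i = exp(b0 + b1 * x_i).
    Events contribute log f(t); censored observations contribute log S(t).
    """
    log_k, b0, b1 = params
    k = np.exp(log_k)
    lam = np.exp(b0 + b1 * x)
    z = (t / lam) ** k
    log_f = np.log(k) - np.log(lam) + (k - 1) * (np.log(t) - np.log(lam)) - z
    log_S = -z
    return -np.sum(np.where(event == 1, log_f, log_S))

# Simulate right-censored Weibull data with one fixed covariate, then refit.
rng = np.random.default_rng(5)
n, k_true, b0_true, b1_true = 200, 1.5, 2.0, -0.7
x = rng.binomial(1, 0.5, n).astype(float)
scale = np.exp(b0_true + b1_true * x)
t_event = scale * rng.weibull(k_true, n)
t_cens = rng.uniform(0, 15, n)
t = np.minimum(t_event, t_cens)
event = (t_event <= t_cens).astype(int)

fit = minimize(neg_loglik, x0=np.array([0.0, 1.0, 0.0]), args=(t, event, x),
               method="Nelder-Mead")
log_k_hat, b0_hat, b1_hat = fit.x
print(f"shape ~ {np.exp(log_k_hat):.2f} (true {k_true}), "
      f"b0 ~ {b0_hat:.2f} (true {b0_true}), b1 ~ {b1_hat:.2f} (true {b1_true})")
```

Repeating this loop over many replicates and sample sizes, and summarizing bias and coverage of the estimates, reproduces the general structure of the study described above.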
Geodesic-light-cone coordinates and the Bianchi I spacetime
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fleury, Pierre; Nugier, Fabien; Fanizza, Giuseppe, E-mail: pierre.fleury@uct.ac.za, E-mail: fnugier@ntu.edu.tw, E-mail: giuseppe.fanizza@ba.infn.it
The geodesic-light-cone (GLC) coordinates are a useful tool to analyse light propagation and observations in cosmological models. In this article, we propose a detailed, pedagogical, and rigorous introduction to this coordinate system, explore its gauge degrees of freedom, and emphasize its interest when geometric optics is at stake. We then apply the GLC formalism to the homogeneous and anisotropic Bianchi I cosmology. More than a simple illustration, this application (i) allows us to show that the Weinberg conjecture according to which gravitational lensing does not affect the proper area of constant-redshift surfaces is significantly violated in a globally anisotropic universe; and (ii) offers a glimpse into new ways to constrain cosmic isotropy from the Hubble diagram.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goswami, Rituparno; Joshi, Pankaj S.; Vaz, Cenalo
We construct a class of spherically symmetric collapse models in which a naked singularity may develop as the end state of collapse. The matter distribution considered has negative radial and tangential pressures, but the weak energy condition is obeyed throughout. The singularity forms at the center of the collapsing cloud and continues to be visible for a finite time. The duration of visibility depends on the nature of energy distribution. Hence the causal structure of the resulting singularity depends on the nature of the mass function chosen for the cloud. We present a general model in which the naked singularity formed is timelike, neither pointlike nor null. Our work represents a step toward clarifying the necessary conditions for the validity of the Cosmic Censorship Conjecture.
Linear Regression with a Randomly Censored Covariate: Application to an Alzheimer's Study.
Atem, Folefac D; Qian, Jing; Maye, Jacqueline E; Johnson, Keith A; Betensky, Rebecca A
2017-01-01
The association between maternal age of onset of dementia and amyloid deposition (measured by in vivo positron emission tomography (PET) imaging) in cognitively normal older offspring is of interest. In a regression model for amyloid, special methods are required due to the random right censoring of the covariate of maternal age of onset of dementia. Prior literature has proposed methods to address the problem of censoring due to assay limit of detection, but not random censoring. We propose imputation methods and a survival regression method that do not require parametric assumptions about the distribution of the censored covariate. Existing imputation methods address missing covariates, but not right censored covariates. In simulation studies, we compare these methods to the simple, but inefficient complete case analysis, and to thresholding approaches. We apply the methods to the Alzheimer's study.
Wang, Peijie; Zhao, Hui; Sun, Jianguo
2016-12-01
Interval-censored failure time data occur in many fields such as demography, economics, medical research, and reliability and many inference procedures on them have been developed (Sun, 2006; Chen, Sun, and Peace, 2012). However, most of the existing approaches assume that the mechanism that yields interval censoring is independent of the failure time of interest and it is clear that this may not be true in practice (Zhang et al., 2007; Ma, Hu, and Sun, 2015). In this article, we consider regression analysis of case K interval-censored failure time data when the censoring mechanism may be related to the failure time of interest. For the problem, an estimated sieve maximum-likelihood approach is proposed for the data arising from the proportional hazards frailty model and for estimation, a two-step procedure is presented. In addition, the asymptotic properties of the proposed estimators of regression parameters are established and an extensive simulation study suggests that the method works well. Finally, we apply the method to a set of real interval-censored data that motivated this study. © 2016, The International Biometric Society.
Sun, Xing; Li, Xiaoyun; Chen, Cong; Song, Yang
2013-01-01
The frequent occurrence of interval-censored time-to-event data in randomized clinical trials (e.g., progression-free survival [PFS] in oncology) challenges statistical researchers in the pharmaceutical industry in various ways. These challenges exist in both trial design and data analysis. Conventional statistical methods treating intervals as fixed points, which are generally practiced by the pharmaceutical industry, sometimes yield inferior or even flawed analysis results in extreme cases for interval-censored data. In this article, we examine the limitation of these standard methods under typical clinical trial settings and further review and compare several existing nonparametric likelihood-based methods for interval-censored data, methods that are more sophisticated but robust. Trial design issues involved with interval-censored data comprise another topic to be explored in this article. Unlike right-censored survival data, expected sample size or power for a trial with interval-censored data relies heavily on the parametric distribution of the baseline survival function as well as the frequency of assessments. There can be substantial power loss in trials with interval-censored data if the assessments are very infrequent. Such an additional dependency controverts many fundamental assumptions and principles in conventional survival trial designs, especially the group sequential design (e.g., the concept of information fraction). In this article, we discuss these fundamental changes and available tools to work around their impacts. Although progression-free survival is often used as a discussion point in the article, the general conclusions are equally applicable to other interval-censored time-to-event endpoints.
NASA Astrophysics Data System (ADS)
Davias, M. E.; Harris, T. H. S.
2017-12-01
80 years after aerial photography revealed thousands of aligned oval depressions on the USA's Atlantic Coastal Plain, the geomorphology of the "Carolina bays" remains enigmatic. Geologists and astronomers alike hold that invoking a cosmic impact for their genesis is indefensible. Rather, the bays are commonly attributed to gradualistic fluvial, marine and/or aeolian processes operating during the Pleistocene era. The major axis orientations of Carolina bays are noted for varying statistically by latitude, suggesting that, should there be any merit to a cosmic hypothesis, a highly accurate triangulation network and suborbital analysis would yield a locus and allow for identification of a putative impact site. Digital elevation maps using LiDAR technology offer the precision necessary to measure their exquisitely-carved circumferential rims and orientations reliably. To support a comprehensive geospatial survey of Carolina bay landforms (Survey) we generated about a million km2 of false-color hsv-shaded bare-earth topographic maps as KML-JPEG tile sets for visualization on virtual globes. Considering the evidence contained in the Survey, we maintain that interdisciplinary research into a possible cosmic origin should be encouraged. Consensus opinion does hold a cosmic impact accountable for an enigmatic Pleistocene event - the Australasian tektite strewn field - despite the failure of a 60-year search to locate the causal astroblem. Ironically, a cosmic link to the Carolina bays is considered soundly falsified by the identical lack of a causal impact structure. Our conjecture suggests both these events are coeval with a cosmic impact into the Great Lakes area during the Mid-Pleistocene Transition, at 786 ± 5 ka. All data and imagery produced for the Survey are available on the Internet to support independent research. A table of metrics for 50,000 bays examined for the Survey is available from an on-line Google Fusion Table: https://goo.gl/XTHKC4 . Each bay is also geospatially referenceable through a map containing clickable placemarks that provide information windows displaying that bay's measurements as well as further links that allow visualization of the associated LiDAR imagery and the bay's planform measurement overlay within the Google Earth virtual globe: https://goo.gl/EHR4Lf .
Multiple Imputation of a Randomly Censored Covariate Improves Logistic Regression Analysis.
Atem, Folefac D; Qian, Jing; Maye, Jacqueline E; Johnson, Keith A; Betensky, Rebecca A
2016-01-01
Randomly censored covariates arise frequently in epidemiologic studies. The most commonly used methods, including complete case and single imputation or substitution, suffer from inefficiency and bias. They make strong parametric assumptions or they consider limit of detection censoring only. We employ multiple imputation, in conjunction with semi-parametric modeling of the censored covariate, to overcome these shortcomings and to facilitate robust estimation. We develop a multiple imputation approach for randomly censored covariates within the framework of a logistic regression model. We use the non-parametric estimate of the covariate distribution or the semiparametric Cox model estimate in the presence of additional covariates in the model. We evaluate this procedure in simulations, and compare its operating characteristics to those from the complete case analysis and a survival regression approach. We apply the procedures to an Alzheimer's study of the association between amyloid positivity and maternal age of onset of dementia. Multiple imputation achieves lower standard errors and higher power than the complete case approach under heavy and moderate censoring and is comparable under light censoring. The survival regression approach achieves the highest power among all procedures, but does not produce interpretable estimates of association. Multiple imputation offers a favorable alternative to complete case analysis and ad hoc substitution methods in the presence of randomly censored covariates within the framework of logistic regression.
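A minimal sketch of multiple imputation for a right-censored covariate in a logistic regression, assuming a crude empirical draw from the observed values beyond the censoring point in place of the semiparametric (Kaplan-Meier or Cox-based) draw described above, followed by Rubin's rules. This is an improper-MI simplification, and all simulated quantities (distributions, effect sizes, censoring pattern) are illustrative.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n, n_imp = 500, 20

# Simulate a covariate (e.g. an age of onset) subject to random right censoring.
x_true = rng.gamma(9.0, 8.0, n)                      # true covariate values
c = rng.uniform(40, 120, n)                          # random censoring points
x_obs = np.minimum(x_true, c)
censored = x_true > c
logit_p = -4.0 + 0.05 * x_true
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))  # binary outcome

def fit_logit(x, y):
    res = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
    return res.params[1], res.bse[1] ** 2            # slope and its variance

estimates, variances = [], []
uncens_values = x_obs[~censored]
for _ in range(n_imp):
    x_imp = x_obs.copy()
    for i in np.where(censored)[0]:
        tail = uncens_values[uncens_values > x_obs[i]]
        # Crude stand-in for a distribution-based draw: sample from observed
        # uncensored values beyond the censoring point (keep the censoring
        # value itself if no larger observed value exists).
        if tail.size:
            x_imp[i] = rng.choice(tail)
    b, v = fit_logit(x_imp, y)
    estimates.append(b)
    variances.append(v)

# Rubin's rules: pooled estimate, within- and between-imputation variance.
q = np.mean(estimates)
u_bar = np.mean(variances)
b_var = np.var(estimates, ddof=1)
total_var = u_bar + (1 + 1 / n_imp) * b_var
print(f"pooled slope = {q:.3f} (true 0.05), pooled SE = {np.sqrt(total_var):.3f}")
```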
Quantum Backreaction on Three-Dimensional Black Holes and Naked Singularities.
Casals, Marc; Fabbri, Alessandro; Martínez, Cristián; Zanelli, Jorge
2017-03-31
We analytically investigate backreaction by a quantum scalar field on two rotating Bañados-Teitelboim-Zanelli (BTZ) geometries: that of a black hole and that of a naked singularity. In the former case, we explore the quantum effects on various regions of relevance for a rotating black hole space-time. We find that the quantum effects lead to a growth of both the event horizon and the radius of the ergosphere, and to a reduction of the angular velocity, compared to the unperturbed values. Furthermore, they give rise to the formation of a curvature singularity at the Cauchy horizon and show no evidence of the appearance of a superradiant instability. In the case of a naked singularity, we find that quantum effects lead to the formation of a horizon that shields it, thus supporting evidence for the rôle of quantum mechanics as a cosmic censor in nature.
A random-censoring Poisson model for underreported data.
de Oliveira, Guilherme Lopes; Loschi, Rosangela Helena; Assunção, Renato Martins
2017-12-30
A major challenge when monitoring risks in socially deprived areas of underdeveloped countries is that economic, epidemiological, and social data are typically underreported. Thus, statistical models that do not take the data quality into account will produce biased estimates. To deal with this problem, counts in suspected regions are usually approached as censored information. The censored Poisson model can be considered, but all censored regions must be precisely known a priori, which is not a reasonable assumption in most practical situations. We introduce the random-censoring Poisson model (RCPM), which accounts for the uncertainty about both the count and the data reporting processes. Consequently, for each region, we will be able to estimate the relative risk for the event of interest as well as the censoring probability. To facilitate the posterior sampling process, we propose a Markov chain Monte Carlo scheme based on the data augmentation technique. We run a simulation study comparing the proposed RCPM with two competing models under different scenarios. The RCPM and the censored Poisson model are applied to account for potential underreporting of early neonatal mortality counts in regions of Minas Gerais State, Brazil, where data quality is known to be poor. Copyright © 2017 John Wiley & Sons, Ltd.
Censored quantile regression with recursive partitioning-based weights
Wey, Andrew; Wang, Lan; Rudser, Kyle
2014-01-01
Censored quantile regression provides a useful alternative to the Cox proportional hazards model for analyzing survival data. It directly models the conditional quantile of the survival time and hence is easy to interpret. Moreover, it relaxes the proportionality constraint on the hazard function associated with the popular Cox model and is natural for modeling heterogeneity of the data. Recently, Wang and Wang (2009. Locally weighted censored quantile regression. Journal of the American Statistical Association 103, 1117–1128) proposed a locally weighted censored quantile regression approach that allows for covariate-dependent censoring and is less restrictive than other censored quantile regression methods. However, their kernel smoothing-based weighting scheme requires all covariates to be continuous and encounters practical difficulty with even a moderate number of covariates. We propose a new weighting approach that uses recursive partitioning, e.g. survival trees, that offers greater flexibility in handling covariate-dependent censoring in moderately high dimensions and can incorporate both continuous and discrete covariates. We prove that this new weighting scheme leads to consistent estimation of the quantile regression coefficients and demonstrate its effectiveness via Monte Carlo simulations. We also illustrate the new method using a widely recognized data set from a clinical trial on primary biliary cirrhosis. PMID:23975800
Censored Hurdle Negative Binomial Regression (Case Study: Neonatorum Tetanus Case in Indonesia)
NASA Astrophysics Data System (ADS)
Yuli Rusdiana, Riza; Zain, Ismaini; Wulan Purnami, Santi
2017-06-01
Hurdle negative binomial regression is a method that can be used for a discrete dependent variable with excess zeros and under- or overdispersion. It uses a two-part approach. The first part, the zero hurdle model, estimates the zero elements of the dependent variable; the second part, a truncated negative binomial model, estimates the non-zero elements (positive integers). The discrete dependent variable in such cases is censored for some values. The type of censoring studied in this research is right censoring. This study aims to obtain the parameter estimators of hurdle negative binomial regression for a right-censored dependent variable. Parameters are estimated by maximum likelihood estimation (MLE). The censored hurdle negative binomial regression model for a right-censored dependent variable is applied to the number of neonatorum tetanus cases in Indonesia. The data are count data containing zero values for some observations and various positive values for others. This study also aims to obtain the parameter estimators and test statistics of the censored hurdle negative binomial model. Based on the regression results, the factors that influence neonatorum tetanus cases in Indonesia are the percentage of baby health care coverage and neonatal visits.
Some insight on censored cost estimators.
Zhao, H; Cheng, Y; Bang, H
2011-08-30
Censored survival data analysis has been studied for many years. Yet, the analysis of censored mark variables, such as medical cost, quality-adjusted lifetime, and repeated events, faces a unique challenge that makes standard survival analysis techniques invalid. Because of the 'informative' censorship imbedded in censored mark variables, the use of the Kaplan-Meier (Journal of the American Statistical Association 1958; 53:457-481) estimator, as an example, will produce biased estimates. Innovative estimators have been developed in the past decade in order to handle this issue. Even though consistent estimators have been proposed, the formulations and interpretations of some estimators are less intuitive to practitioners. On the other hand, more intuitive estimators have been proposed, but their mathematical properties have not been established. In this paper, we prove the analytic identity between some estimators (a statistically motivated estimator and an intuitive estimator) for censored cost data. Efron (1967) made a similar investigation for censored survival data (between the Kaplan-Meier estimator and the redistribute-to-the-right algorithm). Therefore, we view our study as an extension of Efron's work to informatively censored data so that our findings could be applied to other marked variables. Copyright © 2011 John Wiley & Sons, Ltd.
Hagar, Yolanda C; Harvey, Danielle J; Beckett, Laurel A
2016-08-30
We develop a multivariate cure survival model to estimate lifetime patterns of colorectal cancer screening. Screening data cover long periods of time, with sparse observations for each person. Some events may occur before the study begins or after the study ends, so the data are both left-censored and right-censored, and some individuals are never screened (the 'cured' population). We propose a multivariate parametric cure model that can be used with left-censored and right-censored data. Our model allows for the estimation of the time to screening as well as the average number of times individuals will be screened. We calculate likelihood functions based on the observations for each subject using a distribution that accounts for within-subject correlation and estimate parameters using Markov chain Monte Carlo methods. We apply our methods to the estimation of lifetime colorectal cancer screening behavior in the SEER-Medicare data set. Copyright © 2016 John Wiley & Sons, Ltd.
Multiverse effects on the CMB angular correlation function in the framework of NCG
NASA Astrophysics Data System (ADS)
Arabzadeh, Sahar; Kaviani, Kamran
Following many theories that predict the existence of the multiverse, and by the conjecture that our space-time may have a generalized geometrical structure at the fundamental level, we are interested in using a non-commutative geometry (NCG) formalism to study a suggested two-layer space that contains our 4-dimensional (4D) universe and a re-derived photon propagator. It can be shown that the photon propagator and a cosmic microwave background (CMB) angular correlation function are comparable, and if there exists such a multiverse system, the distance between the two layers can be estimated to be within the observable universe’s radius. Furthermore, this study revealed that our results are not limited to the CMB but can be applied to many other types of radiation, such as X-rays.
Muggli, Monique E; Hurt, Richard D
2004-08-01
A review of internal tobacco company documents showed that the tobacco company Philip Morris and the insurance company CIGNA collaborated to censor accurate information on the harm of smoking and on environmental tobacco smoke exposure from CIGNA health newsletters sent to employees of Philip Morris and its affiliates. From 1996 to 1998, 5 of the 8 CIGNA newsletters discussed in the internal tobacco documents were censored. We recommend that accrediting bodies mandate that health plans not censor employee-directed health information at the request of employers.
SEMIPARAMETRIC EFFICIENT ESTIMATION FOR SHARED-FRAILTY MODELS WITH DOUBLY-CENSORED CLUSTERED DATA
Wang, Jane-Ling
2018-01-01
In this paper, we investigate frailty models for clustered survival data that are subject to both left- and right-censoring, termed “doubly-censored data”. This model extends the current survival literature by broadening the application of frailty models from right-censoring to a more complicated situation with additional left censoring. Our approach is motivated by a recent Hepatitis B study where the sample consists of families. We adopt a likelihood approach that aims at the nonparametric maximum likelihood estimators (NPMLE). A new algorithm is proposed, which not only works well for clustered data but also improves over existing algorithms for independent and doubly-censored data, a special case in which the frailty variable is a constant equal to one. This special case is well known to be a computational challenge due to the left-censoring feature of the data. The new algorithm not only resolves this challenge but also accommodates the additional frailty variable effectively. Asymptotic properties of the NPMLE are established along with the semi-parametric efficiency of the NPMLE for the finite-dimensional parameters. The consistency of bootstrap estimators for the standard errors of the NPMLE is also discussed. We conducted some simulations to illustrate the numerical performance and robustness of the proposed algorithm, which is also applied to the Hepatitis B data. PMID:29527068
An identifiable model for informative censoring
Link, W.A.; Wegman, E.J.; Gantz, D.T.; Miller, J.J.
1988-01-01
The usual model for censored survival analysis requires the assumption that censoring of observations arises only due to causes unrelated to the lifetime under consideration. It is easy to envision situations in which this assumption is unwarranted, and in which use of the Kaplan-Meier estimator and associated techniques will lead to unreliable analyses.
Hsu, Chiu-Hsieh; Li, Yisheng; Long, Qi; Zhao, Qiuhong; Lance, Peter
2011-01-01
In colorectal polyp prevention trials, estimation of the rate of recurrence of adenomas at the end of the trial may be complicated by dependent censoring, that is, time to follow-up colonoscopy and dropout may be dependent on time to recurrence. Assuming that the auxiliary variables capture the dependence between recurrence and censoring times, we propose to fit two working models with the auxiliary variables as covariates to define risk groups and then extend an existing weighted logistic regression method for independent censoring to each risk group to accommodate potential dependent censoring. In a simulation study, we show that the proposed method results in both a gain in efficiency and reduction in bias for estimating the recurrence rate. We illustrate the methodology by analyzing a recurrent adenoma dataset from a colorectal polyp prevention trial. PMID:22065985
Testing independence of bivariate interval-censored data using modified Kendall's tau statistic.
Kim, Yuneung; Lim, Johan; Park, DoHwan
2015-11-01
In this paper, we study a nonparametric procedure to test independence of bivariate interval-censored data, for both current status data (case 1 interval-censored data) and case 2 interval-censored data. To do so, we propose a score-based modification of Kendall's tau statistic for bivariate interval-censored data. Our modification defines the Kendall's tau statistic using expected numbers of concordant and discordant pairs of data. The performance of the modified approach is illustrated by simulation studies and an application to the AIDS study. We compare our method to alternative approaches such as the two-stage estimation method by Sun et al. (Scandinavian Journal of Statistics, 2006) and the multiple imputation method by Betensky and Finkelstein (Statistics in Medicine, 1999b). © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Sieve estimation in a Markov illness-death process under dual censoring.
Boruvka, Audrey; Cook, Richard J
2016-04-01
Semiparametric methods are well established for the analysis of a progressive Markov illness-death process observed up to a noninformative right censoring time. However, often the intermediate and terminal events are censored in different ways, leading to a dual censoring scheme. In such settings, unbiased estimation of the cumulative transition intensity functions cannot be achieved without some degree of smoothing. To overcome this problem, we develop a sieve maximum likelihood approach for inference on the hazard ratio. A simulation study shows that the sieve estimator offers improved finite-sample performance over common imputation-based alternatives and is robust to some forms of dependent censoring. The proposed method is illustrated using data from cancer trials. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Topics in Gravitation and Cosmology
NASA Astrophysics Data System (ADS)
Bahrami Taghanaki, Sina
This thesis is focused on two topics in which relativistic gravitational fields play an important role, namely early Universe cosmology and black hole physics. The theory of cosmic inflation has emerged as the most successful theory of the very early Universe with concrete and verifiable predictions for the properties of anisotropies of the cosmic microwave background radiation and large scale structure. Coalescences of black hole binaries have recently been detected by the Laser Interferometer Gravitational Wave Observatory (LIGO), opening a new arena for observationally testing the dynamics of gravity. In part I of this thesis we explore some modifications to the standard theory of inflation. The main predictions of single field slow-roll inflation have been largely consistent with cosmological observations. However, there remain some aspects of the theory that are not presently well understood. Among these are the somewhat interrelated issues of the choice of initial state for perturbations and the potential imprints of pre-inflationary dynamics. It is well known that a key prediction of the standard theory of inflation, namely the Gaussianity of perturbations, is a consequence of choosing a natural vacuum initial state. In chapter 3, we study the generation and detectability of non-Gaussianities in inflationary scalar perturbations that originate from more general choices of initial state. After that, in chapter 4, we study a simple but predictive model of pre-inflationary dynamics in an attempt to test the robustness of inflationary predictions. We find that significant deviations from the standard predictions are unlikely to result from models in which the inflaton field decouples from the pre-inflationary degrees of freedom prior to freeze-out of the observable modes. In part II we turn to a study of an aspect of the thermodynamics of black holes, a subject which has led to important advances in our understanding of quantum gravity. For objects which collapse to form black holes, we examine a conjectured relationship between the objects' entropy, the collapse timescale, and the mass of the final black hole. This relationship is relevant for understanding the nature of generic quantum mechanical states of black hole interiors. In chapter 6 we construct a counter-example to a weak version of the conjectured relation.
The concordance index C and the Mann-Whitney parameter Pr(X>Y) with randomly censored data.
Koziol, James A; Jia, Zhenyu
2009-06-01
Harrell's c-index or concordance C has been widely used as a measure of separation of two survival distributions. In the absence of censored data, the c-index estimates the Mann-Whitney parameter Pr(X>Y), which has been repeatedly utilized in various statistical contexts. In the presence of randomly censored data, the c-index no longer estimates Pr(X>Y); rather, it estimates a parameter that involves the underlying censoring distributions. This is in contrast to Efron's maximum likelihood estimator of the Mann-Whitney parameter, which is recommended in the setting of random censorship.
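A minimal sketch of the distinction drawn here, using a two-sample concordance in which pairs with tied risk scores are skipped; the non-proportional-hazards Weibull distributions, the heavy early censoring and the sample size are illustrative assumptions chosen so that the drift is visible.

```python
import numpy as np

def concordance(time, event, risk):
    """Two-sample concordance over usable pairs: the pair member with the earlier
    observed time must have had an event and the risk scores must differ;
    the pair is concordant when the higher risk score fails earlier."""
    conc = usable = 0
    n = len(time)
    for i in range(n):
        for j in range(i + 1, n):
            if risk[i] == risk[j]:
                continue
            a, b = (i, j) if time[i] < time[j] else (j, i)
            if not event[a]:
                continue                      # pair not usable under censoring
            usable += 1
            conc += int(risk[a] > risk[b])
    return conc / usable

rng = np.random.default_rng(7)
n = 300
group = rng.binomial(1, 0.5, n)               # "risk score": group 1
# Non-proportional hazards: group 0 fails heavily at early times, group 1 later.
t = np.where(group == 0,
             2.0 * rng.weibull(0.8, n),
             1.5 * rng.weibull(2.5, n))
x0, y1 = t[group == 0], t[group == 1]
print(f"Mann-Whitney Pr(X > Y) = {np.mean(x0[:, None] > y1[None, :]):.3f}")

print("no censoring   : C =", round(concordance(t, np.ones(n, int), group), 3))
c_time = rng.uniform(0, 1.5, n)               # heavy early censoring
obs = np.minimum(t, c_time)
event = (t <= c_time).astype(int)
print("heavy censoring: C =", round(concordance(obs, event, group), 3))
```

With complete data this concordance reproduces the empirical Pr(X > Y); under heavy censoring it drifts toward a quantity that also depends on the censoring distribution, which is the point the abstract makes.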
Threshold regression to accommodate a censored covariate.
Qian, Jing; Chiou, Sy Han; Maye, Jacqueline E; Atem, Folefac; Johnson, Keith A; Betensky, Rebecca A
2018-06-22
In several common study designs, regression modeling is complicated by the presence of censored covariates. Examples of such covariates include maternal age of onset of dementia that may be right censored in an Alzheimer's amyloid imaging study of healthy subjects, metabolite measurements that are subject to limit of detection censoring in a case-control study of cardiovascular disease, and progressive biomarkers whose baseline values are of interest, but are measured post-baseline in longitudinal neuropsychological studies of Alzheimer's disease. We propose threshold regression approaches for linear regression models with a covariate that is subject to random censoring. Threshold regression methods allow for immediate testing of the significance of the effect of a censored covariate. In addition, they provide for unbiased estimation of the regression coefficient of the censored covariate. We derive the asymptotic properties of the resulting estimators under mild regularity conditions. Simulations demonstrate that the proposed estimators have good finite-sample performance, and often offer improved efficiency over existing methods. We also derive a principled method for selection of the threshold. We illustrate the approach in application to an Alzheimer's disease study that investigated brain amyloid levels in older individuals, as measured through positron emission tomography scans, as a function of maternal age of dementia onset, with adjustment for other covariates. We have developed an R package, censCov, for implementation of our method, available at CRAN. © 2018, The International Biometric Society.
Twenty five years long survival analysis of an individual shortleaf pine trees
Pradip Saud; Thomas B. Lynch; James M. Guldin
2016-01-01
A semiparametric Cox proportional hazards model is preferred when censored data and survival time information are available (Kleinbaum and Klein 1996; Alison 2010). Censored data are observations that have incomplete information related to the survival time or event time of interest. In repeated forest measurements, observations are usually either right censored or...
Evaluation of methods for managing censored results when calculating the geometric mean.
Mikkonen, Hannah G; Clarke, Bradley O; Dasika, Raghava; Wallis, Christian J; Reichman, Suzie M
2018-01-01
Currently, there are conflicting views on the best statistical methods for managing censored environmental data. The method commonly applied by environmental science researchers and professionals is to substitute half the limit of reporting for derivation of summary statistics. This approach has been criticised by some researchers, raising questions around the interpretation of historical scientific data. This study evaluated four complete soil datasets, at three levels of simulated censoring, to test the accuracy of a range of censored data management methods for calculation of the geometric mean. The methods assessed included removal of censored results, substitution of a fixed value (near zero, half the limit of reporting and the limit of reporting), substitution by nearest neighbour imputation, maximum likelihood estimation, regression on order statistics and Kaplan-Meier/survival analysis. This is the first time such a comprehensive range of censored data management methods have been applied to assess the accuracy of calculation of the geometric mean. The results of this study show that, for describing the geometric mean, the simple method of substitution of half the limit of reporting is comparable or more accurate than alternative censored data management methods, including nearest neighbour imputation methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
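A minimal sketch comparing three of the censored-data management rules named above for the geometric mean, against a simulated "complete" data set where the truth is known; the lognormal distribution and 30% censoring level are assumptions for illustration only.

```python
import numpy as np

def geometric_mean(x):
    return float(np.exp(np.mean(np.log(x))))

rng = np.random.default_rng(8)
true = rng.lognormal(1.0, 0.9, 400)                   # complete soil-like data set
lor = np.quantile(true, 0.3)                          # limit of reporting, ~30% censoring
censored = true < lor

print("true GM               :", round(geometric_mean(true), 3))
print("remove censored values:", round(geometric_mean(true[~censored]), 3))
print("substitute LOR/2      :", round(geometric_mean(np.where(censored, lor / 2, true)), 3))
print("substitute LOR        :", round(geometric_mean(np.where(censored, lor, true)), 3))
```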
NASA Astrophysics Data System (ADS)
Riantana, R.; Darsono, D.; Triyono, A.; Azimut, H. B.
2016-11-01
Calibration of the Android sensor was done by placing the device in a mounting beside the accelerograph TDL 303 QS, which served as the reference for comparison. The levelling of both devices was set to be the same, so that their states could be assumed identical. Vibrations were then applied so as to obtain the maximum amplitude values from both sensors, allowing the proportionality coefficient between them to be determined. The Peak Ground Acceleration (PGA) results on both devices are as follows: on the x axis (EW), the Android sensor gives a PGA of -2.4478145 gal versus -2.5504 gal on the TDL 303 QS; on the y axis (NS), the Android sensor gives a PGA of 3.0066964 gal versus 3.2073 gal on the TDL 303 QS; on the z axis (UD), the Android sensor gives a PGA of -14.0702377 gal versus -13.2927 gal on the TDL 303 QS. The correction values for the Android accelerometer sensor are ± 0.1 gal for the x axis (EW), ± 0.2 gal for the y axis (NS), and ± 0.7 gal for the z axis (UD).
Self-organization of cosmic radiation pressure instability. II - One-dimensional simulations
NASA Technical Reports Server (NTRS)
Hogan, Craig J.; Woods, Jorden
1992-01-01
The clustering of statistically uniform discrete absorbing particles moving solely under the influence of radiation pressure from uniformly distributed emitters is studied in a simple one-dimensional model. Radiation pressure tends to amplify statistical clustering in the absorbers; the absorbing material is swept into empty bubbles, the biggest bubbles grow bigger almost as they would in a uniform medium, and the smaller ones get crushed and disappear. Numerical simulations of a one-dimensional system are used to support the conjecture that the system is self-organizing. Simple statistics indicate that a wide range of initial conditions produce structure approaching the same self-similar statistical distribution, whose scaling properties follow those of the attractor solution for an isolated bubble. The importance of the process for large-scale structuring of the interstellar medium is briefly discussed.
Lee, MinJae; Rahbar, Mohammad H; Talebi, Hooshang
2018-01-01
We propose a nonparametric test for interactions when we are concerned with investigation of the simultaneous effects of two or more factors in a median regression model with right censored survival data. Our approach is developed to detect interaction in special situations, when the covariates have a finite number of levels with a limited number of observations in each level, and it allows varying levels of variance and censorship at different levels of the covariates. Through simulation studies, we compare the power of detecting an interaction between the study group variable and a covariate using our proposed procedure with that of the Cox Proportional Hazard (PH) model and censored quantile regression model. We also assess the impact of censoring rate and type on the standard error of the estimators of parameters. Finally, we illustrate application of our proposed method to real life data from Prospective Observational Multicenter Major Trauma Transfusion (PROMMTT) study to test an interaction effect between type of injury and study sites using median time for a trauma patient to receive three units of red blood cells. The results from simulation studies indicate that our procedure performs better than both Cox PH model and censored quantile regression model based on statistical power for detecting the interaction, especially when the number of observations is small. It is also relatively less sensitive to censoring rates or even the presence of conditionally independent censoring that is conditional on the levels of covariates.
Two-sample tests and one-way MANOVA for multivariate biomarker data with nondetects.
Thulin, M
2016-09-10
Testing whether the mean vector of a multivariate set of biomarkers differs between several populations is an increasingly common problem in medical research. Biomarker data is often left censored because some measurements fall below the laboratory's detection limit. We investigate how such censoring affects multivariate two-sample and one-way multivariate analysis of variance tests. Type I error rates, power and robustness to increasing censoring are studied, under both normality and non-normality. Parametric tests are found to perform better than non-parametric alternatives, indicating that the current recommendations for analysis of censored multivariate data may have to be revised. Copyright © 2016 John Wiley & Sons, Ltd.
Zhang, Xu; Zhang, Mei-Jie; Fine, Jason
2012-01-01
With competing risks failure time data, one often needs to assess the covariate effects on the cumulative incidence probabilities. Fine and Gray proposed a proportional hazards regression model to directly model the subdistribution of a competing risk. They developed the estimating procedure for right-censored competing risks data, based on the inverse probability of censoring weighting. Right-censored and left-truncated competing risks data sometimes occur in biomedical research. In this paper, we study the proportional hazards regression model for the subdistribution of a competing risk with right-censored and left-truncated data. We adopt a new weighting technique to estimate the parameters in this model. We have derived the large sample properties of the proposed estimators. To illustrate the application of the new method, we analyze the failure time data for children with acute leukemia. In this example, the failure times for children who had bone marrow transplants were left truncated. PMID:21557288
NASA Astrophysics Data System (ADS)
Condron, Eoin; Nolan, Brien C.
2014-08-01
We investigate self-similar scalar field solutions to the Einstein equations in whole cylinder symmetry. Imposing self-similarity on the spacetime gives rise to a set of single variable functions describing the metric. Furthermore, it is shown that the scalar field is dependent on a single unknown function of the same variable and that the scalar field potential has exponential form. The Einstein equations then take the form of a set of ODEs. Self-similarity also gives rise to a singularity at the scaling origin. We extend the work of Condron and Nolan (2014 Class. Quantum Grav. 31 015015), which determined the global structure of all solutions with a regular axis in the causal past of the singularity. We identified a class of solutions that evolves through the past null cone of the singularity. We give the global structure of these solutions and show that the singularity is censored in all cases.
JPRS Report, Near East and South Asia.
1991-07-08
...and our culture. We are not going to play at being censors," Ramdane adds. ... collaborators, two excellent connoisseurs of Berber language and culture, T... ... its conceived plan, wars will be started between countries for their consumption. There is only one ... way to avoid this new international imperialist system and that is, just ... propaganda war, America censored war coverage. The Western media which protested censor restrictions imposed during the...
Work Status Choice and the Distribution of Family Earnings.
1984-11-01
were in the market. Since wages for secondary earners are observable only for market participants, censoring corrections will have to be made to...obtain the true correlation of earners’ earnings. The problem of censoring corrections has been extensively studied in the female labor supply...earners are defined to be male members other than the HH. The censoring framework is fairly similar to the occupation-choice model discussed earlier
Matos, Larissa A.; Bandyopadhyay, Dipankar; Castro, Luis M.; Lachos, Victor H.
2015-01-01
In biomedical studies on HIV RNA dynamics, viral loads generate repeated measures that are often subjected to upper and lower detection limits, and hence these responses are either left- or right-censored. Linear and non-linear mixed-effects censored (LMEC/NLMEC) models are routinely used to analyse these longitudinal data, with normality assumptions for the random effects and residual errors. However, the derived inference may not be robust when these underlying normality assumptions are questionable, especially the presence of outliers and thick-tails. Motivated by this, Matos et al. (2013b) recently proposed an exact EM-type algorithm for LMEC/NLMEC models using a multivariate Student’s-t distribution, with closed-form expressions at the E-step. In this paper, we develop influence diagnostics for LMEC/NLMEC models using the multivariate Student’s-t density, based on the conditional expectation of the complete data log-likelihood. This partially eliminates the complexity associated with the approach of Cook (1977, 1986) for censored mixed-effects models. The new methodology is illustrated via an application to a longitudinal HIV dataset. In addition, a simulation study explores the accuracy of the proposed measures in detecting possible influential observations for heavy-tailed censored data under different perturbation and censoring schemes. PMID:26190871
Schaubel, Douglas E; Wei, Guanghui
2011-03-01
In medical studies of time-to-event data, nonproportional hazards and dependent censoring are very common issues when estimating the treatment effect. A traditional method for dealing with time-dependent treatment effects is to model the time-dependence parametrically. Limitations of this approach include the difficulty of verifying the correctness of the specified functional form and the fact that, in the presence of a treatment effect that varies over time, investigators are usually interested in the cumulative as opposed to instantaneous treatment effect. In many applications, censoring time is not independent of event time. Therefore, we propose methods for estimating the cumulative treatment effect in the presence of nonproportional hazards and dependent censoring. Three measures are proposed, including the ratio of cumulative hazards, relative risk, and difference in restricted mean lifetime. For each measure, we propose a double inverse-weighted estimator, constructed by first using inverse probability of treatment weighting (IPTW) to balance the treatment-specific covariate distributions, then using inverse probability of censoring weighting (IPCW) to overcome the dependent censoring. The proposed estimators are shown to be consistent and asymptotically normal. We study their finite-sample properties through simulation. The proposed methods are used to compare kidney wait-list mortality by race. © 2010, The International Biometric Society.
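The double inverse-weighting construction lends itself to a compact illustration. The sketch below is not the authors' implementation: it simulates a confounded treatment with covariate-dependent censoring, builds IPTW weights from a logistic propensity model and IPCW weights from a marginal Kaplan-Meier of the censoring distribution (the paper models censoring more carefully), and combines them into treatment-specific survival estimates and a cumulative-hazard ratio. All variable names and parameter values are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(0)
n = 4000
x = rng.normal(size=n)                                    # baseline confounder (hypothetical)
a = rng.binomial(1, 1 / (1 + np.exp(-x)))                 # treatment assignment depends on x
t_event = rng.exponential(np.exp(0.5 * x - 0.7 * a))      # event time
t_cens = rng.exponential(np.exp(0.5 * x))                 # censoring also depends on x
t_obs = np.minimum(t_event, t_cens)
event = (t_event <= t_cens).astype(int)

# IPTW: inverse probability of the received treatment, from a logistic propensity model
ps = LogisticRegression().fit(x[:, None], a).predict_proba(x[:, None])[:, 1]
w_trt = np.where(a == 1, 1 / ps, 1 / (1 - ps))

# IPCW: probability of remaining uncensored up to each observed time, here taken
# from a marginal Kaplan-Meier of the censoring distribution (a simplification).
km_c = KaplanMeierFitter().fit(t_obs, event_observed=1 - event)
s_c = np.clip(km_c.survival_function_at_times(t_obs).values, 1e-3, None)

def ipcw_survival(tau, mask):
    """Double inverse-weighted estimate of S(tau) within one treatment arm."""
    num = w_trt[mask] * (event[mask] == 1) * (t_obs[mask] <= tau) / s_c[mask]
    return 1.0 - num.sum() / w_trt[mask].sum()

tau = np.quantile(t_obs, 0.5)
s1, s0 = ipcw_survival(tau, a == 1), ipcw_survival(tau, a == 0)
print(f"S_treated({tau:.2f}) = {s1:.3f}, S_control({tau:.2f}) = {s0:.3f}, "
      f"cumulative-hazard ratio ~ {np.log(s1) / np.log(s0):.3f}")
```

The last line uses the identity Lambda(t) = -log S(t), so the printed ratio corresponds to the paper's ratio-of-cumulative-hazards measure at the chosen landmark time.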
Kalia, Sumeet; Klar, Neil; Donner, Allan
2016-12-30
Cluster randomized trials (CRTs) involve the random assignment of intact social units rather than independent subjects to intervention groups. Time-to-event outcomes often are endpoints in CRTs. Analyses of such data need to account for the correlation among cluster members. The intracluster correlation coefficient (ICC) is used to assess the similarity among binary and continuous outcomes that belong to the same cluster. However, estimating the ICC in CRTs with time-to-event outcomes is a challenge because of the presence of censored observations. The literature suggests that the ICC may be estimated using either censoring indicators or observed event times. A simulation study explores the effect of administrative censoring on estimating the ICC. Results show that ICC estimators derived from censoring indicators or observed event times are negatively biased. Analytic work further supports these results. Observed event times are preferred for estimating the ICC when administrative censoring is minimal. To our knowledge, the existing literature provides no practical guidance on the estimation of the ICC when a substantial amount of administrative censoring is present. The results from this study corroborate the need for further methodological research on estimating the ICC for correlated time-to-event outcomes. Copyright © 2016 John Wiley & Sons, Ltd.
Kirby, David A
2017-09-01
In the mid-twentieth century film studios sent their screenplays to Hollywood's official censorship body, the Production Code Administration (PCA), and to the Catholic Church's Legion of Decency for approval and recommendations for revision. This article examines the negotiations between filmmakers and censorship groups in order to show the stories that censors did, and did not, want told about pregnancy, childbirth and abortion, as well as how studios fought to tell their own stories about human reproduction. I find that censors considered pregnancy to be a state of grace and a holy obligation that was restricted to married women. For censors, human reproduction was not only a private matter, it was also an unpleasant biological process whose entertainment value was questionable. They worried that realistic portrayals of pregnancy and childbirth would scare young women away from pursuing motherhood. In addition, I demonstrate how filmmakers overcame censors' strict prohibitions against abortion by utilizing ambiguity in their storytelling. Ultimately, I argue that censors believed that pregnancy and childbirth should be celebrated but not seen. But if pregnancy and childbirth were required then censors preferred mythic versions of motherhood instead of what they believed to be the sacred but horrific biological reality of human reproduction.
Censoring approach to the detection limits in X-ray fluorescence analysis
NASA Astrophysics Data System (ADS)
Pajek, M.; Kubala-Kukuś, A.
2004-10-01
We demonstrate that the effect of detection limits in X-ray fluorescence analysis (XRF), which limits the determination of very low concentrations of trace elements and results in the appearance of so-called "nondetects", can be accounted for using the statistical concept of censoring. More precisely, the results of such measurements can be viewed as left randomly censored data, which can further be analyzed using the Kaplan-Meier method to correct the data for the presence of nondetects. Using this approach, the measured, detection-limit-censored concentrations can be interpreted in a nonparametric manner including the correction for the nondetects, i.e. the measurements in which the concentrations were found to be below the actual detection limits. Moreover, using the Monte Carlo simulation technique we show that with the Kaplan-Meier approach the corrected mean concentrations for a population of samples can be estimated to within a few percent with respect to the simulated, uncensored data. In practice this means that the final uncertainties of the estimated mean values are limited by the number of studied samples and not by the correction procedure itself. The discussed random left-censoring approach was applied to analyze the XRF detection-limit-censored concentration measurements of trace elements in biomedical samples.
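As a concrete illustration of the Kaplan-Meier treatment of nondetects, here is a minimal numpy sketch using the common "flipping" trick: left-censored concentrations are reflected about a constant larger than every value so that they become right-censored, the standard product-limit estimator is applied, and the result is transformed back into a detection-limit-corrected distribution. The concentrations and detection limits are invented, and this is only a sketch of the general idea, not the authors' code.

```python
import numpy as np

# Hypothetical XRF concentrations (ppm); np.nan marks a nondetect whose detection
# limit is stored in `dl`. All numbers are invented for illustration; no ties.
conc = np.array([1.2, 0.8, np.nan, 0.2, np.nan, 2.5, 3.1, np.nan, 1.7, 0.9])
dl   = np.array([0.0, 0.0, 0.5,    0.0, 0.3,    0.0, 0.0, 0.6,    0.0, 0.0])

detected = ~np.isnan(conc)
value = np.where(detected, conc, dl)        # nondetects enter at their detection limit

# Flip about a constant larger than every value: left-censored concentrations
# become right-censored "survival times", so the standard Kaplan-Meier applies.
M = value.max() + 1.0
flipped = M - value

order = np.argsort(flipped)
t, d = flipped[order], detected[order]
at_risk = np.arange(len(t), 0, -1)
surv = np.cumprod(np.where(d, 1.0 - 1.0 / at_risk, 1.0))   # KM on the flipped scale

# Back-transform: surv[k] estimates P(conc < M - t[k]), i.e. a CDF of concentration
# corrected for the nondetects.
conc_grid = M - t
cdf = surv

# KM estimate of the mean: area under the flipped survival curve, then un-flip.
# Valid here because the smallest observation is a detect; if it were a nondetect
# only a restricted mean would be available.
steps = np.concatenate(([0.0], t))
mean_flipped = np.sum(np.diff(steps) * np.concatenate(([1.0], surv))[:-1])
print("KM-corrected mean concentration  :", round(M - mean_flipped, 3))
print("KM-corrected median concentration:", round(float(conc_grid[cdf >= 0.5].min()), 3))
```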
NASA Astrophysics Data System (ADS)
Thompson, E. M.; Hewlett, J. B.; Baise, L. G.; Vogel, R. M.
2011-01-01
Annual maximum (AM) time series are incomplete (i.e., censored) when no events are included above the assumed censoring threshold (i.e., magnitude of completeness). We introduce a distributional hypothesis test for left-censored Gumbel observations based on the probability plot correlation coefficient (PPCC). Critical values of the PPCC hypothesis test statistic are computed from Monte Carlo simulations and are a function of sample size, censoring level, and significance level. When applied to a global catalog of earthquake observations, the left-censored Gumbel PPCC tests are unable to reject the Gumbel hypothesis for 45 of 46 seismic regions. We apply four different field significance tests for combining individual tests into a collective hypothesis test. None of the field significance tests are able to reject the global hypothesis that AM earthquake magnitudes arise from a Gumbel distribution. Because the field significance levels are not conclusive, we also compute the likelihood that these field significance tests are unable to reject the Gumbel model when the samples arise from a more complex distributional alternative. A power study documents that the censored Gumbel PPCC test is unable to reject some important and viable Generalized Extreme Value (GEV) alternatives. Thus, we cannot rule out the possibility that the global AM earthquake time series could arise from a GEV distribution with a finite upper bound, also known as a reverse Weibull distribution. Our power study also indicates that the binomial and uniform field significance tests are substantially more powerful than the more commonly used Bonferroni and false discovery rate multiple comparison procedures.
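The core of such a test can be sketched in a few lines: correlate the ordered uncensored annual maxima with Gumbel quantiles at their plotting positions, and calibrate the statistic against Monte Carlo simulations under the Gumbel null for the same sample size and censoring threshold. The sketch below uses Gringorten plotting positions and a fixed threshold (so the simulated censoring level varies slightly around its nominal value, unlike the fixed censoring levels tabulated in the paper); it is an assumption-laden illustration, not the authors' procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def censored_gumbel_ppcc(x, threshold, n_total):
    """PPCC for a left-censored Gumbel sample: correlation between the ordered
    uncensored observations and Gumbel quantiles at their plotting positions.
    `n_total` is the nominal sample size including the censored (missing) years."""
    x = np.sort(x[x >= threshold])
    k = n_total - len(x)                      # number of censored observations
    ranks = np.arange(k + 1, n_total + 1)     # uncensored values occupy the top ranks
    pp = (ranks - 0.44) / (n_total + 0.12)    # Gringorten plotting positions (one common choice)
    q = stats.gumbel_r.ppf(pp)                # standard Gumbel quantiles
    return np.corrcoef(q, x)[0, 1]

# Monte Carlo critical value under the Gumbel null, for this n and threshold
n, thresh = 60, 0.0
sims = [censored_gumbel_ppcc(stats.gumbel_r.rvs(size=n, random_state=rng), thresh, n)
        for _ in range(2000)]
crit = np.quantile(sims, 0.05)                # 5% significance level

sample = stats.gumbel_r.rvs(loc=0.3, scale=1.2, size=n, random_state=rng)
r = censored_gumbel_ppcc(sample, thresh, n)
print(f"PPCC = {r:.4f}, 5% critical value = {crit:.4f}, reject Gumbel: {r < crit}")
```

Because the test sample drawn here is itself Gumbel (just with a different location and scale), the PPCC typically stays above the critical value, mirroring the non-rejections reported for 45 of the 46 seismic regions.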
Attallah, Omneya; Karthikesalingam, Alan; Holt, Peter Je; Thompson, Matthew M; Sayers, Rob; Bown, Matthew J; Choke, Eddie C; Ma, Xianghong
2017-11-01
Feature selection is essential in the medical domain; however, the process becomes complicated in the presence of censoring, the defining feature of survival analysis. Most survival feature selection methods are based on Cox's proportional hazards model, even though machine learning classifiers are often preferred. Such classifiers are less commonly employed in survival analysis because censoring prevents them from being applied directly to survival data. Among the few works that have employed machine learning classifiers, the partial logistic artificial neural network with automatic relevance determination is a well-known method that handles censoring and performs feature selection for survival data. However, it depends on data replication to handle censoring, which leads to unbalanced and biased prediction results, especially in highly censored data, and other methods cannot deal with high censoring at all. Therefore, in this article, a new hybrid feature selection method is proposed which presents a solution to high levels of censoring. It combines support vector machine, neural network, and K-nearest neighbor classifiers using simple majority voting and a new weighted majority voting method based on a survival metric to construct a multiple classifier system. The new hybrid feature selection process uses the multiple classifier system as a wrapper method and merges it with an iterated feature ranking filter method to further reduce the feature set. Two endovascular aortic repair datasets containing 91% censored patients, collected from two centers, were used to construct a multicenter study evaluating the performance of the proposed approach. The results showed that the proposed technique outperformed individual classifiers and variable selection methods based on Cox's model, such as the Akaike and Bayesian information criteria and the least absolute shrinkage and selection operator, in terms of log-rank test p values, sensitivity, and concordance index. This indicates that the proposed classifier is more powerful in correctly predicting the risk of re-intervention, enabling doctors to select patients' future follow-up plans.
Pair correlation and twin primes revisited.
Conrey, Brian; Keating, Jonathan P
2016-10-01
We establish a connection between the conjectural two-over-two ratios formula for the Riemann zeta-function and a conjecture concerning correlations of a certain arithmetic function. Specifically, we prove that the ratios conjecture and the arithmetic correlations conjecture imply the same result. This casts a new light on the underpinnings of the ratios conjecture, which previously had been motivated by analogy with formulae in random matrix theory and by a heuristic recipe.
Covariate analysis of bivariate survival data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bennett, L.E.
1992-01-01
The methods developed are used to analyze the effects of covariates on bivariate survival data when censoring and ties are present. The proposed method provides models for bivariate survival data that include differential covariate effects and censored observations. The proposed models are based on an extension of the univariate Buckley-James estimators which replace censored data points by their expected values, conditional on the censoring time and the covariates. For the bivariate situation, it is necessary to determine the expectation of the failure times for one component conditional on the failure or censoring time of the other component. Two different methods have been developed to estimate these expectations. In the semiparametric approach these expectations are determined from a modification of Burke's estimate of the bivariate empirical survival function. In the parametric approach censored data points are also replaced by their conditional expected values where the expected values are determined from a specified parametric distribution. The model estimation will be based on the revised data set, comprised of uncensored components and expected values for the censored components. The variance-covariance matrix for the estimated covariate parameters has also been derived for both the semiparametric and parametric methods. Data from the Demographic and Health Survey was analyzed by these methods. The two outcome variables are post-partum amenorrhea and breastfeeding; education and parity were used as the covariates. Both the covariate parameter estimates and the variance-covariance estimates for the semiparametric and parametric models will be compared. In addition, a multivariate test statistic was used in the semiparametric model to examine contrasts. The significance of the statistic was determined from a bootstrap distribution of the test statistic.
The association between cinacalcet use and missed in-center hemodialysis treatment rate.
Brunelli, Steven M; Sibbel, Scott; Dluzniewski, Paul J; Cooper, Kerry; Bensink, Mark E; Bradbury, Brian D
2016-11-01
Missed in-center hemodialysis treatments (MHT) are a general indicator of health status in hemodialysis patients. This analysis was conducted to estimate the association between cinacalcet use and MHT rate. We studied patients receiving hemodialysis and prescription benefits services from a large dialysis organization. Incident cinacalcet users were propensity score matched to controls on 31 demographic, clinical, and laboratory variables. We applied inverse probability (IP) of censoring and crossover weights to account for informative censoring. Weighted negative binomial modeling was used to estimate MHT rates and pooled logistic models were used to estimate the association between cinacalcet use and MHT. Baseline demographic and clinical variables included serum calcium, phosphorus, parathyroid hormone, and vitamin D use, and were balanced between 15,474 new cinacalcet users and 15,474 matched controls. In an analysis based on intention-to-treat principles, 40.8% of cinacalcet users and 46.5% of nonusers were censored. MHT rate was 13% lower among cinacalcet initiators versus controls: the IP of censoring weighted incidence rate ratio was 0.87 (95% confidence interval [CI]: 0.84-0.90; p < 0.001). In analyses based on as-treated principles, 72.8% and 61.5% of cinacalcet users and nonusers, respectively, crossed over or were censored. MHT rate was 15% lower among cinacalcet initiators versus controls: the IP of censoring/crossover weighted incidence rate ratio was 0.85 (95% CI: 0.82-0.87; p < 0.001). After controlling for indication and differential censoring, cinacalcet treatment was associated with lower MHT rates, which may reflect better health status. Copyright © 2016 John Wiley & Sons, Ltd.
The Jungle Universe: coupled cosmological models in a Lotka-Volterra framework
NASA Astrophysics Data System (ADS)
Perez, Jérôme; Füzfa, André; Carletti, Timoteo; Mélot, Laurence; Guedezounme, Lazare
2014-06-01
In this paper, we exploit the fact that the dynamics of homogeneous and isotropic Friedmann-Lemaître universes is a special case of a generalized Lotka-Volterra system where the competitive species are the barotropic fluids filling the Universe. Without coupling between those fluids, the Lotka-Volterra formulation offers a pedagogical and simple way to interpret the usual Friedmann-Lemaître cosmological dynamics. A natural and physical coupling between cosmological fluids is proposed which preserves the structure of the dynamical equations. Using the standard tools of Lotka-Volterra dynamics, we obtain the general Lyapunov function of the system when one of the fluids is coupled to dark energy. This provides, in a rigorous form, a generic asymptotic behavior for cosmic expansion in the presence of coupled species, beyond the standard de Sitter, Einstein-de Sitter and Milne cosmologies. Finally, we conjecture that chaos can appear for at least four interacting fluids.
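To see the Lotka-Volterra structure concretely, the sketch below integrates the uncoupled case: writing Omega_i = rho_i/(3H^2) for each barotropic fluid with equation of state p_i = w_i rho_i, the Friedmann equations can be rewritten in e-folds N as dOmega_i/dN = Omega_i [3 sum_j Omega_j (1 + w_j) - 3(1 + w_i)], a competitive Lotka-Volterra-type system. This rewriting and the initial conditions are assumptions of the sketch (a standard textbook form, not code from the paper), and no fluid coupling is included.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Equation-of-state parameters: radiation, matter, cosmological constant
w = np.array([1.0 / 3.0, 0.0, -1.0])

def domega_dN(N, omega):
    # dOmega_i/dN = Omega_i * [ 3 * sum_j Omega_j (1 + w_j) - 3 * (1 + w_i) ]
    s = 3.0 * np.sum(omega * (1.0 + w))
    return omega * (s - 3.0 * (1.0 + w))

omega0 = np.array([0.9, 0.0999, 1e-4])   # early-time mix (hypothetical numbers, sums to 1)
sol = solve_ivp(domega_dN, (0.0, 20.0), omega0, dense_output=True, rtol=1e-8)

for N in (0, 5, 10, 15, 20):
    r, m, L = sol.sol(N)
    print(f"N = {N:2d}: Omega_r = {r:.3e}, Omega_m = {m:.3e}, Omega_L = {L:.3e}")
```

Running it shows the familiar radiation-to-matter-to-Lambda sequence, with the de Sitter point Omega_L = 1 acting as the late-time attractor of the uncoupled system.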
Collapsing radiating stars with various equations of state
NASA Astrophysics Data System (ADS)
Brassel, Byron P.; Goswami, Rituparno; Maharaj, Sunil D.
2017-06-01
We study the gravitational collapse of radiating stars in the context of the cosmic censorship conjecture. We consider a generalized Vaidya spacetime with three concentric regions. The local internal atmosphere is a two-component system consisting of standard pressure-free null radiation and an additional string fluid with energy density and nonzero pressure obeying all physically realistic energy conditions. The middle region is purely radiative and matches to a third region, the Schwarzschild exterior. We outline the general mathematical framework for studying the conditions on the mass function under which future-directed nonspacelike geodesics can terminate at the singularity in the past. Mass functions for several equations of state are analyzed using this framework and it is shown that the collapse in each case terminates at a locally naked central singularity. We calculate the strength of these singularities to show that they are strong curvature singularities, which implies that no extension of spacetime through them is possible.
ERIC Educational Resources Information Center
Tuma, Nancy Brandon; Hannan, Michael T.
The document, part of a series of chapters described in SO 011 759, considers the problem of censoring in the analysis of event-histories (data on dated events, including dates of change from one qualitative state to another). Censoring refers to the lack of information on events that occur before or after the period for which data are available.…
Zhang, Jinguang
2017-01-01
At least in the United States, there are widespread concerns with advertising that encourages alcohol consumption, and previous research explains those concerns as aiming to protect others from the harm of excessive alcohol use. Drawing on sexual strategies theory, we hypothesized that support of censoring pro-alcohol advertising is ultimately self-benefiting regardless of its altruistic effect at a proximate level. Excessive drinking positively correlates with having casual sex, and casual sex threatens monogamy, one of the major means by which people adopting a long-term sexual strategy increase their inclusive fitness. One way for long-term strategists to protect monogamy, and thus their reproductive interest, is to support censoring pro-alcohol advertising, thereby preventing others from becoming excessive drinkers (and consequently having casual sex) under media influence. Supporting this hypothesis, three studies consistently showed that restricted sociosexuality positively correlated with support of censoring pro-alcohol advertising before and after various value-, ideological-, and moral-foundation variables were controlled for. Also as predicted, Study 3 revealed a significant indirect effect of sociosexuality on censorship support through perceived media influence on others but not through perceived media influence on self. These findings further supported a self-interest analysis of issue opinions, extended third-person-effect research on support of censoring pro-alcohol advertising, and suggested a novel approach to analyzing media censorship support.
Henschel, Volkmar; Engel, Jutta; Hölzel, Dieter; Mansmann, Ulrich
2009-02-10
Multivariate analysis of interval-censored event data based on classical likelihood methods is notoriously cumbersome, and likelihood inference for models which additionally include random effects is not available at all. Existing algorithms pose problems for practical users, such as matrix inversion, slow convergence, and no assessment of statistical uncertainty. MCMC procedures combined with imputation are used to implement hierarchical models for interval-censored data within a Bayesian framework. Two examples from clinical practice demonstrate the handling of clustered interval-censored event times as well as multilayer random effects for inter-institutional quality assessment. The software developed is called survBayes and is freely available at CRAN. The proposed software supports the solution of complex analyses in many fields of clinical epidemiology as well as health services research.
Model Calibration with Censored Data
Cao, Fang; Ba, Shan; Brenneman, William A.; ...
2017-06-28
Here, the purpose of model calibration is to make the model predictions closer to reality. The classical Kennedy-O'Hagan approach is widely used for model calibration, which can account for the inadequacy of the computer model while simultaneously estimating the unknown calibration parameters. In many applications, the phenomenon of censoring occurs when the exact outcome of the physical experiment is not observed, but is only known to fall within a certain region. In such cases, the Kennedy-O'Hagan approach cannot be used directly, and we propose a method to incorporate the censoring information when performing model calibration. The method is applied to study the compression phenomenon of liquid inside a bottle. The results show significant improvement over the traditional calibration methods, especially when the number of censored observations is large.
Statistical analysis tables for truncated or censored samples
NASA Technical Reports Server (NTRS)
Cohen, A. C.; Cooley, C. G.
1971-01-01
Compilation describes characteristics of truncated and censored samples, and presents six illustrations of practical use of tables in computing mean and variance estimates for normal distribution using selected samples.
Nonparametric and Semiparametric Regression Estimation for Length-biased Survival Data
Shen, Yu; Ning, Jing; Qin, Jing
2016-01-01
For the past several decades, nonparametric and semiparametric modeling for conventional right-censored survival data has been investigated intensively under a noninformative censoring mechanism. However, these methods may not be applicable for analyzing right-censored survival data that arise from prevalent cohorts when the failure times are subject to length-biased sampling. This review article is intended to provide a summary of some newly developed methods as well as established methods for analyzing length-biased data. PMID:27086362
Developing Learning Theory by Refining Conjectures Embodied in Educational Designs
ERIC Educational Resources Information Center
Sandoval, William A.
2004-01-01
Designed learning environments embody conjectures about learning and instruction, and the empirical study of learning environments allows such conjectures to be refined over time. The construct of embodied conjecture is introduced as a way to demonstrate the theoretical nature of learning environment design and to frame methodological issues in…
A proof of the conjecture on the twin primes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zuo-ling, Zhou
2016-06-08
In this short note, we prove the conjecture on twin primes using some ideas from set theory. Firstly, using the original sieve method and a new notion (concept) introduced by the author, the conjecture on twin primes is reduced to an elementary successive limit; afterwards, we form a subsequence of positive integers and, using it, prove that the successive limits are commutative, completing the proof of the conjecture on twin primes. We also give a more straightforward proof of the conjecture.
Marginal regression analysis of recurrent events with coarsened censoring times.
Hu, X Joan; Rosychuk, Rhonda J
2016-12-01
Motivated by an ongoing pediatric mental health care (PMHC) study, this article presents weakly structured methods for analyzing doubly censored recurrent event data where only coarsened information on censoring is available. The study extracted administrative records of emergency department visits from provincial health administrative databases. The available information on each individual subject is limited to a subject-specific time window determined up to concealed data. To evaluate the time-dependent effect of exposures, we adapt the local linear estimation with right-censored survival times under the Cox regression model with time-varying coefficients (cf. Cai and Sun, Scandinavian Journal of Statistics 2003, 30, 93-111). We establish the pointwise consistency and asymptotic normality of the regression parameter estimator, and examine its performance by simulation. The PMHC study illustrates the proposed approach throughout the article. © 2016, The International Biometric Society.
Total-reflection X-ray fluorescence studies of trace elements in biomedical samples
NASA Astrophysics Data System (ADS)
Kubala-Kukuś, A.; Braziewicz, J.; Pajek, M.
2004-08-01
Application of total-reflection X-ray fluorescence (TXRF) analysis in studies of trace element contents in biomedical samples is discussed in the following aspects: (i) the nature of trace element concentration distributions, (ii) the censoring approach to the detection limits, and (iii) the comparison of two sets of censored data. The paper summarizes recent results on these topics, in particular the lognormal, or more generally log-stable, nature of the concentration distributions of trace elements, the random left-censoring and Kaplan-Meier approach accounting for detection limits and, finally, the application of the logrank test to compare the censored concentrations measured for two groups. These aspects, which are of importance for applications of TXRF in different fields, are discussed here in the context of TXRF studies of trace elements in various samples of medical interest.
Chanat, Jeffrey G.; Moyer, Douglas L.; Blomquist, Joel D.; Hyer, Kenneth E.; Langland, Michael J.
2016-01-13
Inconsistencies related to changing laboratory methods were also examined via two manipulative experiments. In the first experiment, increasing and decreasing "stair-step" patterns of changes in censoring level, overall representing a factor-of-five change in the laboratory reporting limit, were artificially imposed on a 27-year record with no censoring and a period-of-record concentration trend of –68.4 percent. Trends estimated on the basis of the manipulated records were broadly similar to the original trend (–63.6 percent for decreasing censoring levels and –70.3 percent for increasing censoring levels), lending a degree of confidence that the survival regression routines upon which WRTDS is based are generally robust to data censoring. The second experiment considered an abrupt disappearance of low-concentration observations of total phosphorus, associated with a laboratory method change and not reflected through censoring, near the middle of a 28-year record. By process of elimination, an upward shift in the estimated flow-normalized concentration trend line around the same time was identified as a likely artifact resulting from the laboratory method change, although a contemporaneous change in watershed processes cannot be ruled out. Decisions as to how to treat records with potential sampling protocol or laboratory methods-related artifacts should be made on a case-by-case basis, and trend results should be appropriately qualified.
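The "survival regression" at the heart of this kind of trend estimation treats values below a reporting limit as left-censored observations in a censored-Gaussian (Tobit-type) regression of log concentration on explanatory variables. The sketch below is not WRTDS; it fits, by maximum likelihood, a single linear trend in log concentration to an invented record whose reporting limit changes mid-record, with censored values contributing P(Y < reporting limit) to the likelihood.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(2)

# Hypothetical record: log concentration declines linearly with time, and values
# below a laboratory reporting limit (also on the log scale) are censored.
n = 300
time = np.sort(rng.uniform(0.0, 27.0, n))           # years since start of record
true_logc = 1.0 - 0.04 * time + rng.normal(0.0, 0.5, n)
rl = np.where(time < 14.0, -0.2, 0.5)               # reporting limit rises mid-record
censored = true_logc < rl
y = np.where(censored, rl, true_logc)               # censored values recorded at the limit

def negloglik(theta):
    b0, b1, log_sigma = theta
    sigma = np.exp(log_sigma)
    mu = b0 + b1 * time
    # uncensored points: Gaussian density; left-censored points: P(Y < reporting limit)
    ll_obs = stats.norm.logpdf(y[~censored], mu[~censored], sigma)
    ll_cen = stats.norm.logcdf(y[censored], mu[censored], sigma)
    return -(ll_obs.sum() + ll_cen.sum())

fit = optimize.minimize(negloglik, x0=np.zeros(3), method="Nelder-Mead")
b0, b1, log_sigma = fit.x
print(f"intercept = {b0:.3f}, trend slope = {b1:.4f} per year, sigma = {np.exp(log_sigma):.3f}")
```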
Estimation of indirect effect when the mediator is a censored variable.
Wang, Jian; Shete, Sanjay
2017-01-01
A mediation model explores the direct and indirect effects of an initial variable (X) on an outcome variable (Y) by including a mediator (M). In many realistic scenarios, investigators observe censored data instead of the complete data. Current research in mediation analysis for censored data focuses mainly on censored outcomes, but not censored mediators. In this study, we proposed a strategy based on the accelerated failure time model and a multiple imputation approach. We adapted a measure of the indirect effect for the mediation model with a censored mediator, which can assess the indirect effect at both the group and individual levels. Based on simulation, we established the bias in the estimations of different paths (i.e. the effects of X on M [a], of M on Y [b] and of X on Y given mediator M [c']) and indirect effects when analyzing the data using the existing approaches, including a naïve approach implemented in software such as Mplus, complete-case analysis, and the Tobit mediation model. We conducted simulation studies to investigate the performance of the proposed strategy compared to that of the existing approaches. The proposed strategy accurately estimates the coefficients of different paths, indirect effects and percentages of the total effects mediated. We applied these mediation approaches to the study of SNPs, age at menopause and fasting glucose levels. Our results indicate that there is no indirect effect of association between SNPs and fasting glucose level that is mediated through the age at menopause.
Annotation, submission and screening of repetitive elements in Repbase: RepbaseSubmitter and Censor.
Kohany, Oleksiy; Gentles, Andrew J; Hankus, Lukasz; Jurka, Jerzy
2006-10-25
Repbase is a reference database of eukaryotic repetitive DNA, which includes prototypic sequences of repeats and basic information described in annotations. Updating and maintenance of the database requires specialized tools, which we have created and made available for use with Repbase, and which may be useful as a template for other curated databases. We describe the software tools RepbaseSubmitter and Censor, which are designed to facilitate updating and screening the content of Repbase. RepbaseSubmitter is a java-based interface for formatting and annotating Repbase entries. It eliminates many common formatting errors, and automates actions such as calculation of sequence lengths and composition, thus facilitating curation of Repbase sequences. In addition, it has several features for predicting protein coding regions in sequences; searching and including Pubmed references in Repbase entries; and searching the NCBI taxonomy database for correct inclusion of species information and taxonomic position. Censor is a tool to rapidly identify repetitive elements by comparison to known repeats. It uses WU-BLAST for speed and sensitivity, and can conduct DNA-DNA, DNA-protein, or translated DNA-translated DNA searches of genomic sequence. Defragmented output includes a map of repeats present in the query sequence, with the options to report masked query sequence(s), repeat sequences found in the query, and alignments. Censor and RepbaseSubmitter are available as both web-based services and downloadable versions. They can be found at http://www.girinst.org/repbase/submission.html (RepbaseSubmitter) and http://www.girinst.org/censor/index.php (Censor).
On the Strong Direct Summand Conjecture
ERIC Educational Resources Information Center
McCullough, Jason
2009-01-01
In this thesis, our aim is to study the Vanishing of Maps of Tor Conjecture of Hochster and Huneke. We mainly focus on an equivalent characterization called the Strong Direct Summand Conjecture, due to N. Ranganathan. Our results are separated into three chapters. In Chapter 3, we prove special cases of the Strong Direct Summand Conjecture in…
NASA Astrophysics Data System (ADS)
Yang, Chao Yuan
2012-05-01
Anomalous decelerations of the spacecraft Pioneer 10, 11, etc., could be interpreted as a signal delay effect between the speed of gravity and that of light as reflected in virtual scale, similar to the covarying virtual scale effect in relative motion (http://arxiv.org/html/math-ph/0001019v5). A finite speed of gravity faster than light could be inferred (http://arXiv.org/html/physics/0001034v2). Measurements of gravitational variations by a paraconical pendulum during a total solar eclipse infer the same (http://arXiv.org/html/physics/0001034v9). A finite superluminal speed of gravity is the necessary condition for the existence of a gravitational horizon (GH). Such a "GH" of our Universe would stretch far beyond the cosmic event horizon of light. Dark energy may be owing to mutually interactive gravitational horizons of cousin universes. A sufficient condition for the conjecture is that the dark energy would be increasing with the age of our Universe since accelerated expansion started about 5 Gyr ago, since more and more arrivals of the "GH" of distant cousin universes would interact with the "GH" of our Universe. The history of dark energy variations between then and now would be desirable (http://arXiv.org/html/physics/0001034). In the "GH" conjecture, the neighborhood of cousin universes would likely be boundless in 4D space-time without beginning or end. The dark energy would keep all universes in continually accelerated expansion to eventual fragmentation. Fragments would crash and merge into bangs, big or small, to form another generation of cousin universes. These scenarios might offer a clue to what was before the big bang.
Research participant compensation: A matter of statistical inference as well as ethics.
Swanson, David M; Betensky, Rebecca A
2015-11-01
The ethics of compensation of research subjects for participation in clinical trials has been debated for years. One ethical issue of concern is variation among subjects in the level of compensation for identical treatments. Surprisingly, the impact of variation on the statistical inferences made from trial results has not been examined. We seek to identify how variation in compensation may influence any existing dependent censoring in clinical trials, thereby also influencing inference about the survival curve, hazard ratio, or other measures of treatment efficacy. In simulation studies, we consider a model for how compensation structure may influence the censoring model. Under existing dependent censoring, we estimate survival curves under different compensation structures and observe how these structures induce variability in the estimates. We show through this model that if the compensation structure affects the censoring model and dependent censoring is present, then variation in that structure induces variation in the estimates and affects the accuracy of estimation and inference on treatment efficacy. From the perspectives of both ethics and statistical inference, standardization and transparency in the compensation of participants in clinical trials are warranted. Copyright © 2015 Elsevier Inc. All rights reserved.
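The statistical concern can be made concrete with a small simulation, sketched below under invented assumptions: a latent factor (standing in for whatever drives compensation-related dropout) shortens both the event time and the censoring time, and the naive Kaplan-Meier estimate of survival drifts away from the truth, whereas it stays on target when censoring is independent.

```python
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(3)
n = 5000
frailty = rng.normal(size=n)                           # latent factor shared by both processes

t_event = rng.exponential(np.exp(-0.8 * frailty))      # high-frailty subjects fail sooner...
t_indep = rng.exponential(2.0, size=n)                 # independent censoring
t_dep   = rng.exponential(np.exp(-0.8 * frailty))      # ...and also drop out sooner

for label, t_cens in [("independent censoring", t_indep), ("dependent censoring  ", t_dep)]:
    t_obs = np.minimum(t_event, t_cens)
    d = (t_event <= t_cens).astype(int)
    kmf = KaplanMeierFitter().fit(t_obs, event_observed=d)
    s_hat = float(kmf.survival_function_at_times(1.0).iloc[0])
    print(f"{label}: KM estimate of S(1.0) = {s_hat:.3f}")

# Truth, computed from the uncensored event times
print(f"true S(1.0) = {np.mean(t_event > 1.0):.3f}")
```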
Helsel, Dennis R.; Gilliom, Robert J.
1986-01-01
Estimates of distributional parameters (mean, standard deviation, median, interquartile range) are often desired for data sets containing censored observations. Eight methods for estimating these parameters have been evaluated by R. J. Gilliom and D. R. Helsel (this issue) using Monte Carlo simulations. To verify those findings, the same methods are now applied to actual water quality data. The best method (lowest root-mean-squared error (rmse)) over all parameters, sample sizes, and censoring levels is log probability regression (LR), the method found best in the Monte Carlo simulations. Best methods for estimating moment or percentile parameters separately are also identical to the simulations. Reliability of these estimates can be expressed as confidence intervals using rmse and bias values taken from the simulation results. Finally, a new simulation study shows that best methods for estimating uncensored sample statistics from censored data sets are identical to those for estimating population parameters. Thus this study and the companion study by Gilliom and Helsel form the basis for making the best possible estimates of either population parameters or sample statistics from censored water quality data, and for assessments of their reliability.
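A minimal version of the log-probability regression (LR) idea is easy to sketch for the single-detection-limit case: regress the logarithms of the uncensored concentrations on their normal scores, use the fitted line to fill in the below-detection portion of the assumed lognormal distribution, and compute summary statistics from the combined values. The plotting positions, detection limit, and data below are invented, and records with multiple detection limits need the fuller bookkeeping described in these papers.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Hypothetical lognormal concentrations with a single detection limit
true = rng.lognormal(mean=0.0, sigma=1.0, size=50)
dl = 0.5
detected = true >= dl

n = len(true)
n_cens = int((~detected).sum())

# Plotting positions for the full sample (Hazen positions; one common choice).
# With a single detection limit, censored observations occupy the lowest ranks.
pp = (np.arange(1, n + 1) - 0.5) / n
z = stats.norm.ppf(pp)

# Regress log(uncensored concentration) on its normal score
x_unc = np.sort(true[detected])
z_unc = z[n_cens:]                                  # the top n - n_cens positions
res = stats.linregress(z_unc, np.log(x_unc))

# Impute the censored (zero-to-detection-limit) portion from the fitted line
imputed = np.exp(res.intercept + res.slope * z[:n_cens])
filled = np.concatenate([imputed, x_unc])

print("estimated mean:", round(filled.mean(), 3), " vs uncensored sample mean:", round(true.mean(), 3))
print("estimated sd  :", round(filled.std(ddof=1), 3), " vs uncensored sample sd  :", round(true.std(ddof=1), 3))
```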
The covariant entropy conjecture and concordance cosmological models
NASA Astrophysics Data System (ADS)
He, Song; Zhang, Hongbao
2008-10-01
Recently a covariant entropy conjecture has been proposed for dynamical horizons. We apply this conjecture to concordance cosmological models, namely, those cosmological models filled with perfect fluids, in the presence of a positive cosmological constant. As a result, we find that this conjecture has a severe constraint power. Not only does this conjecture rule out those cosmological models disfavored by the anthropic principle, but also it imposes an upper bound of 10^-60 on the cosmological constant for our own universe, which thus provides an alternative macroscopic perspective for understanding the long-standing cosmological constant problem.
The Cosmological Evolution of Radio Sources with CENSORS
NASA Technical Reports Server (NTRS)
Brookes, Mairi; Best, Philip; Peacock, John; Dunlop, James; Rottgering, Huub
2006-01-01
The CENSORS survey, selected from the NVSS, has been followed up using EIS, K-band imaging and spectroscopic observations to produce a radio sample capable of probing the source density in the regime z > 2.5. With a current spectroscopic completeness of 62%, CENSORS has been used in direct modeling of RLF evolution and in V/V_max tests. There is evidence for a shallow decline in the number density of sources in the luminosity range 10^26-10^27 W Hz^-1 at 1.4 GHz.
Roberts, Graham J; McDonald, Fraser; Andiappan, Manoharan; Lucas, Victoria S
2015-11-01
The final stage of dental development of third molars is usually helpful in indicating whether or not a subject is aged over 18 years. A complexity is that the final stage of development is unlimited in its upper border. Investigators usually select an inappropriate upper age limit or censor point for this tooth development stage. The literature was searched for appropriate data sets for dental age estimation that provided the count (n), the mean (x¯), and the standard deviation (sd) for each of the tooth development stages. The Demirjian stages G and H were used for this study. Upper and lower limits of the Stage G and Stage H data were calculated by limiting the data to plus or minus three standard deviations from the mean. The upper border of Stage H was limited by appropriate censoring at the maximum value for Stage G. The maximum age at attainment from published data, for Stage H, ranged from 22.60 years to 34.50 years. These data were explored to demonstrate how censoring provides an estimate of the correct maximum age for the final stage, Stage H, of 21.64 years for UK Caucasians. This study shows that confining the data array of individual tooth development stages to ± 3sd provides a reliable and logical way of censoring the data for tooth development stages with a Normal distribution of data. For Stage H this is inappropriate as it is unbounded in its upper limit; the use of a censored data array for Stage H based on percentile values is appropriate. This increases the reliability of using third molar Stage H alone to determine whether or not an individual is over 18 years old. For Stage H, individual ancestral groups should be censored using the same technique. Copyright © 2015 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
Buettner, Florian; Moignard, Victoria; Göttgens, Berthold; Theis, Fabian J
2014-07-01
High-throughput single-cell quantitative real-time polymerase chain reaction (qPCR) is a promising technique allowing for new insights in complex cellular processes. However, the PCR reaction can be detected only up to a certain detection limit, whereas failed reactions could be due to low or absent expression, and the true expression level is unknown. Because this censoring can occur for high proportions of the data, it is one of the main challenges when dealing with single-cell qPCR data. Principal component analysis (PCA) is an important tool for visualizing the structure of high-dimensional data as well as for identifying subpopulations of cells. However, to date it is not clear how to perform a PCA of censored data. We present a probabilistic approach that accounts for the censoring and evaluate it for two typical datasets containing single-cell qPCR data. We use the Gaussian process latent variable model framework to account for censoring by introducing an appropriate noise model and allowing a different kernel for each dimension. We evaluate this new approach for two typical qPCR datasets (of mouse embryonic stem cells and blood stem/progenitor cells, respectively) by performing linear and non-linear probabilistic PCA. Taking the censoring into account results in a 2D representation of the data, which better reflects its known structure: in both datasets, our new approach results in a better separation of known cell types and is able to reveal subpopulations in one dataset that could not be resolved using standard PCA. The implementation was based on the existing Gaussian process latent variable model toolbox (https://github.com/SheffieldML/GPmat); extensions for noise models and kernels accounting for censoring are available at http://icb.helmholtz-muenchen.de/censgplvm. © The Author 2014. Published by Oxford University Press. All rights reserved.
Analyzing survival curves at a fixed point in time for paired and clustered right-censored data
Su, Pei-Fang; Chi, Yunchan; Lee, Chun-Yi; Shyr, Yu; Liao, Yi-De
2018-01-01
In clinical trials, information about certain time points may be of interest in making decisions about treatment effectiveness. Rather than comparing entire survival curves, researchers can focus on the comparison at fixed time points that may have a clinical utility for patients. For two independent samples of right-censored data, Klein et al. (2007) compared survival probabilities at a fixed time point by studying a number of tests based on some transformations of the Kaplan-Meier estimators of the survival function. However, to compare the survival probabilities at a fixed time point for paired right-censored data or clustered right-censored data, their approach would need to be modified. In this paper, we extend the statistics to accommodate the possible within-paired correlation and within-clustered correlation, respectively. We use simulation studies to present comparative results. Finally, we illustrate the implementation of these methods using two real data sets. PMID:29456280
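For the two-independent-sample setting that this paper generalizes, the fixed-time-point comparison can be sketched directly: estimate S(t0) in each group by Kaplan-Meier, attach Greenwood variances, and compare on a transformed scale (the complementary log-log transformation is one of those studied by Klein et al., 2007). The data below are simulated, ties are ignored, and the paired/clustered corrections that are the point of the paper are not implemented.

```python
import numpy as np
from scipy import stats

def km_at(t0, time, event):
    """Kaplan-Meier estimate of S(t0) and its Greenwood variance (no ties assumed)."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    n = len(time)
    s, var_sum = 1.0, 0.0
    for i, (t, d) in enumerate(zip(time, event)):
        if t > t0:
            break
        if d:                                   # event at time t
            at_risk = n - i
            s *= 1.0 - 1.0 / at_risk
            var_sum += 1.0 / (at_risk * (at_risk - 1.0))
    return s, s ** 2 * var_sum                  # Greenwood formula

rng = np.random.default_rng(5)

def sample(scale, n=150):
    t_event = rng.exponential(scale, n)
    t_cens = rng.exponential(3.0, n)
    return np.minimum(t_event, t_cens), (t_event <= t_cens).astype(int)

t0 = 1.0
s1, v1 = km_at(t0, *sample(1.5))
s2, v2 = km_at(t0, *sample(2.5))

# Compare on the complementary log-log scale, with delta-method variances
phi = lambda s: np.log(-np.log(s))
dvar = lambda s, v: v / (s * np.log(s)) ** 2
z = (phi(s1) - phi(s2)) / np.sqrt(dvar(s1, v1) + dvar(s2, v2))
p = 2 * stats.norm.sf(abs(z))
print(f"S1({t0}) = {s1:.3f}, S2({t0}) = {s2:.3f}, z = {z:.2f}, p = {p:.4f}")
```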
Nonparametric autocovariance estimation from censored time series by Gaussian imputation.
Park, Jung Wook; Genton, Marc G; Ghosh, Sujit K
2009-02-01
One of the most frequently used methods to model the autocovariance function of a second-order stationary time series is to use the parametric framework of autoregressive and moving average models developed by Box and Jenkins. However, such parametric models, though very flexible, may not always be adequate to model autocovariance functions with sharp changes. Furthermore, if the data do not follow the parametric model and are censored at a certain value, the estimation results may not be reliable. We develop a Gaussian imputation method to estimate an autocovariance structure via nonparametric estimation of the autocovariance function in order to address both censoring and incorrect model specification. We demonstrate the effectiveness of the technique in terms of bias and efficiency with simulations under various rates of censoring and underlying models. We describe its application to a time series of silicon concentrations in the Arctic.
Protecting the Innocence of Youth: Moral Sanctity Values Underlie Censorship From Young Children.
Anderson, Rajen A; Masicampo, E J
2017-11-01
Three studies examined the relationship between people's moral values (drawing on moral foundations theory) and their willingness to censor immoral acts from children. Results revealed that diverse moral values did not predict censorship judgments. It was not the case that participants who valued loyalty and authority, respectively, sought to censor depictions of disloyal and disobedient acts. Rather, censorship intentions were predicted by a single moral value: sanctity. The more people valued sanctity, the more willing they were to censor from children, regardless of the types of violations depicted (impurity, disloyalty, disobedience, etc.). Furthermore, people who valued sanctity objected to indecent exposure only to apparently innocent and pure children, those who were relatively young and who had not been previously exposed to immoral acts. These data suggest that sanctity, purity, and the preservation of innocence underlie intentions to censor from young children.
Axion monodromy and the weak gravity conjecture
NASA Astrophysics Data System (ADS)
Hebecker, Arthur; Rompineve, Fabrizio; Westphal, Alexander
2016-04-01
Axions with broken discrete shift symmetry (axion monodromy) have recently played a central role both in the discussion of inflation and the `relaxion' approach to the hierarchy problem. We suggest a very minimalist way to constrain such models by the weak gravity conjecture for domain walls: while the electric side of the conjecture is always satisfied if the cosine oscillations of the axion potential are sufficiently small, the magnetic side imposes a cutoff, Λ^3 ∼ m f M_pl, independent of the height of these `wiggles'. We compare our approach with the recent related proposal by Ibanez, Montero, Uranga and Valenzuela. We also discuss the non-trivial question of which version, if any, of the weak gravity conjecture for domain walls should hold. In particular, we show that string compactifications with branes of different dimensions wrapped on different cycles lead to a `geometric weak gravity conjecture' relating volumes of cycles, norms of corresponding forms and the volume of the compact space. Imposing this `geometric conjecture', e.g. on the basis of the more widely accepted weak gravity conjecture for particles, provides at least some support for the (electric and magnetic) conjecture for domain walls.
Gilliom, Robert J.; Helsel, Dennis R.
1986-01-01
A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification.
Estimation of distributional parameters for censored trace-level water-quality data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilliom, R.J.; Helsel, D.R.
1984-01-01
A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water-sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best-performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least-squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification. 6 figs., 6 tabs.
Lucijanic, Marko; Petrovecki, Mladen
2012-01-01
Analyzing events over time is often complicated by incomplete, or censored, observations. Special non-parametric statistical methods were developed to overcome difficulties in summarizing and comparing censored data. The life-table (actuarial) method and the Kaplan-Meier method are described, with an explanation of survival curves. For didactic purposes, the authors prepared a workbook based on the most widely used method, the Kaplan-Meier method. It should help the reader understand how the Kaplan-Meier method is conceptualized and how it can be used to obtain the statistics and survival curves needed to completely describe a sample of patients. The log-rank test and the hazard ratio are also discussed.
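In the spirit of the authors' workbook, here is a small worked example using the Python lifelines library rather than a spreadsheet; the follow-up times and event indicators are invented.

```python
import pandas as pd
from lifelines import KaplanMeierFitter

# Invented follow-up times in months; event = 1 is death/relapse, event = 0 is censored
df = pd.DataFrame({
    "months": [2, 3, 3, 5, 8, 8, 12, 14, 17, 21, 24, 24],
    "event":  [1, 1, 0, 1, 1, 0,  1,  0,  1,  0,  0,  1],
})

kmf = KaplanMeierFitter()
kmf.fit(df["months"], event_observed=df["event"], label="toy cohort")

print(kmf.survival_function_)          # step-function estimate of S(t)
print("median survival:", kmf.median_survival_time_)
# kmf.plot_survival_function()         # draws the Kaplan-Meier curve with confidence bands
```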
Falcaro, Milena; Pickles, Andrew
2007-02-10
We focus on the analysis of multivariate survival times with highly structured interdependency and subject to interval censoring. Such data are common in developmental genetics and genetic epidemiology. We propose a flexible mixed probit model that deals naturally with complex but uninformative censoring. The recorded ages of onset are treated as possibly censored ordinal outcomes with the interval censoring mechanism seen as arising from a coarsened measurement of a continuous variable observed as falling between subject-specific thresholds. This bypasses the requirement for the failure times to be observed as falling into non-overlapping intervals. The assumption of a normal age-of-onset distribution of the standard probit model is relaxed by embedding within it a multivariate Box-Cox transformation whose parameters are jointly estimated with the other parameters of the model. Complex decompositions of the underlying multivariate normal covariance matrix of the transformed ages of onset become possible. The new methodology is here applied to a multivariate study of the ages of first use of tobacco and first consumption of alcohol without parental permission in twins. The proposed model allows estimation of the genetic and environmental effects that are shared by both of these risk behaviours as well as those that are specific. 2006 John Wiley & Sons, Ltd.
Comparison of dynamic treatment regimes via inverse probability weighting.
Hernán, Miguel A; Lanoy, Emilie; Costagliola, Dominique; Robins, James M
2006-03-01
Appropriate analysis of observational data is our best chance to obtain answers to many questions that involve dynamic treatment regimes. This paper describes a simple method to compare dynamic treatment regimes by artificially censoring subjects and then using inverse probability weighting (IPW) to adjust for any selection bias introduced by the artificial censoring. The basic strategy can be summarized in four steps: 1) define two regimes of interest, 2) artificially censor individuals when they stop following one of the regimes of interest, 3) estimate inverse probability weights to adjust for the potential selection bias introduced by censoring in the previous step, 4) compare the survival of the uncensored individuals under each regime of interest by fitting an inverse probability weighted Cox proportional hazards model with the dichotomous regime indicator and the baseline confounders as covariates. In the absence of model misspecification, the method is valid provided data are available on all time-varying and baseline joint predictors of survival and regime discontinuation. We present an application of the method to compare the AIDS-free survival under two dynamic treatment regimes in a large prospective study of HIV-infected patients. The paper concludes by discussing the relative advantages and disadvantages of censoring/IPW versus g-estimation of nested structural models to compare dynamic regimes.
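The four steps can be sketched in code, though only in a deliberately simplified, single-decision-point form: define the regimes, artificially censor those who deviate, estimate inverse probability of (not) being artificially censored, and fit a weighted Cox model among the uncensored. The real method uses time-varying covariates and time-updated weights, which are omitted here; all variable names and the simulated data are hypothetical.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

rng = np.random.default_rng(6)
n = 4000

# Simulated cohort: L is a baseline confounder, `regime` the strategy of interest a
# subject starts on, and `deviated` flags subjects who stop following that regime.
L = rng.normal(size=n)
regime = rng.binomial(1, 0.5, n)
deviated = rng.binomial(1, 1 / (1 + np.exp(-L)), n)            # deviation depends on L
T = rng.exponential(np.exp(0.3 * L - 0.5 * regime), n)         # event time
event = np.ones(n, dtype=int)                                  # no administrative censoring in this toy

# Step 2: artificially censor deviators (here, crudely, from time zero)
keep = deviated == 0

# Step 3: inverse probability of remaining uncensored, given the confounder L
p_uncens = LogisticRegression().fit(L[:, None], 1 - deviated).predict_proba(L[:, None])[:, 1]
w = 1.0 / p_uncens

# Step 4: weighted Cox model among the uncensored, with the regime indicator and L
df = pd.DataFrame({"T": T, "E": event, "regime": regime, "L": L, "w": w})[keep]
cph = CoxPHFitter()
cph.fit(df, duration_col="T", event_col="E", weights_col="w", robust=True)
print(cph.summary[["coef", "exp(coef)", "se(coef)"]])
```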
Censoring distances based on labeled cortical distance maps in cortical morphometry.
Ceyhan, Elvan; Nishino, Tomoyuki; Alexopolous, Dimitrios; Todd, Richard D; Botteron, Kelly N; Miller, Michael I; Ratnanather, J Tilak
2013-01-01
It has been demonstrated that shape differences in cortical structures may be manifested in neuropsychiatric disorders. Such morphometric differences can be measured by labeled cortical distance mapping (LCDM), which characterizes the morphometry of the laminar cortical mantle of cortical structures. LCDM data consist of signed/labeled distances of gray matter (GM) voxels with respect to the GM/white matter (WM) surface. Volumes and other summary measures for each subject and the pooled distances can help determine the morphometric differences between diagnostic groups; however, they do not reveal all the morphometric information contained in LCDM distances. To extract more information from LCDM data, censoring of the pooled distances is introduced for each diagnostic group, where the range of LCDM distances is partitioned at a fixed increment size; at each censoring step, the distances not exceeding the censoring distance are kept. Censored LCDM distances inherit the advantages of the pooled distances but also provide information about the location of morphometric differences which cannot be obtained from the pooled distances. However, at each step, the censored distances aggregate, which might confound the results. The influence of data aggregation is investigated with an extensive Monte Carlo simulation analysis and it is demonstrated that this influence is negligible. As an illustrative example, the GM of ventral medial prefrontal cortices (VMPFCs) of subjects with major depressive disorder (MDD), subjects at high risk (HR) of MDD, and healthy control (Ctrl) subjects is used. A significant reduction in laminar thickness of the VMPFC in MDD and HR subjects is observed compared to Ctrl subjects. Moreover, the GM LCDM distances (i.e., locations with respect to the GM/WM surface) at which these differences start to occur are determined. The methodology is also applicable to LCDM-based morphometric measures of other cortical structures affected by disease.
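To illustrate the censoring idea only (not the paper's full LCDM pipeline), the sketch below pools hypothetical, normally distributed distances for two groups, censors them at a fixed increment, and tests the groups at each censoring distance; the distributions, the increment, and the choice of test are assumptions made here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical pooled LCDM distances (mm) for two diagnostic groups
ctrl = rng.normal(loc=1.6, scale=1.0, size=5000)
mdd = rng.normal(loc=1.4, scale=1.0, size=5000)     # slightly thinner laminar mantle

lo, hi, step = -2.0, 5.0, 0.5                       # fixed censoring increment
for c in np.arange(lo + step, hi + step, step):
    a = ctrl[ctrl <= c]                             # keep distances not exceeding c
    b = mdd[mdd <= c]
    if min(len(a), len(b)) < 10:
        continue
    _, p = stats.mannwhitneyu(a, b, alternative="two-sided")
    print(f"censoring distance {c:4.1f} mm: n = ({len(a)}, {len(b)}), p = {p:.3g}")
```

The censoring distance at which the group difference first becomes significant indicates roughly where, relative to the GM/WM surface, the morphometric difference begins.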
NASA Astrophysics Data System (ADS)
Bernardara, M.; Tabuada, G.
2016-06-01
Conjectures of Beilinson-Bloch type predict that the low-degree rational Chow groups of intersections of quadrics are one-dimensional. This conjecture was proved by Otwinowska in [20]. By making use of homological projective duality and the recent theory of (Jacobians of) non-commutative motives, we give an alternative proof of this conjecture in the case of a complete intersection of either two quadrics or three odd-dimensional quadrics. Moreover, we prove that in these cases the unique non-trivial algebraic Jacobian is the middle one. As an application, we make use of Vial's work [26], [27] to describe the rational Chow motives of these complete intersections and show that smooth fibrations into such complete intersections over bases S of small dimension satisfy Murre's conjecture (when dim(S) ≤ 1), Grothendieck's standard conjecture of Lefschetz type (when dim(S) ≤ 2), and Hodge's conjecture (when dim(S) ≤ 3).
Scanning the parameter space of collapsing rotating thin shells
NASA Astrophysics Data System (ADS)
Rocha, Jorge V.; Santarelli, Raphael
2018-06-01
We present results of a comprehensive study of collapsing and bouncing thin shells with rotation, framing it in the context of the weak cosmic censorship conjecture. The analysis is based on a formalism developed specifically for higher odd dimensions that is able to describe the dynamics of collapsing rotating shells exactly. We analyse and classify a plethora of shell trajectories in asymptotically flat spacetimes. The parameters varied include the shell’s mass and angular momentum, its radial velocity at infinity, the (linear) equation-of-state parameter and the spacetime dimensionality. We find that plunges of rotating shells into black holes never produce naked singularities, as long as the matter shell obeys the weak energy condition, and so respect cosmic censorship. This applies to collapses of dust shells starting from rest or with a finite velocity at infinity. Not even shells with a negative isotropic pressure component (i.e. tension) lead to the formation of naked singularities, as long as the weak energy condition is satisfied. Endowing the shells with a positive isotropic pressure component allows for the existence of bouncing trajectories satisfying the dominant energy condition and fully contained outside rotating black holes. Otherwise, any turning point always occurs inside the horizon. These results are based on strong numerical evidence from scans of numerous sections in the large parameter space available to these collapsing shells. The generalisation of the radial equation of motion to a polytropic equation of state for the matter shell is also included in an appendix.
Out of the white hole: a holographic origin for the Big Bang
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pourhasan, Razieh; Afshordi, Niayesh; Mann, Robert B., E-mail: rpourhasan@perimeterinstitute.ca, E-mail: nafshordi@pitp.ca, E-mail: rbmann@uwaterloo.ca
While most of the singularities of General Relativity are expected to be safely hidden behind event horizons by the cosmic censorship conjecture, we happen to live in the causal future of the classical Big Bang singularity, whose resolution constitutes the active field of early universe cosmology. Could the Big Bang also be hidden behind a causal horizon, making us immune to the decadent impacts of a naked singularity? We present a braneworld description of cosmology with both 4D induced and 5D bulk gravity (otherwise known as the Dvali-Gabadadze-Porrati, or DGP, model), which exhibits this feature: the universe emerges as a spherical 3-brane out of the formation of a 5D Schwarzschild black hole. In particular, we show that a pressure singularity of the holographic fluid, discovered earlier, happens inside the white hole horizon, and thus need not be real or imply any pathology. Furthermore, we outline a novel mechanism through which any thermal atmosphere for the brane, with comoving temperature of ∼20% of the 5D Planck mass, can induce scale-invariant primordial curvature perturbations on the brane, circumventing the need for a separate process (such as cosmic inflation) to explain current cosmological observations. Finally, we note that the 5D space-time is asymptotically flat, and thus potentially allows an S-matrix or (after minor modifications) an AdS/CFT description of the cosmological Big Bang.
Topology and incompleteness for 2+1-dimensional cosmological spacetimes
NASA Astrophysics Data System (ADS)
Fajman, David
2017-06-01
We study the long-time behavior of the Einstein flow coupled to matter on 2-dimensional surfaces. We consider massless matter models such as collisionless matter composed of massless particles, massless scalar fields and radiation fluids and show that the maximal globally hyperbolic development of homogeneous and isotropic initial data on the 2-sphere is geodesically incomplete in both time directions, i.e. the spacetime recollapses. This behavior also holds for open sets of initial data. In particular, we construct classes of recollapsing 2+1-dimensional spacetimes with spherical spatial topology which provide evidence for a closed universe recollapse conjecture for massless matter models in 2+1 dimensions. Furthermore, we construct solutions with toroidal and higher genus topology for the massless matter fields, which in both cases are future complete. The spacetimes with toroidal topology are 2+1-dimensional analogues of the Einstein-de Sitter model. In addition, we point out a general relation between the energy-momentum tensor and the Kretschmann scalar in 2+1 dimensions and use it to infer strong cosmic censorship for all these models. In view of this relation, we also recall corresponding models containing massive particles, constructed in a previous work, and determine the nature of their initial singularities. We conclude that the global structure of non-vacuum cosmological spacetimes in 2+1 dimensions is determined by the mass of particles and—in the homogeneous and isotropic setting studied here—verifies strong cosmic censorship.
Censorship and Junk Food Journalism.
ERIC Educational Resources Information Center
Jensen, Carl
1984-01-01
Discusses the journalistic phenomenon whereby Americans are inundated with the same news, with only names, dates, and locations changing. Highlights include the news explosion, well-documented news, why the "Ten Most Censored Stories" chosen by Project Censored (Sonoma State University, California) are not covered by the major news media, federal policies,…
Jia, Erik; Chen, Tianlu
2018-01-01
Left-censored missing values commonly exist in targeted metabolomics datasets and can be considered missing not at random (MNAR). Improper data processing procedures for missing values will cause adverse impacts on subsequent statistical analyses. However, few imputation methods have been developed and applied to the situation of MNAR in the field of metabolomics. Thus, a practical left-censored missing value imputation method is urgently needed. We developed an iterative Gibbs-sampler-based left-censored missing value imputation approach (GSimp). We compared GSimp with three other imputation methods on two real-world targeted metabolomics datasets and one simulation dataset using our imputation evaluation pipeline. The results show that GSimp outperforms the other imputation methods in terms of imputation accuracy, observation distribution, univariate and multivariate analyses, and statistical sensitivity. Additionally, a parallel version of GSimp was developed for dealing with large-scale metabolomics datasets. The R code for GSimp, the evaluation pipeline, a tutorial, and the real-world and simulated targeted metabolomics datasets are available at: https://github.com/WandeRum/GSimp. PMID:29385130
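GSimp itself is an iterative Gibbs-sampler scheme with R code at the repository above; the sketch below is only a crude single-pass stand-in for the underlying idea of drawing left-censored values from a distribution truncated above at the detection limit. The detection limit, the simulated data, and the use of complete-case moments are assumptions made here for illustration.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(2)

def impute_left_censored(x, lod):
    """Replace NaNs (values below the detection limit `lod`) by draws from a normal
    distribution truncated above at `lod`, with moments estimated from the observed
    values (a biased but simple approximation of the iterative scheme)."""
    x = np.asarray(x, dtype=float)
    obs = x[~np.isnan(x)]
    mu, sd = obs.mean(), obs.std(ddof=1)
    b = (lod - mu) / sd                      # upper truncation point in standard units
    draws = truncnorm.rvs(-np.inf, b, loc=mu, scale=sd,
                          size=int(np.isnan(x).sum()), random_state=42)
    out = x.copy()
    out[np.isnan(x)] = draws
    return out

# Hypothetical metabolite intensities with a detection limit of 2.0 (MNAR missingness)
lod = 2.0
true = rng.normal(3.0, 1.0, size=200)
observed = np.where(true < lod, np.nan, true)
imputed = impute_left_censored(observed, lod)
print("complete-case mean:", round(float(np.nanmean(observed)), 3))
print("imputed mean      :", round(float(imputed.mean()), 3),
      "   true mean:", round(float(true.mean()), 3))
```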
Sun, Jianguo; Feng, Yanqin; Zhao, Hui
2015-01-01
Interval-censored failure time data occur in many fields including epidemiological and medical studies as well as financial and sociological studies, and many authors have investigated their analysis (Sun, The statistical analysis of interval-censored failure time data, 2006; Zhang, Stat Modeling 9:321-343, 2009). In particular, a number of procedures have been developed for regression analysis of interval-censored data arising from the proportional hazards model (Finkelstein, Biometrics 42:845-854, 1986; Huang, Ann Stat 24:540-568, 1996; Pan, Biometrics 56:199-203, 2000). For most of these procedures, however, one drawback is that they involve estimation of both regression parameters and baseline cumulative hazard function. In this paper, we propose two simple estimation approaches that do not need estimation of the baseline cumulative hazard function. The asymptotic properties of the resulting estimates are given, and an extensive simulation study is conducted and indicates that they work well for practical situations.
NASA Astrophysics Data System (ADS)
Endah, S. N.; Nugraheni, D. M. K.; Adhy, S.; Sutikno
2017-04-01
According to Law No. 32 of 2002 and Indonesian Broadcasting Commission Regulations No. 02/P/KPI/12/2009 and No. 03/P/KPI/12/2009, broadcast programs should not scold with harsh words, or harass, insult, or demean minorities and marginalized groups. However, there are no suitable tools to censor such words automatically. Therefore, research is needed to develop intelligent software systems that censor these words automatically. To perform censoring, the system must be able to recognize the words in question. This research proposes classifying speech into two classes using a Support Vector Machine (SVM): the first class is a set of rude words and the second class is a set of proper words. The speech pitch values are used as the SVM input in developing the system for Indonesian rude swear words. The experimental results show that the SVM performs well for this system.
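A small sketch of the classification step, assuming scikit-learn; the two pitch-derived features and the simulated class distributions are invented stand-ins for the acoustic features a pitch tracker would extract from real speech.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(3)

# Hypothetical pitch features per speech segment: [mean F0 (Hz), F0 variability (Hz)]
# class 1 = rude/swear words, class 0 = proper words
n = 400
rude = rng.normal(loc=[220.0, 35.0], scale=[25.0, 8.0], size=(n, 2))
proper = rng.normal(loc=[180.0, 20.0], scale=[25.0, 8.0], size=(n, 2))
X = np.vstack([rude, proper])
y = np.concatenate([np.ones(n), np.zeros(n)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
print("test accuracy:", round(accuracy_score(y_te, clf.predict(X_te)), 3))
```

A segment classified as rude would then be masked (for example, replaced by a beep) in the broadcast audio.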
Post-processing of multi-model ensemble river discharge forecasts using censored EMOS
NASA Astrophysics Data System (ADS)
Hemri, Stephan; Lisniak, Dmytro; Klein, Bastian
2014-05-01
When forecasting water levels and river discharge, ensemble weather forecasts are used as meteorological input to hydrologic process models. As hydrologic models are imperfect and the input ensembles tend to be biased and underdispersed, the output ensemble forecasts for river runoff typically are biased and underdispersed, too. Thus, statistical post-processing is required in order to achieve calibrated and sharp predictions. Standard post-processing methods such as Ensemble Model Output Statistics (EMOS) that have their origins in meteorological forecasting are now increasingly being used in hydrologic applications. Here we consider two sub-catchments of River Rhine, for which the forecasting system of the Federal Institute of Hydrology (BfG) uses runoff data that are censored below predefined thresholds. To address this methodological challenge, we develop a censored EMOS method that is tailored to such data. The censored EMOS forecast distribution can be understood as a mixture of a point mass at the censoring threshold and a continuous part based on a truncated normal distribution. Parameter estimates of the censored EMOS model are obtained by minimizing the Continuous Ranked Probability Score (CRPS) over the training dataset. Model fitting on Box-Cox transformed data allows us to take account of the positive skewness of river discharge distributions. In order to achieve realistic forecast scenarios over an entire range of lead-times, there is a need for multivariate extensions. To this end, we smooth the marginal parameter estimates over lead-times. In order to obtain realistic scenarios of discharge evolution over time, the marginal distributions have to be linked with each other. To this end, the multivariate dependence structure can either be adopted from the raw ensemble like in Ensemble Copula Coupling (ECC), or be estimated from observations in a training period. The censored EMOS model has been applied to multi-model ensemble forecasts issued on a daily basis over a period of three years. For the two catchments considered, this resulted in well calibrated and sharp forecast distributions over all lead-times from 1 to 114 h. Training observations tended to be better indicators for the dependence structure than the raw ensemble.
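A compact sketch of the censored-EMOS idea: the predictive distribution is a normal distribution left-censored at the runoff recording threshold (a point mass at the threshold plus a continuous part above it), and its coefficients are fit by minimizing the sample CRPS numerically. The data, threshold, link functions, and grid-based CRPS integration are all assumptions; the operational model additionally works on Box-Cox transformed data and smooths parameters across lead-times.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(4)
THRESH = 0.0                    # censoring threshold of the runoff record (hypothetical)

def crps_censored_normal(mu, sigma, y, grid):
    """Sample CRPS of a normal forecast left-censored at THRESH, computed by
    integrating (F(t) - 1{t >= y})^2 on a fixed grid."""
    F = np.where(grid[None, :] < THRESH, 0.0,
                 norm.cdf((grid[None, :] - mu[:, None]) / sigma[:, None]))
    H = (grid[None, :] >= y[:, None]).astype(float)
    return np.sum((F - H) ** 2, axis=1) * (grid[1] - grid[0])

# Hypothetical training data: ensemble mean/variance and (censored) observed runoff
n = 500
ens_mean = rng.normal(1.0, 1.0, n)
ens_var = rng.uniform(0.2, 1.0, n)
obs = np.maximum(rng.normal(0.8 * ens_mean + 0.3, np.sqrt(0.5 + 0.5 * ens_var)), THRESH)
grid = np.linspace(-4.0, 8.0, 400)

def mean_crps(theta):
    a, b, c, d = theta
    mu = a + b * ens_mean
    sigma = np.sqrt(np.exp(c) + np.exp(d) * ens_var)    # keep the variance positive
    return crps_censored_normal(mu, sigma, obs, grid).mean()

res = minimize(mean_crps, x0=np.array([0.0, 1.0, 0.0, 0.0]), method="Nelder-Mead")
print("fitted EMOS coefficients:", np.round(res.x, 3), " mean CRPS:", round(res.fun, 4))
```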
Global conjecturing process in pattern generalization problem
NASA Astrophysics Data System (ADS)
Sutarto; Nusantara, Toto; Subanji; Dwi Hastuti, Intan; Dafik
2018-04-01
This study aims to describe the global conjecturing process based on APOS theory. The subjects were 15 eighth-grade students of a junior high school. The data were collected using a Pattern Generalization Problem (PGP) and interviews. After the students had completed the PGP, they were interviewed about their written work to understand the conjecturing process, and these interviews were videotaped. The results of the study reveal that the global conjecturing process occurs at the action phase, in which subjects build a conjecture by observing and counting the number of squares completely, without distinguishing between black and white squares; finally, at the process phase, the object and scheme were fully realized.
On the degree conjecture for separability of multipartite quantum states
NASA Astrophysics Data System (ADS)
Hassan, Ali Saif M.; Joag, Pramod S.
2008-01-01
We settle the so-called degree conjecture for the separability of multipartite quantum states, which are normalized graph Laplacians, first given by Braunstein et al. [Phys. Rev. A 73, 012320 (2006)]. The conjecture states that a multipartite quantum state is separable if and only if the degree matrix of the graph associated with the state is equal to the degree matrix of the partial transpose of this graph. We call this statement the strong form of the conjecture. In its weak version, the conjecture requires only the necessity, that is, if the state is separable, the corresponding degree matrices match. We prove the strong form of the conjecture for pure multipartite quantum states using the modified tensor product of graphs defined by Hassan and Joag [J. Phys. A 40, 10251 (2007)], as both a necessary and sufficient condition for separability. Based on this proof, we give a polynomial-time algorithm for completely factorizing any pure multipartite quantum state. By polynomial-time algorithm, we mean that the execution time of this algorithm increases as a polynomial in m, where m is the number of parts of the quantum system. We give a counterexample to show that the conjecture fails, in general, even in its weak form, for multipartite mixed states. Finally, we prove this conjecture, in its weak form, for a class of multipartite mixed states, giving only a necessary condition for separability.
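The degree condition itself is easy to compute. The sketch below builds the density matrix associated with a small hypothetical graph on a two-qubit (2 x 2) vertex set, forms the partial transpose of its adjacency matrix, and compares the two degree matrices; the example graph is arbitrary and the check illustrates the condition only, not a separability proof.

```python
import numpy as np

def partial_transpose(M, dims, sys=1):
    """Partial transpose of a matrix on a bipartite space with local dimensions
    dims = (d0, d1), transposing subsystem `sys` (0 or 1)."""
    d0, d1 = dims
    T = M.reshape(d0, d1, d0, d1)
    T = T.transpose(2, 1, 0, 3) if sys == 0 else T.transpose(0, 3, 2, 1)
    return T.reshape(d0 * d1, d0 * d1)

def degree_matrix(A):
    return np.diag(A.sum(axis=1))

# Hypothetical graph on the 4 vertices of a 2x2 system; vertex (i, j) -> index 2*i + j
A = np.zeros((4, 4))
for u, v in [(0, 3), (1, 2)]:          # edges |00>-|11> and |01>-|10>
    A[u, v] = A[v, u] = 1.0

D = degree_matrix(A)
L = D - A                              # combinatorial graph Laplacian
rho = L / np.trace(L)                  # normalized Laplacian as a density matrix

A_pt = partial_transpose(A, (2, 2))    # adjacency matrix of the partially transposed graph
print("degree condition D == D':", np.allclose(D, degree_matrix(A_pt)))
```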
Kowalski, Amanda
2016-01-02
Efforts to control medical care costs depend critically on how individuals respond to prices. I estimate the price elasticity of expenditure on medical care using a censored quantile instrumental variable (CQIV) estimator. CQIV allows estimates to vary across the conditional expenditure distribution, relaxes traditional censored model assumptions, and addresses endogeneity with an instrumental variable. My instrumental variable strategy uses a family member's injury to induce variation in an individual's own price. Across the conditional deciles of the expenditure distribution, I find elasticities that vary from -0.76 to -1.49, which are an order of magnitude larger than previous estimates.
Linear regression analysis of survival data with missing censoring indicators.
Wang, Qihua; Dinse, Gregg E
2011-04-01
Linear regression analysis has been studied extensively in a random censorship setting, but typically all of the censoring indicators are assumed to be observed. In this paper, we develop synthetic data methods for estimating regression parameters in a linear model when some censoring indicators are missing. We define estimators based on regression calibration, imputation, and inverse probability weighting techniques, and we prove all three estimators are asymptotically normal. The finite-sample performance of each estimator is evaluated via simulation. We illustrate our methods by assessing the effects of sex and age on the time to non-ambulatory progression for patients in a brain cancer clinical trial.
Quantile Regression with Censored Data
ERIC Educational Resources Information Center
Lin, Guixian
2009-01-01
The Cox proportional hazards model and the accelerated failure time model are frequently used in survival data analysis. They are powerful, yet have limitation due to their model assumptions. Quantile regression offers a semiparametric approach to model data with possible heterogeneity. It is particularly powerful for censored responses, where the…
Toward improved analysis of concentration data: Embracing nondetects.
Shoari, Niloofar; Dubé, Jean-Sébastien
2018-03-01
Various statistical tests on concentration data serve to support decision-making regarding characterization and monitoring of contaminated media, assessing exposure to a chemical, and quantifying the associated risks. However, the routine statistical protocols cannot be directly applied because of challenges arising from nondetects or left-censored observations, which are concentration measurements below the detection limit of measuring instruments. Despite the existence of techniques based on survival analysis that can adjust for nondetects, these are seldom taken into account properly. A comprehensive review of the literature showed that managing policies regarding analysis of censored data do not always agree and that guidance from regulatory agencies may be outdated. Therefore, researchers and practitioners commonly resort to the most convenient way of tackling the censored data problem by substituting nondetects with arbitrary constants prior to data analysis, although this is generally regarded as a bias-prone approach. Hoping to improve the interpretation of concentration data, the present article aims to familiarize researchers in different disciplines with the significance of left-censored observations and provides theoretical and computational recommendations (under both frequentist and Bayesian frameworks) for adequate analysis of censored data. In particular, the present article synthesizes key findings from previous research with respect to 3 noteworthy aspects of inferential statistics: estimation of descriptive statistics, hypothesis testing, and regression analysis. Environ Toxicol Chem 2018;37:643-656. © 2017 SETAC.
Han, Cong; Kronmal, Richard
2004-12-15
Box-Cox transformation is investigated for regression models for left-censored data. Examples are provided using coronary calcification data from the Multi-Ethnic Study of Atherosclerosis and pharmacokinetic data of a nicotine nasal spray. Copyright 2004 John Wiley & Sons, Ltd.
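A sketch of the joint estimation idea for a left-censored regression on a Box-Cox transformed response: the Box-Cox parameter, the regression coefficients, and the error scale are maximized together in a Tobit-type likelihood in which nondetects contribute a normal CDF term. The simulated data, detection limit, and optimizer settings are assumptions made here, not the paper's calcification or pharmacokinetic analyses.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(5)

def boxcox(y, lam):
    return np.log(y) if abs(lam) < 1e-8 else (y ** lam - 1.0) / lam

def negloglik(theta, y, X, lod, censored):
    """Left-censored (Tobit-type) Gaussian regression on a Box-Cox transformed
    response; the Box-Cox parameter and regression parameters are estimated jointly."""
    lam, log_sigma, beta = theta[0], theta[1], theta[2:]
    sigma = np.exp(log_sigma)
    mu = X @ beta
    # observed values: normal density on the transformed scale plus the Box-Cox Jacobian
    z_obs = (boxcox(y, lam) - mu) / sigma
    ll_obs = norm.logpdf(z_obs) - np.log(sigma) + (lam - 1.0) * np.log(y)
    # nondetects: P(Y < LOD) = Phi((g(LOD) - mu) / sigma)
    z_cens = (boxcox(lod, lam) - mu) / sigma
    ll_cens = norm.logcdf(z_cens)
    return -np.sum(np.where(censored, ll_cens, ll_obs))

# Hypothetical positive, skewed outcome left-censored at a detection limit
n, lod = 300, 1.0
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y_latent = np.exp(0.5 + 0.8 * X[:, 1] + rng.normal(0.0, 0.6, n))
censored = y_latent < lod
y = np.where(censored, lod, y_latent)            # censored values recorded at the LOD

theta0 = np.array([0.5, 0.0, 0.0, 0.0])
res = minimize(negloglik, theta0, args=(y, X, lod, censored), method="Nelder-Mead")
print("lambda:", round(res.x[0], 3), " sigma:", round(float(np.exp(res.x[1])), 3),
      " beta:", np.round(res.x[2:], 3))
```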
Beware! Here There Be Beasties: Responding to Fundamentalist Censors.
ERIC Educational Resources Information Center
Traw, Rick
1996-01-01
Describes a heated censorship controversy experienced in 1990 in the Sioux Falls, South Dakota, school district brought by fundamentalist censors against the "Impressions" reading series. Explores specific categories of complaints, such as the supernatural, folktales, and myths. Notes the influence of religion and racism. Includes an addendum of…
Multimedia data from two probability-based exposure studies were investigated in terms of how censoring of non-detects affected estimation of population parameters and associations. Appropriate methods for handling censored below-detection-limit (BDL) values in this context were...
James Moffett's Mistake: Ignoring the Rational Capacities of the Other
ERIC Educational Resources Information Center
Donehower, Kim
2013-01-01
Using Alasdair MacIntyre's theory of tradition-bound rationalities, this essay analyses James Moffett's depiction of the censors who opposed his "Interactions" textbook series in the Kanawha County, West Virginia, schools. Many reviewers have found Moffett's analysis of the censors in "Storm in the Mountains" even-handed and…
Anatomy of the First Amendment and a Look at Its Interpretation.
ERIC Educational Resources Information Center
Otto, Jean H.
1990-01-01
Dissects features of the First Amendment, concentrating on freedom of religion, speech, and press clauses. Highlights the Hazelwood School District v. Kuhlmeier case and its reverberations. Argues that, when school officials censor, students learn that government may censor. Suggests censorship is counterproductive to schools' mission to promote…
Averages of ratios of the Riemann zeta-function and correlations of divisor sums
NASA Astrophysics Data System (ADS)
Conrey, Brian; Keating, Jonathan P.
2017-10-01
Nonlinearity has published articles containing a significant number-theoretic component since the journal was first established. We examine one thread, concerning the statistics of the zeros of the Riemann zeta function. We extend this by establishing a connection between the ratios conjecture for the Riemann zeta-function and a conjecture concerning correlations of convolutions of Möbius and divisor functions. Specifically, we prove that the ratios conjecture and an arithmetic correlations conjecture imply the same result. This provides new support for the ratios conjecture, which previously had been motivated by analogy with formulae in random matrix theory and by a heuristic recipe. Our main theorem generalises a recent calculation pertaining to the special case of two-over-two ratios.
On the degree conjecture for separability of multipartite quantum states
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hassan, Ali Saif M.; Joag, Pramod S.
2008-01-15
We settle the so-called degree conjecture for the separability of multipartite quantum states, which are normalized graph Laplacians, first given by Braunstein et al. [Phys. Rev. A 73, 012320 (2006)]. The conjecture states that a multipartite quantum state is separable if and only if the degree matrix of the graph associated with the state is equal to the degree matrix of the partial transpose of this graph. We call this statement the strong form of the conjecture. In its weak version, the conjecture requires only the necessity, that is, if the state is separable, the corresponding degree matrices match. We prove the strong form of the conjecture for pure multipartite quantum states using the modified tensor product of graphs defined by Hassan and Joag [J. Phys. A 40, 10251 (2007)], as both a necessary and sufficient condition for separability. Based on this proof, we give a polynomial-time algorithm for completely factorizing any pure multipartite quantum state. By polynomial-time algorithm, we mean that the execution time of this algorithm increases as a polynomial in m, where m is the number of parts of the quantum system. We give a counterexample to show that the conjecture fails, in general, even in its weak form, for multipartite mixed states. Finally, we prove this conjecture, in its weak form, for a class of multipartite mixed states, giving only a necessary condition for separability.
Exploring Duopoly Markets with Conjectural Variations
ERIC Educational Resources Information Center
Julien, Ludovic A.; Musy, Olivier; Saïdi, Aurélien W.
2014-01-01
In this article, the authors investigate competitive firm behaviors in a two-firm environment assuming linear cost and demand functions. By introducing conjectural variations, they capture the different market structures as specific configurations of a more general model. Conjectural variations are based on the assumption that each firm believes…
Kowalski, Amanda
2015-01-01
Efforts to control medical care costs depend critically on how individuals respond to prices. I estimate the price elasticity of expenditure on medical care using a censored quantile instrumental variable (CQIV) estimator. CQIV allows estimates to vary across the conditional expenditure distribution, relaxes traditional censored model assumptions, and addresses endogeneity with an instrumental variable. My instrumental variable strategy uses a family member’s injury to induce variation in an individual’s own price. Across the conditional deciles of the expenditure distribution, I find elasticities that vary from −0.76 to −1.49, which are an order of magnitude larger than previous estimates. PMID:26977117
The Process of Student Cognition in Constructing Mathematical Conjecture
ERIC Educational Resources Information Center
Astawa, I. Wayan Puja; Budayasa, I. Ketut; Juniati, Dwi
2018-01-01
This research aims to describe the process of student cognition in constructing mathematical conjecture. Many researchers have studied this process but without giving a detailed explanation of how students understand the information to construct a mathematical conjecture. The researchers focus their analysis on how to construct and prove the…
Finding Conjectures Using Geometer's Sketchpad
ERIC Educational Resources Information Center
Fallstrom, Scott; Walter, Marion
2011-01-01
Conjectures, theorems, and problems in print often appear to come out of nowhere. Scott Fallstrom and Marion Walter describe how their thinking and conjectures evolved; they try to show how collaboration helped expand their ideas. By showing the results from working together, they hope readers will encourage collaboration amongst their students.…
How the Mind of a Censor Works: The Psychology of Censorship.
ERIC Educational Resources Information Center
Fine, Sara
1996-01-01
Explores censorship and examines it as a human dynamic. Discusses the authoritarian personality, the need to control, traditionalism and the need to belong to a group, the influence of family, denial, and authoritarian women. Describes the importance of listening to "the Censor" in order to encourage dialogue and how to use effective…
Teachers Making Decisions When We Know the Censors Are Watching.
ERIC Educational Resources Information Center
Napier, Minta
Attempts to suppress and even censor various texts used by English teachers often are led by members of fundamentalist Christian groups. These activists charge educators with depreciating Christian moral values and instigating a religion of "secular humanism" in the schools. Various examples of recent legal cases show how prominent the…
"Tropic of Cancer" and the Censors: A Case Study and Bibliographic Guide to the Literature.
ERIC Educational Resources Information Center
Kincaid, Larry; Koger, Grove
1997-01-01
Traces the history of Henry Miller's novel "Tropic of Cancer"--censored in England and America for being too obscene--from its inception in 1932 to its vindication by the United States judicial system 30 years later. Also includes an annotated bibliography of related literature. (AEF)
The purpose of this SOP is to describe the procedures undertaken to treat censored data which are below detection limits. This SOP uses data that have been properly coded and certified with appropriate QA/QC procedures by the University of Arizona NHEXAS and Battelle Laboratorie...
White Racism/Black Signs: Censorship and Images of Race Relations.
ERIC Educational Resources Information Center
Patton, Cindy
1995-01-01
Discusses the simultaneous establishment of legal rights to censor film and proscriptions on particular racial representations. Describes several changes in the Hays Code that demonstrate a change in the censor's theory of the image. Suggests that these changes substituted the censorship of race-related images with a new prohibition on racial…
Obscenity, Profanity and the High School Press.
ERIC Educational Resources Information Center
Hansen, Kent A.
1979-01-01
School officials cannot censor or punish profanity and vulgarity in student publications without a showing that such action is essential for the maintenance of order and discipline or protects the rights of others or that the censored material satisfies the legal tests of obscenity. Available from Willamette University College of Law, Salem, OR…
A Monte Carlo study of Weibull reliability analysis for space shuttle main engine components
NASA Technical Reports Server (NTRS)
Abernethy, K.
1986-01-01
The incorporation of a number of additional capabilities into an existing Weibull analysis computer program is discussed, along with the results of a Monte Carlo computer simulation study evaluating the usefulness of Weibull methods for samples with a very small number of failures and extensive censoring. Since the censoring mechanism inherent in the Space Shuttle Main Engine (SSME) data is hard to analyze, it was decided to use a random censoring model, generating censoring times from a uniform probability distribution. Some of the statistical techniques and computer programs used in the SSME Weibull analysis are described. The documented methods were supplemented by adding computer calculations of approximate (iteratively computed) confidence intervals for several parameters of interest. These calculations are based on a likelihood ratio statistic which is asymptotically a chi-squared statistic with one degree of freedom. The assumptions built into the computer simulations are described, as are the simulation program and the techniques used in it. Simulation results are tabulated for various combinations of Weibull shape parameters and numbers of failures in the samples.
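A laptop-scale sketch in the same spirit: Weibull failure times with uniform random censoring, maximum-likelihood fitting of the shape and scale, repeated over Monte Carlo replications. The true parameters, censoring range, sample size, and number of replications are arbitrary choices, and the confidence-interval machinery of the original program is not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)

def weibull_negloglik(theta, t, event):
    """Negative log-likelihood of right-censored Weibull data.
    theta = (log shape, log scale); event = 1 for failures, 0 for censored units."""
    k, lam = np.exp(theta)
    z = t / lam
    log_f = np.log(k / lam) + (k - 1.0) * np.log(z) - z ** k   # density for failures
    log_S = -(z ** k)                                          # survival for censored units
    return -np.sum(np.where(event == 1, log_f, log_S))

def simulate_once(n, shape, scale):
    t_fail = scale * rng.weibull(shape, n)
    t_cens = rng.uniform(0.0, 2.0 * scale, n)       # random (uniform) censoring model
    t = np.minimum(t_fail, t_cens)
    event = (t_fail <= t_cens).astype(int)
    res = minimize(weibull_negloglik, x0=np.log([1.0, 1.0]),
                   args=(t, event), method="Nelder-Mead")
    return np.exp(res.x)

# Small Monte Carlo study: few failures per sample and heavy censoring
shape_true, scale_true, n, reps = 2.0, 1.0, 15, 200
est = np.array([simulate_once(n, shape_true, scale_true) for _ in range(reps)])
print("mean estimated shape:", round(float(est[:, 0].mean()), 3),
      "  mean estimated scale:", round(float(est[:, 1].mean()), 3))
```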
A flexible cure rate model with dependent censoring and a known cure threshold.
Bernhardt, Paul W
2016-11-10
We propose a flexible cure rate model that accommodates different censoring distributions for the cured and uncured groups and also allows for some individuals to be observed as cured when their survival time exceeds a known threshold. We model the survival times for the uncured group using an accelerated failure time model with errors distributed according to the seminonparametric distribution, potentially truncated at a known threshold. We suggest a straightforward extension of the usual expectation-maximization algorithm approach for obtaining estimates in cure rate models to accommodate the cure threshold and dependent censoring. We additionally suggest a likelihood ratio test for testing for the presence of dependent censoring in the proposed cure rate model. We show through numerical studies that our model has desirable properties and leads to approximately unbiased parameter estimates in a variety of scenarios. To demonstrate how our method performs in practice, we analyze data from a bone marrow transplantation study and a liver transplant study. Copyright © 2016 John Wiley & Sons, Ltd.
MODELING LEFT-TRUNCATED AND RIGHT-CENSORED SURVIVAL DATA WITH LONGITUDINAL COVARIATES
Su, Yu-Ru; Wang, Jane-Ling
2018-01-01
There is a surge in medical follow-up studies that include longitudinal covariates in the modeling of survival data. So far, the focus has been largely on right-censored survival data. We consider survival data that are subject to both left truncation and right censoring. Left truncation is well known to produce a biased sample. The sampling bias issue has been resolved in the literature for the case involving baseline or time-varying covariates that are observable. The problem remains open, however, for the important case where longitudinal covariates are present in survival models. A joint likelihood approach has been shown in the literature to provide an effective way to overcome those difficulties for right-censored data, but this approach faces substantial additional challenges in the presence of left truncation. Here we thus propose an alternative likelihood to overcome these difficulties and show that the regression coefficient in the survival component can be estimated unbiasedly and efficiently. Issues about the bias for the longitudinal component are discussed. The new approach is illustrated numerically through simulations and data from a multi-center AIDS cohort study. PMID:29479122
Anderson, Carl A; McRae, Allan F; Visscher, Peter M
2006-07-01
Standard quantitative trait loci (QTL) mapping techniques commonly assume that the trait is both fully observed and normally distributed. When considering survival or age-at-onset traits these assumptions are often incorrect. Methods have been developed to map QTL for survival traits; however, they are both computationally intensive and not available in standard genome analysis software packages. We propose a grouped linear regression method for the analysis of continuous survival data. Using simulation we compare this method to both the Cox and Weibull proportional hazards models and a standard linear regression method that ignores censoring. The grouped linear regression method is of equivalent power to both the Cox and Weibull proportional hazards methods and is significantly better than the standard linear regression method when censored observations are present. The method is also robust to the proportion of censored individuals and the underlying distribution of the trait. On the basis of linear regression methodology, the grouped linear regression model is computationally simple and fast and can be implemented readily in freely available statistical software.
Cox model with interval-censored covariate in cohort studies.
Ahn, Soohyun; Lim, Johan; Paik, Myunghee Cho; Sacco, Ralph L; Elkind, Mitchell S
2018-05-18
In cohort studies the outcome is often the time to a particular event, and subjects are followed at regular intervals. Periodic visits may also monitor a secondary irreversible event influencing the event of primary interest, and a significant proportion of subjects develop the secondary event over the period of follow-up. The status of the secondary event serves as a time-varying covariate, but is recorded only at the times of the scheduled visits, generating incomplete time-varying covariates. While information on a typical time-varying covariate is missing for the entire follow-up period except at the visit times, the status of the secondary event is unavailable only between visits where the status has changed, and is thus interval-censored. One may view the interval-censored covariate of the secondary event status as a missing time-varying covariate, yet the missingness is partial since partial information is provided throughout the follow-up period. The current practice of using the latest observed status produces biased estimators, and existing missing covariate techniques cannot accommodate the special feature of missingness due to interval censoring. To handle interval-censored covariates in the Cox proportional hazards model, we propose an available-data estimator and a doubly robust-type estimator, as well as the maximum likelihood estimator via the EM algorithm, and present their asymptotic properties. We also present practical approaches that are valid. We demonstrate the proposed methods using our motivating example from the Northern Manhattan Study. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Kärkkäinen, Hanni P; Sillanpää, Mikko J
2013-09-04
Because of the increased availability of genome-wide sets of molecular markers along with reduced cost of genotyping large samples of individuals, genomic estimated breeding values have become an essential resource in plant and animal breeding. Bayesian methods for breeding value estimation have proven to be accurate and efficient; however, the ever-increasing data sets are placing heavy demands on the parameter estimation algorithms. Although a commendable number of fast estimation algorithms are available for Bayesian models of continuous Gaussian traits, there is a shortage for corresponding models of discrete or censored phenotypes. In this work, we consider a threshold approach of binary, ordinal, and censored Gaussian observations for Bayesian multilocus association models and Bayesian genomic best linear unbiased prediction and present a high-speed generalized expectation maximization algorithm for parameter estimation under these models. We demonstrate our method with simulated and real data. Our example analyses suggest that the use of the extra information present in an ordered categorical or censored Gaussian data set, instead of dichotomizing the data into case-control observations, increases the accuracy of genomic breeding values predicted by Bayesian multilocus association models or by Bayesian genomic best linear unbiased prediction. Furthermore, the example analyses indicate that the correct threshold model is more accurate than the directly used Gaussian model with a censored Gaussian data, while with a binary or an ordinal data the superiority of the threshold model could not be confirmed.
Kärkkäinen, Hanni P.; Sillanpää, Mikko J.
2013-01-01
Because of the increased availability of genome-wide sets of molecular markers along with reduced cost of genotyping large samples of individuals, genomic estimated breeding values have become an essential resource in plant and animal breeding. Bayesian methods for breeding value estimation have proven to be accurate and efficient; however, the ever-increasing data sets are placing heavy demands on the parameter estimation algorithms. Although a commendable number of fast estimation algorithms are available for Bayesian models of continuous Gaussian traits, there is a shortage for corresponding models of discrete or censored phenotypes. In this work, we consider a threshold approach of binary, ordinal, and censored Gaussian observations for Bayesian multilocus association models and Bayesian genomic best linear unbiased prediction and present a high-speed generalized expectation maximization algorithm for parameter estimation under these models. We demonstrate our method with simulated and real data. Our example analyses suggest that the use of the extra information present in an ordered categorical or censored Gaussian data set, instead of dichotomizing the data into case-control observations, increases the accuracy of genomic breeding values predicted by Bayesian multilocus association models or by Bayesian genomic best linear unbiased prediction. Furthermore, the example analyses indicate that the correct threshold model is more accurate than the directly used Gaussian model with a censored Gaussian data, while with a binary or an ordinal data the superiority of the threshold model could not be confirmed. PMID:23821618
Wijeysundera, Harindra C; Wang, Xuesong; Tomlinson, George; Ko, Dennis T; Krahn, Murray D
2012-01-01
Objective: The aim of this study was to review statistical techniques for estimating the mean population cost using health care cost data that, because of the inability to achieve complete follow-up until death, are right censored. The target audience is health service researchers without an advanced statistical background. Methods: Data were sourced from longitudinal heart failure costs from Ontario, Canada, and administrative databases were used for estimating costs. The dataset consisted of 43,888 patients, with follow-up periods ranging from 1 to 1538 days (mean 576 days). The study was designed so that mean health care costs over 1080 days of follow-up were calculated using naïve estimators such as full-sample and uncensored case estimators. Reweighted estimators – specifically, the inverse probability weighted estimator – were calculated, as was phase-based costing. Costs were adjusted to 2008 Canadian dollars using the Bank of Canada consumer price index (http://www.bankofcanada.ca/en/cpi.html). Results: Over the restricted follow-up of 1080 days, 32% of patients were censored. The full-sample estimator was found to underestimate mean cost ($30,420) compared with the reweighted estimators ($36,490). The phase-based costing estimate of $37,237 was similar to that of the simple reweighted estimator. Conclusion: The authors recommend against the use of full-sample or uncensored case estimators when censored data are present. In the presence of heavy censoring, phase-based costing is an attractive alternative approach. PMID:22719214
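The contrast between a naive full-sample estimate and a reweighted estimate can be reproduced on simulated data. The sketch below uses a simple inverse-probability-weighted mean, with complete cases weighted by the reverse Kaplan-Meier estimate of the probability of remaining uncensored; the cost process, censoring pattern, and 1080-day horizon are hypothetical, and the estimator is a simplified version of the reweighted estimators discussed above.

```python
import numpy as np

rng = np.random.default_rng(7)

def reverse_km(times, complete):
    """Reverse Kaplan-Meier estimate of K(t) = P(censoring time > t); here the
    'events' are the incompletely observed subjects (complete == 0)."""
    order = np.argsort(times)
    t, d = times[order], complete[order]
    cens_times = np.unique(t[d == 0])
    surv = np.empty(len(cens_times))
    k = 1.0
    for i, u in enumerate(cens_times):
        k *= 1.0 - np.sum((t == u) & (d == 0)) / np.sum(t >= u)
        surv[i] = k
    def K_hat(x):
        idx = np.searchsorted(cens_times, x, side="right") - 1
        return np.where(idx >= 0, surv[np.clip(idx, 0, None)], 1.0)
    return K_hat

# Hypothetical cohort followed over a 1080-day horizon
n, horizon = 2000, 1080.0
death = rng.exponential(900.0, n)
cens = rng.uniform(1.0, 1538.0, n)
time = np.minimum(np.minimum(death, cens), horizon)
complete = ((death <= cens) | (cens >= horizon)).astype(int)   # follow-up complete in window
cost = 30.0 * time + rng.normal(0.0, 2000.0, n)                # cost accrues while alive/observed

K_hat = reverse_km(time, complete)
w = complete / np.maximum(K_hat(time), 1e-12)     # inverse probability of complete follow-up
print("full-sample (naive) mean cost:", round(float(cost.mean()), 1))
print("IPW estimate of mean cost    :", round(float(np.mean(w * cost)), 1))
```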
Groth, Caroline; Banerjee, Sudipto; Ramachandran, Gurumurthy; Stenzel, Mark R; Sandler, Dale P; Blair, Aaron; Engel, Lawrence S; Kwok, Richard K; Stewart, Patricia A
2017-01-01
In April 2010, the Deepwater Horizon oil rig caught fire and exploded, releasing almost 5 million barrels of oil into the Gulf of Mexico over the ensuing 3 months. Thousands of oil spill workers participated in the spill response and clean-up efforts. The GuLF STUDY being conducted by the National Institute of Environmental Health Sciences is an epidemiological study to investigate potential adverse health effects among these oil spill clean-up workers. Many volatile chemicals were released from the oil into the air, including total hydrocarbons (THC), which is a composite of the volatile components of oil including benzene, toluene, ethylbenzene, xylene, and hexane (BTEXH). Our goal is to estimate exposure levels to these toxic chemicals for groups of oil spill workers in the study (hereafter called exposure groups, EGs) with likely comparable exposure distributions. A large number of air measurements were collected, but many EGs are characterized by datasets with a large percentage of censored measurements (below the analytic methods' limits of detection) and/or a limited number of measurements. We use THC for which there was less censoring to develop predictive linear models for specific BTEXH air exposures with higher degrees of censoring. We present a novel Bayesian hierarchical linear model that allows us to predict, for different EGs simultaneously, exposure levels of a second chemical while accounting for censoring in both THC and the chemical of interest. We illustrate the methodology by estimating exposure levels for several EGs on the Development Driller III, a rig vessel charged with drilling one of the relief wells. The model provided credible estimates in this example for geometric means, arithmetic means, variances, correlations, and regression coefficients for each group. This approach should be considered when estimating exposures in situations when multiple chemicals are correlated and have varying degrees of censoring. © The Author 2017. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
Rahman, M Shafiqur; Ambler, Gareth; Choodari-Oskooei, Babak; Omar, Rumana Z
2017-04-18
When developing a prediction model for survival data it is essential to validate its performance in external validation settings using appropriate performance measures. Although a number of such measures have been proposed, there is only limited guidance regarding their use in the context of model validation. This paper reviewed and evaluated a wide range of performance measures to provide some guidelines for their use in practice. An extensive simulation study based on two clinical datasets was conducted to investigate the performance of the measures in external validation settings. Measures were selected from categories that assess the overall performance, discrimination and calibration of a survival prediction model. Some of these have been modified to allow their use with validation data, and a case study is provided to describe how these measures can be estimated in practice. The measures were evaluated with respect to their robustness to censoring and ease of interpretation. All measures are implemented, or are straightforward to implement, in statistical software. Most of the performance measures were reasonably robust to moderate levels of censoring. One exception was Harrell's concordance measure which tended to increase as censoring increased. We recommend that Uno's concordance measure is used to quantify concordance when there are moderate levels of censoring. Alternatively, Gönen and Heller's measure could be considered, especially if censoring is very high, but we suggest that the prediction model is re-calibrated first. We also recommend that Royston's D is routinely reported to assess discrimination since it has an appealing interpretation. The calibration slope is useful for both internal and external validation settings and recommended to report routinely. Our recommendation would be to use any of the predictive accuracy measures and provide the corresponding predictive accuracy curves. In addition, we recommend to investigate the characteristics of the validation data such as the level of censoring and the distribution of the prognostic index derived in the validation setting before choosing the performance measures.
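Harrell's concordance index, whose sensitivity to censoring is noted above, is simple to compute by pair counting. The sketch below uses simulated validation data and a naive O(n^2) implementation that skips tied survival times; Uno's variant would additionally reweight pairs by an estimate of the censoring distribution.

```python
import numpy as np

rng = np.random.default_rng(8)

def harrell_c(time, event, risk):
    """Harrell's concordance index for right-censored data.  A pair is usable if the
    subject with the shorter follow-up time had an event; it is concordant if that
    subject also has the higher predicted risk.  Tied follow-up times are skipped."""
    time, event, risk = map(np.asarray, (time, event, risk))
    conc = ties = usable = 0
    n = len(time)
    for i in range(n):
        for j in range(i + 1, n):
            if time[i] == time[j]:
                continue
            first, second = (i, j) if time[i] < time[j] else (j, i)
            if not event[first]:
                continue                      # earlier time censored: pair not usable
            usable += 1
            if risk[first] > risk[second]:
                conc += 1
            elif risk[first] == risk[second]:
                ties += 1
    return (conc + 0.5 * ties) / usable

# Hypothetical external validation data: prognostic index and censored outcomes
n = 300
pi = rng.normal(size=n)                       # higher prognostic index = higher risk
t_event = rng.exponential(np.exp(-pi))
t_cens = rng.exponential(2.0, n)
time = np.minimum(t_event, t_cens)
event = (t_event <= t_cens).astype(int)
print("Harrell's C:", round(harrell_c(time, event, pi), 3))
```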
Censoring: a new approach for detection limits in total-reflection X-ray fluorescence
NASA Astrophysics Data System (ADS)
Pajek, M.; Kubala-Kukuś, A.; Braziewicz, J.
2004-08-01
It is shown that the detection limits in the total-reflection X-ray fluorescence (TXRF), which restrict quantification of very low concentrations of trace elements in the samples, can be accounted for using the statistical concept of censoring. We demonstrate that the incomplete TXRF measurements containing the so-called "nondetects", i.e. the non-measured concentrations falling below the detection limits and represented by the estimated detection limit values, can be viewed as the left random-censored data, which can be further analyzed using the Kaplan-Meier (KM) method correcting for nondetects. Within this approach, which uses the Kaplan-Meier product-limit estimator to obtain the cumulative distribution function corrected for the nondetects, the mean value and median of the detection limit censored concentrations can be estimated in a non-parametric way. The Monte Carlo simulations performed show that the Kaplan-Meier approach yields highly accurate estimates for the mean and median concentrations, being within a few percent with respect to the simulated, uncensored data. This means that the uncertainties of KM estimated mean value and median are limited in fact only by the number of studied samples and not by the applied correction procedure for nondetects itself. On the other hand, it is observed that, in case when the concentration of a given element is not measured in all the samples, simple approaches to estimate a mean concentration value from the data yield erroneous, systematically biased results. The discussed random-left censoring approach was applied to analyze the TXRF detection-limit-censored concentration measurements of trace elements in biomedical samples. We emphasize that the Kaplan-Meier approach allows one to estimate the mean concentrations being substantially below the mean level of detection limits. Consequently, this approach gives a new access to lower the effective detection limits for TXRF method, which is of prime interest for investigation of metallic impurities on the silicon wafers.
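The left-censoring can be handled with a standard Kaplan-Meier routine by flipping the data so that nondetects become right-censored observations. The sketch below follows that device with hypothetical lognormal concentrations and a single detection limit; the restricted-mean handling of the largest censored value and the simulated data are assumptions made here.

```python
import numpy as np

rng = np.random.default_rng(9)

def km_mean_left_censored(values, detected):
    """Mean of left-censored concentrations via the Kaplan-Meier method.
    Nondetects are reported at their detection limit (detected == 0).  The data
    are flipped (x -> M - x) so left-censoring becomes right-censoring, a KM
    survival curve is built on the flipped scale, and its restricted mean is
    flipped back."""
    values = np.asarray(values, dtype=float)
    d = np.asarray(detected, dtype=int)
    M = values.max() + 1.0
    t = M - values                           # nondetects are now right-censored
    s, prev, mean_flip = 1.0, 0.0, 0.0
    for u in np.unique(t[d == 1]):
        mean_flip += s * (u - prev)          # area under the KM curve so far
        at_risk = np.sum(t >= u)
        events = np.sum((t == u) & (d == 1))
        s *= 1.0 - events / at_risk
        prev = u
    mean_flip += s * (t.max() - prev)        # remaining area up to the largest observation
    return M - mean_flip

# Hypothetical trace-element concentrations with a single detection limit
true = rng.lognormal(mean=0.0, sigma=0.7, size=200)
dl = 0.8
detected = (true >= dl).astype(int)
conc = np.where(detected == 1, true, dl)     # nondetects reported at the detection limit
print("substitute-at-DL mean:", round(float(conc.mean()), 3))
print("KM-corrected mean    :", round(float(km_mean_left_censored(conc, detected)), 3))
print("true mean            :", round(float(true.mean()), 3))
```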
Attallah, Omneya; Karthikesalingam, Alan; Holt, Peter J E; Thompson, Matthew M; Sayers, Rob; Bown, Matthew J; Choke, Eddie C; Ma, Xianghong
2017-08-03
The feature selection (FS) process is essential in the medical area as it reduces the effort and time needed for physicians to measure unnecessary features. Choosing useful variables is a difficult task in the presence of censoring, which is the unique characteristic of survival analysis. Most survival FS methods depend on Cox's proportional hazards model; machine learning techniques (MLT) would be preferred but are not commonly used because of censoring. Techniques that have been proposed to adapt MLT to perform FS with survival data cannot be used with high levels of censoring. The authors' previous publications proposed a technique to deal with the high level of censoring and used existing FS techniques to reduce the dataset dimension. In this paper, a new FS technique is proposed and combined with feature transformation and the previously proposed uncensoring approaches to select a reduced set of features and produce a stable predictive model. Specifically, an FS technique based on artificial neural networks (ANN) is proposed to deal with highly censored Endovascular Aortic Repair (EVAR) survival data. EVAR datasets were collected from 2004 to 2010 from two vascular centers in order to produce a final stable model; they contain almost 91% censored patients. The proposed approach uses a wrapper FS method with an ANN to select a reduced subset of features that predict the risk of EVAR re-intervention after 5 years for patients from two different centers located in the United Kingdom, allowing it to be potentially applied to cross-center predictions. The proposed model is compared with two popular FS techniques, the Akaike and Bayesian information criteria (AIC, BIC), used with Cox's model. The final model outperforms the other methods in distinguishing the high- and low-risk groups, with a concordance index and estimated AUC better than those of Cox's model based on the AIC, BIC, Lasso, and SCAD approaches. These models have p-values lower than 0.05, meaning that patients in different risk groups can be separated significantly and those who would need re-intervention can be correctly predicted. The proposed approach will save the time and effort spent by physicians collecting unnecessary variables. The final reduced model was able to predict the long-term risk of aortic complications after EVAR, and can help clinicians decide patients' future observation plans.
Conjecturing and Generalization Process on The Structural Development
NASA Astrophysics Data System (ADS)
Ni'mah, Khomsatun; Purwanto; Bambang Irawan, Edy; Hidayanto, Erry
2017-06-01
This study aims to describe the conjecturing and generalization processes of structural development in thirty grade-8 middle school children solving pattern problems. The data were processed using qualitative analysis techniques and were obtained through direct observation, documentation, and interviews. The study builds on the research of Mulligan et al. (2012), which resulted in five structural development stages, namely prestructural, emergent, partial, structural, and advanced. The analysis revealed two related phenomena, the conjecturing process and the generalization process. During the conjecturing process, the children appropriately made hypotheses about the pattern problem in two phases, numerically and symbolically, whereas during the generalization process they were able to relate the pattern rule obtained in the conjecturing process to another context.
Responding Intelligently when Would-Be Censors Charge: "That Book Can Make Them...!"
ERIC Educational Resources Information Center
Martinson, David L.
2007-01-01
School administrators and teachers need to recognize that most persons--including would-be censors of school-related media communications--simply do not understand the complexities germane to measuring the impact of the mass media and the specific messages transmitted to broader audiences via a variety of media channels. In particular, what most…
Evolution of complexity following a global quench
NASA Astrophysics Data System (ADS)
Moosa, Mudassir
2018-03-01
The rate of complexification of a quantum state is conjectured to be bounded from above by the average energy of the state. A different conjecture relates the complexity of a holographic CFT state to the on-shell gravitational action of a certain bulk region. We use the 'complexity equals action' conjecture to study the time evolution of the complexity of the CFT state after a global quench. We find that the rate of growth of complexity is not only consistent with the conjectured bound, but it also saturates the bound soon after the system has achieved local equilibrium.
Statistical inferences with jointly type-II censored samples from two Pareto distributions
NASA Astrophysics Data System (ADS)
Abu-Zinadah, Hanaa H.
2017-08-01
In several industrial fields the product comes from more than one production line, which calls for comparative life tests. This requires sampling from the different production lines, and a joint censoring scheme then arises. In this article we consider the Pareto lifetime distribution under a joint type-II censoring scheme. The maximum likelihood estimators (MLE) and the corresponding approximate confidence intervals, as well as the bootstrap confidence intervals of the model parameters, are obtained. Bayesian point estimates and credible intervals of the model parameters are also presented. A lifetime data set is analyzed for illustrative purposes. Monte Carlo results from simulation studies are presented to assess the performance of the proposed method.
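A sketch of the sampling scheme and of maximum-likelihood estimation under it, for the special case where the common Pareto scale is known and set to 1 so the shape MLEs have closed form; the sample sizes, stopping rank, and true shapes are made up, and the confidence-interval and Bayesian machinery of the article is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(10)

def joint_type2_censor(x1, x2, r):
    """Joint type-II censoring: both samples are put on test together and the
    experiment stops at the r-th failure of the pooled sample.  Returns the r
    observed failure times and indicators of which production line they came from."""
    pooled = np.concatenate([x1, x2])
    src = np.concatenate([np.ones_like(x1), np.zeros_like(x2)])   # 1 = line 1
    order = np.argsort(pooled)[:r]
    return pooled[order], src[order]

def pareto_mle_joint(w, z, m, n):
    """Closed-form MLEs of the two Pareto shape parameters (known common scale = 1)
    under the joint type-II censoring likelihood."""
    w_r = w[-1]
    m_r = int(z.sum())
    n_r = len(z) - m_r
    a1 = m_r / (np.sum(np.log(w[z == 1])) + (m - m_r) * np.log(w_r))
    a2 = n_r / (np.sum(np.log(w[z == 0])) + (n - n_r) * np.log(w_r))
    return a1, a2

# Hypothetical comparative life test of two production lines with Pareto lifetimes
m, n, r = 40, 40, 50
alpha1, alpha2 = 2.0, 3.0
x1 = (1.0 - rng.random(m)) ** (-1.0 / alpha1)     # Pareto(scale=1, shape=alpha1)
x2 = (1.0 - rng.random(n)) ** (-1.0 / alpha2)
w, z = joint_type2_censor(x1, x2, r)
print("MLEs (alpha1, alpha2):", np.round(pareto_mle_joint(w, z, m, n), 3))
```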
Modelling of and Conjecturing on a Soccer Ball in a Korean Eighth Grade Mathematics Classroom
ERIC Educational Resources Information Center
Lee, Kyeong-Hwa
2011-01-01
The purpose of this article was to describe the task design and implementation of cultural artefacts in a mathematics lesson based on the integration of modelling and conjecturing perspectives. The conceived process of integrating a soccer ball into mathematics lessons via modelling- and conjecturing-based instruction was first detailed. Next, the…
Checking the Goldbach conjecture up to 4·10^11
NASA Astrophysics Data System (ADS)
Sinisalo, Matti K.
1993-10-01
One of the most studied problems in additive number theory, Goldbach's conjecture, states that every even integer greater than or equal to 4 can be expressed as a sum of two primes. In this paper, the checking of this conjecture up to 4·10^11 on an IBM 3083 mainframe with vector processor is reported.
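The verification strategy can be illustrated at laptop scale. The sketch below sieves primes and, for each even n, looks for a small prime p with n - p also prime; the upper limit of 10^6 (instead of 4·10^11) and the 'smallest prime partner' search order are choices made here, not necessarily those of the reported computation.

```python
def goldbach_check(limit):
    """Check that every even 4 <= n <= limit is a sum of two primes, by sieving and
    then searching, for each n, for the smallest prime p with n - p also prime."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i:limit + 1:i] = bytearray(len(range(i * i, limit + 1, i)))
    primes = [i for i in range(2, limit + 1) if sieve[i]]
    for n in range(4, limit + 1, 2):
        if not any(sieve[n - p] for p in primes if p <= n // 2):
            return n                 # a counterexample (none is expected)
    return None

print("counterexample below 10**6:", goldbach_check(10**6))
```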
A NUMERICAL SIMULATION OF COSMIC RAY MODULATION NEAR THE HELIOPAUSE. II. SOME PHYSICAL INSIGHTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Xi; Feng, Xueshang; Potgieter, Marius S.
Cosmic ray (CR) transport near the heliopause (HP) is studied using a hybrid transport model, with the parameters constrained by observations from the Voyager 1 spacecraft. We simulate the CR radial flux along different directions in the heliosphere. There is no well-defined thin layer between the solar wind region and the interstellar region along the tail and polar directions of the heliosphere. By analyzing the radial flux curve along the direction of Voyager 2, together with its trajectory information, the crossing time of the HP by Voyager 2 is predicted to be in 2017.14. We simulate the CR radial flux for different energy values along the direction of Voyager 1. We find that there is only a modest modulation region of about 10 au wide beyond the HP, so that Voyager 1 observing the Local Interstellar Spectra is justified in numerical modeling. We analyze the heliospheric exit information of pseudo-particles in our stochastic numerical (time-backward) method, conjecturing that they represent the behavior of CR particles, and we find that pseudo-particles that have been traced from the nose region exit in the tail region. This implies that many CR particles diffuse directly from the heliospheric tail region to the nose region near the HP. In addition, when pseudo-particles were traced from the Local Interstellar Medium (LISM), it is found that their exit location (entrance for real particles) from the simulation domain is along the prescribed Interstellar Magnetic Field direction. This indicates that parallel diffusion dominates CR particle transport in the LISM.
ZOMG - I. How the cosmic web inhibits halo growth and generates assembly bias
NASA Astrophysics Data System (ADS)
Borzyszkowski, Mikolaj; Porciani, Cristiano; Romano-Díaz, Emilio; Garaldi, Enrico
2017-07-01
The clustering of dark matter haloes with fixed mass depends on their formation history, an effect known as assembly bias. We use zoom N-body simulations to investigate the origin of this phenomenon. For each halo at redshift z = 0, we determine the time in which the physical volume containing its final mass becomes stable. We consider five examples for which this happens at z ˜ 1.5 and two that do not stabilize by z = 0. The zoom simulations show that early-collapsing haloes do not grow in mass at z = 0 while late-forming ones show a net inflow. The reason is that 'accreting' haloes are located at the nodes of a network of thin filaments feeding them. Conversely, each 'stalled' halo lies within a prominent filament that is thicker than the halo size. Infalling material from the surroundings becomes part of the filament while matter within it recedes from the halo. We conclude that assembly bias originates from quenching halo growth due to tidal forces following the formation of non-linear structures in the cosmic web, as previously conjectured in the literature. Also the internal dynamics of the haloes change: the velocity anisotropy profile is biased towards radial (tangential) orbits in accreting (stalled) haloes. Our findings reveal the cause of the yet unexplained dependence of halo clustering on the anisotropy. Finally, we extend the excursion-set theory to account for these effects. A simple criterion based on the ellipticity of the linear tidal field combined with the spherical-collapse model provides excellent predictions for both classes of haloes.
NASA Astrophysics Data System (ADS)
Natário, José; Queimada, Leonel; Vicente, Rodrigo
2018-04-01
We rederive the equations of motion for relativistic strings, that is, one-dimensional elastic bodies whose internal energy depends only on their stretching, and use them to study circular string loops rotating in the equatorial plane of flat and black hole spacetimes. We start by obtaining the conditions for equilibrium, and find that: (i) if the string’s longitudinal speed of sound does not exceed the speed of light then its radius when rotating in Minkowski’s spacetime is always larger than its radius when at rest; (ii) in Minkowski’s spacetime, equilibria are linearly stable for rotation speeds below a certain threshold, higher than the string’s longitudinal speed of sound, and linearly unstable for some rotation speeds above it; (iii) equilibria are always linearly unstable in Schwarzschild’s spacetime. Moreover, we study interactions of a rotating string loop with a Kerr black hole, namely in the context of the weak cosmic censorship conjecture and the Penrose process. We find that: (i) elastic string loops that satisfy the null energy condition cannot overspin extremal black holes; (ii) elastic string loops that satisfy the dominant energy condition cannot increase the maximum efficiency of the usual particle Penrose process; (iii) if the dominant energy condition (but not the weak energy condition) is violated then the efficiency can be increased. This last result hints at the interesting possibility that the dominant energy condition may underlie the well known upper bounds for the efficiencies of energy extraction processes (including, for example, superradiance).
Analysis of competition performance in dressage and show jumping of Dutch Warmblood horses.
Rovere, G; Ducro, B J; van Arendonk, J A M; Norberg, E; Madsen, P
2016-12-01
Most Warmblood horse studbooks aim to improve the performance in dressage and show jumping. The Dutch Royal Warmblood Studbook (KWPN) includes the highest score achieved in competition by a horse to evaluate its genetic ability for performance. However, the records collected during competition are associated with some aspects that might affect the quality of the genetic evaluation based on these records. These aspects include the influence of rider, censoring and preselection of the data. The aim of this study was to quantify the impact of rider effect, censoring and preselection on the genetic analysis of competition data of dressage and show jumping of KWPN. Different models including rider effect were evaluated. To assess the impact of censoring, genetic parameters were estimated in data sets that differed in the degree of censoring. The effect of preselection on variance components was analysed by defining a binary trait (sport-status) depending on whether the horse has a competition record or not. This trait was included in a bivariate model with the competition trait, using all horses registered by KWPN since 1984. Results showed that performance in competition for dressage and show jumping is a heritable trait (h² ≈ 0.11–0.13) and that it is important to account for the effect of rider in the genetic analysis. Censoring had a small effect on the genetic parameter for highest performance achieved by the horse. A moderate heritability obtained for sport-status indicates that preselection has a genetic basis, but the effect on genetic parameters was relatively small. © 2016 Blackwell Verlag GmbH.
NASA Astrophysics Data System (ADS)
Kubala-Kukuś, A.; Banaś, D.; Braziewicz, J.; Góźdź, S.; Majewska, U.; Pajek, M.
2007-07-01
The total reflection X-ray fluorescence method was applied to study trace element concentrations in human breast malignant and breast benign neoplasm tissues taken from women who were patients of the Holycross Cancer Centre in Kielce (Poland). These investigations were mainly focused on the development of new possibilities for cancer diagnosis and therapy monitoring. This systematic comparative study was based on a relatively large studied population (~100), namely 26 samples of breast malignant and 68 samples of breast benign neoplasm tissue. The concentrations, ranging from a few ppb to 0.1%, were determined for thirteen elements (from P to Pb). The results were carefully analysed to investigate the concentration distributions of trace elements in the studied samples. The measurements of trace element concentrations by total reflection X-ray fluorescence were limited, however, by the detection limit of the method. It was observed that for more than 50% of the elements determined, the concentrations could not be measured in all samples. These incomplete measurements were treated within the statistical concept of left-random censoring, and the Kaplan-Meier estimator was used to estimate the mean and median of the censored concentration distributions. For comparison of concentrations in two populations, the log-rank test was applied, which makes it possible to compare censored total reflection X-ray fluorescence data. The statistically significant differences found are discussed in more detail. It is noted that the described data analysis procedures should be a standard tool for analysing censored concentrations of trace elements measured by X-ray fluorescence methods.
James W. Evans; David W. Green
2007-01-01
Reliability estimates for the resistance distribution of wood product properties may be made from test data where all specimens are broken (full data sets) or by using data sets where information is obtained only from the weaker pieces in the distribution (censored data). Whereas considerable information exists on property estimation from full data sets, much less...
Stanley, T.R.; Newmark, W.D.
2010-01-01
In the East Usambara Mountains in northeast Tanzania, research on the effects of forest fragmentation and disturbance on nest survival in understory birds resulted in the accumulation of 1,002 nest records between 2003 and 2008 for 8 poorly studied species. Because information on the length of the incubation and nestling stages in these species is nonexistent or sparse, our objectives in this study were (1) to estimate the length of the incubation and nestling stage and (2) to compute nest survival using these estimates in combination with calculated daily survival probability. Because our data were interval censored, we developed and applied two new statistical methods to estimate stage length. In the 8 species studied, the incubation stage lasted 9.6-21.8 days and the nestling stage 13.9-21.2 days. Combining these results with estimates of daily survival probability, we found that nest survival ranged from 6.0% to 12.5%. We conclude that our methodology for estimating stage lengths from interval-censored nest records is a reasonable and practical approach in the presence of interval-censored data. © 2010 The American Ornithologists' Union.
León, Larry F; Cai, Tianxi
2012-04-01
In this paper we develop model checking techniques for assessing functional form specifications of covariates in censored linear regression models. These procedures are based on a censored data analog to taking cumulative sums of "robust" residuals over the space of the covariate under investigation. These cumulative sums are formed by integrating certain Kaplan-Meier estimators and may be viewed as "robust" censored data analogs to the processes considered by Lin, Wei & Ying (2002). The null distributions of these stochastic processes can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be generated by computer simulation. Each observed process can then be graphically compared with a few realizations from the Gaussian process. We also develop formal test statistics for numerical comparison. Such comparisons enable one to assess objectively whether an apparent trend seen in a residual plot reflects model misspecification or natural variation. We illustrate the methods with a well-known dataset. In addition, we examine the finite sample performance of the proposed test statistics in simulation experiments. In our simulation experiments, the proposed test statistics have good power for detecting misspecification while at the same time controlling the size of the test.
Lapidus, Nathanael; Chevret, Sylvie; Resche-Rigon, Matthieu
2014-12-30
Agreement between two assays is usually based on the concordance correlation coefficient (CCC), estimated from the means, standard deviations, and correlation coefficient of these assays. However, such data will often suffer from left-censoring because of lower limits of detection of these assays. To handle such data, we propose to extend a multiple imputation approach by chained equations (MICE) developed in a close setting of one left-censored assay. The performance of this two-step approach is compared with that of a previously published maximum likelihood estimation through a simulation study. Results show close estimates of the CCC by both methods, although the coverage is improved by our MICE proposal. An application to cytomegalovirus quantification data is provided. Copyright © 2014 John Wiley & Sons, Ltd.
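A deliberately simplified sketch of the imputation idea is given below (Python, assuming scipy); it imputes censored values from a normal fitted to the detects and truncated at the LOD, which is only a crude stand-in for the full chained-equations conditional models of the paper, and the NaN-for-censored convention and function names are ours:

```python
import numpy as np
from scipy.stats import truncnorm

def ccc(x, y):
    """Lin's concordance correlation coefficient."""
    sx, sy = x.var(), y.var()
    sxy = np.cov(x, y, bias=True)[0, 1]
    return 2 * sxy / (sx + sy + (x.mean() - y.mean()) ** 2)

def ccc_with_censored_imputation(x, y, lod_x, lod_y, m=20):
    """Crude multiple-imputation CCC when values below each assay's detection
    limit are recorded as NaN: draw each censored value from a normal fitted to
    the detects, truncated above at the LOD, repeat m times, average the CCCs."""
    estimates = []
    for _ in range(m):
        cols = []
        for v, lod in ((x, lod_x), (y, lod_y)):
            v = v.copy()
            det = ~np.isnan(v)
            mu, sd = v[det].mean(), v[det].std(ddof=1)
            b = (lod - mu) / sd            # upper truncation point (standardized)
            v[~det] = truncnorm.rvs(-10.0, b, loc=mu, scale=sd, size=(~det).sum())
            cols.append(v)
        estimates.append(ccc(cols[0], cols[1]))
    return np.mean(estimates)              # pooled point estimate across imputations
```

A proper MICE implementation would condition each imputation on the other assay (and any covariates) and would also pool the variances across imputations via Rubin's rules.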
Boonpitaksathit, Teelana; Hunt, Nigel; Roberts, Graham J; Petrie, Aviva; Lucas, Victoria S
2011-10-01
The root of the third permanent molar is the only dental structure that continues development after completion of growth of the second permanent molar. It is claimed that the lack of a clearly defined end point for completion of growth of the third permanent molar means that this tooth cannot be used for dental age assessment. The aim of this study was to estimate the mean age of attainment of the four stages (E, F, G, and H) of root development of the third molar. The way in which the end point of completion of stage H can be identified is described. A total of 1223 dental panoramic tomographs (DPTs) available in the archives of the Eastman Dental Hospital, London, were used for this study. The ages of the subjects ranged from 12.6 to 24.9 years with 63 per cent of the sample being female. Demirjian's tooth development stages (TDSs), for the first and second molars, were applied to the third molars by a single examiner. For each of stages E, F, and G and for stage H censored data, the mean ages of the males and females were compared, separately within each tooth morphology type using the two-sample t-test (P < 0.01). The same test was used to compare the mean ages of the upper and lower third molars on each side, separately for each gender. The mean age of attainment and the 99 per cent confidence interval (CI) for each TDS were calculated for each third molar. The final stage H data were appropriately censored to exclude data above the age of completion of root growth. The results showed that, for each gender, the age in years at which individuals attained each of the four TDSs was approximately normally distributed. The mean age for appropriately censored data was always lower than the corresponding mean age of the inappropriately censored data for stage H (male UR8 19.57, UL8 19.53, LL8 19.91, and LR8 20.02 and female UR8 20.08, UL8 20.13, LL8 20.78, and LR8 20.70). This inappropriately censored data overestimated the mean age for stage H. The appropriately censored data for the TDSs of the third molar may be used to estimate the age of adolescents and emerging adults assuming average growth and development and recent attainment of stage H.
ERIC Educational Resources Information Center
Fiallo, Jorge; Gutiérrez, Angel
2017-01-01
We present results from a classroom-based intervention designed to help a class of grade 10 students (14-15 years old) learn proof while studying trigonometry in a dynamic geometry software environment. We analysed some students' solutions to conjecture-and-proof problems that let them gain experience in stating conjectures and developing proofs.…
Sino-Japanese Relations: Cooperation, Competition, or Status Quo?
2008-03-01
prostitute and censors were concerned the film might reignite anti- Japanese sentiment.69 Regarding Prime Minister Abe’s potentially nationalistic visit...central government censored the movie “Memoirs of a Geisha” because the lead character, portrayed by a Chinese actress, could be construed as a...Thailand, Malaysia and Indonesia. Realizing the importance of the larger relationship, on September 1-3, 2007, Defense Minister Masahiko Komura met
The refined Swampland Distance Conjecture in Calabi-Yau moduli spaces
NASA Astrophysics Data System (ADS)
Blumenhagen, Ralph; Klaewer, Daniel; Schlechter, Lorenz; Wolf, Florian
2018-06-01
The Swampland Distance Conjecture claims that effective theories derived from a consistent theory of quantum gravity only have a finite range of validity. This will imply drastic consequences for string theory model building. The refined version of this conjecture says that this range is of the order of the naturally built-in scale, namely the Planck scale. It is investigated whether the Refined Swampland Distance Conjecture is consistent with proper field distances arising in the well understood moduli spaces of Calabi-Yau compactification. Investigating in particular the non-geometric phases of Kähler moduli spaces of dimension h^{1,1} ∈ {1, 2, 101}, we always find proper field distances that are smaller than the Planck length.
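For orientation, the refined conjecture is commonly summarized schematically as follows (our paraphrase in LaTeX, not a formula quoted from the paper): beyond proper field displacements of order λ M_Pl with λ of order one, a tower of states becomes exponentially light and the effective description breaks down.

```latex
\Delta\phi \;\gtrsim\; \lambda\, M_{\mathrm{Pl}}, \qquad \lambda \sim \mathcal{O}(1),
\qquad\Longrightarrow\qquad
m(\phi) \;\sim\; m_0\, e^{-\lambda\, \Delta\phi / M_{\mathrm{Pl}}} .
```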
Exact finite volume expectation values of local operators in excited states
NASA Astrophysics Data System (ADS)
Pozsgay, B.; Szécsényi, I. M.; Takács, G.
2015-04-01
We present a conjecture for the exact expression of finite volume expectation values in excited states in integrable quantum field theories, which is an extension of an earlier conjecture to the case of general diagonal factorized scattering with bound states and a nontrivial bootstrap structure. The conjectured expression is a spectral expansion which uses the exact form factors and the excited state thermodynamic Bethe Ansatz as building blocks. The conjecture is proven for the case of the trace of the energy-momentum tensor. Concerning its validity for more general operators, we provide numerical evidence using the truncated conformal space approach. It is found that the expansion fails to be well-defined for small values of the volume in cases where the singularity structure of the TBA equations undergoes a non-trivial rearrangement below some critical value of the volume. Despite these shortcomings, the conjectured expression is expected to be valid for all volumes for most of the excited states, and as an expansion above the critical volume for the rest.
NASA Astrophysics Data System (ADS)
Pichon, C.; Pogosyan, D.; Kimm, T.; Slyz, A.; Devriendt, J.; Dubois, Y.
2011-12-01
State-of-the-art hydrodynamical simulations show that gas inflow through the virial sphere of dark matter haloes is focused (i.e. has a preferred inflow direction), consistent (i.e. its orientation is steady in time) and amplified (i.e. the amplitude of its advected specific angular momentum increases with time). We explain this to be a consequence of the dynamics of the cosmic web within the neighbourhood of the halo, which produces steady, angular momentum rich, filamentary inflow of cold gas. On large scales, the dynamics within neighbouring patches drives matter out of the surrounding voids, into walls and filaments before it finally gets accreted on to virialized dark matter haloes. As these walls/filaments constitute the boundaries of asymmetric voids, they acquire a net transverse motion, which explains the angular momentum rich nature of the later infall which comes from further away. We conjecture that this large-scale driven consistency explains why cold flows are so efficient at building up high-redshift thin discs inside out.
Cool Cosmology: ``WHISPER" better than ``BANG"
NASA Astrophysics Data System (ADS)
Carr, Paul
2007-10-01
Cosmologist Fred Hoyle coined "big bang" as a term of derision for Belgian priest Georges Lemaître's prediction that the universe had originated from the expansion of a "primeval atom" in space-time. Hoyle referred to Lemaître's hypothesis sarcastically as "this big bang idea" during a program broadcast on March 28, 1949 on the BBC. Hoyle's continuous creation or steady state theory cannot explain the microwave background radiation, or cosmic whisper, discovered by Penzias and Wilson in 1964. The expansion and subsequent cooling of Lemaître's hot "primeval atom" explains the whisper. "Big bang" makes no physical sense, as there was no matter (or space) to carry the sound that Hoyle's term implies. The "big bang" is a conjecture. New discoveries may be able to predict the observed "whispering cosmos" as well as dark matter and the nature of dark energy. The "whispering universe" is cooler cosmology than the big bang. Reference: Carr, Paul H. 2006. "From the 'Music of the Spheres' to the 'Whispering Cosmos.'" Chapter 3 of Beauty in Science and Spirit. Beech River Books, Center Ossipee, NH. http://www.MirrorOfNature.org.
Dark energy, non-minimal couplings and the origin of cosmic magnetic fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiménez, Jose Beltrán; Maroto, Antonio L., E-mail: jobeltra@fis.ucm.es, E-mail: maroto@fis.ucm.es
2010-12-01
In this work we consider the most general electromagnetic theory in curved space-time leading to linear second order differential equations, including non-minimal couplings to the space-time curvature. We assume the presence of a temporal electromagnetic background whose energy density plays the role of dark energy, as has been recently suggested. Imposing the consistency of the theory in the weak-field limit, we show that it reduces to standard electromagnetism in the presence of an effective electromagnetic current which is generated by the momentum density of the matter/energy distribution, even for neutral sources. This implies that in the presence of dark energy, the motion of large-scale structures generates magnetic fields. Estimates of the present amplitude of the generated seed fields for typical spiral galaxies could reach 10^−9 G without any amplification. In the case of compact rotating objects, the theory predicts their magnetic moments to be related to their angular momenta in the way suggested by the so-called Schuster-Blackett conjecture.
Censored Glauber Dynamics for the Mean Field Ising Model
NASA Astrophysics Data System (ADS)
Ding, Jian; Lubetzky, Eyal; Peres, Yuval
2009-11-01
We study Glauber dynamics for the Ising model on the complete graph on n vertices, known as the Curie-Weiss model. It is well known that at high temperature (β < 1) the mixing time is Θ(n log n), whereas at low temperature (β > 1) it is exp(Θ(n)). Recently, Levin, Luczak and Peres considered a censored version of this dynamics, which is restricted to non-negative magnetization. They proved that for fixed β > 1, the mixing time of this model is Θ(n log n), analogous to the high-temperature regime of the original dynamics. Furthermore, they showed cutoff for the original dynamics for fixed β < 1. The question whether the censored dynamics also exhibits cutoff remained unsettled. In a companion paper, we extended the results of Levin et al. into a complete characterization of the mixing time for the Curie-Weiss model. Namely, we found a scaling window of order 1/√n around the critical temperature β_c = 1, beyond which there is cutoff at high temperature. However, determining the behavior of the censored dynamics outside this critical window seemed significantly more challenging. In this work we answer the above question in the affirmative, and establish the cutoff point and its window for the censored dynamics beyond the critical window, thus completing its analogy to the original dynamics at high temperature. Namely, if β = 1 + δ for some δ > 0 with δ²n → ∞, then the mixing time has order (n/δ) log(δ²n). The cutoff constant is 1/2 + [2(ζ²β/δ − 1)]^(−1), where ζ is the unique positive root of g(x) = tanh(βx) − x, and the cutoff window has order n/δ.
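The quoted cutoff expression is straightforward to evaluate numerically; a small sketch (Python, assuming scipy; the function name is ours):

```python
import numpy as np
from scipy.optimize import brentq

def censored_cutoff_mixing_time(delta, n):
    """Evaluate the cutoff location stated above for beta = 1 + delta (delta > 0,
    delta**2 * n large): constant * (n / delta) * log(delta**2 * n), where the
    constant is 1/2 + [2*(zeta**2 * beta / delta - 1)]**(-1) and zeta is the
    unique positive root of g(x) = tanh(beta * x) - x."""
    beta = 1.0 + delta
    g = lambda x: np.tanh(beta * x) - x
    zeta = brentq(g, 1e-12, 1.0)   # g > 0 just above 0 and g(1) < 0 when beta > 1
    const = 0.5 + 1.0 / (2.0 * (zeta ** 2 * beta / delta - 1.0))
    return const * (n / delta) * np.log(delta ** 2 * n)
```

For small δ one has ζ² ≈ 3δ, so ζ²β/δ ≈ 3 and the bracketed root-finding is well behaved in the regime the theorem addresses.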
Accelerated failure time models for semi-competing risks data in the presence of complex censoring.
Lee, Kyu Ha; Rondeau, Virginie; Haneuse, Sebastien
2017-12-01
Statistical analyses that investigate risk factors for Alzheimer's disease (AD) are often subject to a number of challenges. Some of these challenges arise due to practical considerations regarding data collection such that the observation of AD events is subject to complex censoring including left-truncation and either interval or right-censoring. Additional challenges arise due to the fact that study participants under investigation are often subject to competing forces, most notably death, that may not be independent of AD. Towards resolving the latter, researchers may choose to embed the study of AD within the "semi-competing risks" framework for which the recent statistical literature has seen a number of advances including for the so-called illness-death model. To the best of our knowledge, however, the semi-competing risks literature has not fully considered analyses in contexts with complex censoring, as in studies of AD. This is particularly the case when interest lies with the accelerated failure time (AFT) model, an alternative to the traditional multiplicative Cox model that places emphasis away from the hazard function. In this article, we outline a new Bayesian framework for estimation/inference of an AFT illness-death model for semi-competing risks data subject to complex censoring. An efficient computational algorithm that gives researchers the flexibility to adopt either a fully parametric or a semi-parametric model specification is developed and implemented. The proposed methods are motivated by and illustrated with an analysis of data from the Adult Changes in Thought study, an on-going community-based prospective study of incident AD in western Washington State. © 2017, The International Biometric Society.
Methods for analysis of the occurrence of abscess in patients with pancreatitis.
Roca-Antonio, J; Escudero, L E; Gener, J; Oller, B; Rodríguez, N; Muñoz, A
1997-01-01
Standard survival analysis methods are useful for data involving censored cases when cures do not generally occur. If the object is to study, for instance, the development of a complication in the progress of an infectious disease, some people may be cured before complications develop. In this article, we provide methods for the analysis of data when cures do occur. An example is a study of prognostic factors for pancreatic abscess in patients with pancreatitis, some of whom leave the risk set because the pancreatitis clears. We present methods for estimating the survival curves and comparing hazard function for two objectives: (1) the occurrence of an abscess, irrespective of whether the patients are cured or not, and (2) the occurrence of an abscess for patients who, at that stage, have not been cured. We illustrate the applications of the methods using a sample of 50 patients with severe pancreatitis. To study the occurrence of an abscess, regardless of whether the patients are cured or not, we show that the appropriate strategy is to assign to the cured patients an infinite time to the appearance of an abscess. If the cured were considered censored at the moment the pancreatitis cleared, this would result in an overestimation of the hazard of presenting an abscess. On the other hand, if the objective is to compare the occurrence of abscess according to an exposure for patients who have not been cured, one needs to censor the cured patients at the time they are cured. For the analysis of survival data in the context of infectious diseases when cure is possible, it is important to use a censoring strategy that is pertinent to the specific aims of the study. Considering cures as censored at the time of cure is not always appropriate.
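The two censoring strategies can be contrasted directly in code; the toy data below are hypothetical and lifelines' KaplanMeierFitter is used only as a convenient Kaplan-Meier implementation:

```python
import numpy as np
from lifelines import KaplanMeierFitter

# Hypothetical follow-up data: days to abscess, cure, or last contact.
time = np.array([5., 9., 12., 20., 30., 6., 15., 25.])
abscess = np.array([1, 1, 0, 1, 0, 0, 0, 0])   # 1 = abscess observed
cured = np.array([0, 0, 0, 0, 0, 1, 1, 1])     # 1 = pancreatitis cleared first
horizon = time.max() + 1.0                     # any value beyond follow-up

# Aim (1): abscess occurrence irrespective of cure -- cured patients never fail,
# so give them an effectively infinite (beyond-horizon) event-free time.
t1 = np.where(cured == 1, horizon, time)
kmf_overall = KaplanMeierFitter().fit(t1, event_observed=abscess)

# Aim (2): abscess occurrence among those not yet cured -- censor the cured
# at their time of cure (their recorded time, with no event).
kmf_uncured = KaplanMeierFitter().fit(time, event_observed=abscess)
```

The first fit keeps cured patients in the risk set indefinitely, which is the strategy the authors recommend for aim (1); the second removes them at cure, which is appropriate only for aim (2).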
Lee, L.; Helsel, D.
2007-01-01
Analysis of low concentrations of trace contaminants in environmental media often results in left-censored data that are below some limit of analytical precision. Interpretation of values becomes complicated when there are multiple detection limits in the data-perhaps as a result of changing analytical precision over time. Parametric and semi-parametric methods, such as maximum likelihood estimation and robust regression on order statistics, can be employed to model distributions of multiply censored data and provide estimates of summary statistics. However, these methods are based on assumptions about the underlying distribution of data. Nonparametric methods provide an alternative that does not require such assumptions. A standard nonparametric method for estimating summary statistics of multiply-censored data is the Kaplan-Meier (K-M) method. This method has seen widespread usage in the medical sciences within a general framework termed "survival analysis" where it is employed with right-censored time-to-failure data. However, K-M methods are equally valid for the left-censored data common in the geosciences. Our S-language software provides an analytical framework based on K-M methods that is tailored to the needs of the earth and environmental sciences community. This includes routines for the generation of empirical cumulative distribution functions, prediction or exceedance probabilities, and related confidence limits computation. Additionally, our software contains K-M-based routines for nonparametric hypothesis testing among an unlimited number of grouping variables. A primary characteristic of K-M methods is that they do not perform extrapolation and interpolation. Thus, these routines cannot be used to model statistics beyond the observed data range or when linear interpolation is desired. For such applications, the aforementioned parametric and semi-parametric methods must be used.
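A Python sketch of the same device (flip left-censored data so nondetects become right-censored, apply the ordinary Kaplan-Meier estimator, and flip the quantile back) is shown below, assuming the lifelines package; it is an illustration, not the authors' S-language code:

```python
import numpy as np
from lifelines import KaplanMeierFitter

def km_left_censored_median(values, nondetect):
    """Median of left-censored data via the flip trick.
    values: measured concentration for detects, detection limit for nondetects.
    nondetect: boolean array, True where the value is a nondetect."""
    values = np.asarray(values, dtype=float)
    nondetect = np.asarray(nondetect, dtype=bool)
    flip = values.max() + 1.0                    # any constant above the data
    kmf = KaplanMeierFitter().fit(flip - values, event_observed=~nondetect)
    return flip - kmf.median_survival_time_      # undefined if censoring is too heavy
```

As the abstract notes, such K-M summaries involve no extrapolation, so the routine returns nothing meaningful when more than half the data are censored below the lowest limit.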
Guo, Ying; Manatunga, Amita K
2009-03-01
Assessing agreement is often of interest in clinical studies to evaluate the similarity of measurements produced by different raters or methods on the same subjects. We present a modified weighted kappa coefficient to measure agreement between bivariate discrete survival times. The proposed kappa coefficient accommodates censoring by redistributing the mass of censored observations within the grid where the unobserved events may potentially happen. A generalized modified weighted kappa is proposed for multivariate discrete survival times. We estimate the modified kappa coefficients nonparametrically through a multivariate survival function estimator. The asymptotic properties of the kappa estimators are established and the performance of the estimators are examined through simulation studies of bivariate and trivariate survival times. We illustrate the application of the modified kappa coefficient in the presence of censored observations with data from a prostate cancer study.
An estimator of the survival function based on the semi-Markov model under dependent censorship.
Lee, Seung-Yeoun; Tsai, Wei-Yann
2005-06-01
Lee and Wolfe (Biometrics vol. 54 pp. 1176-1178, 1998) proposed the two-stage sampling design for testing the assumption of independent censoring, which involves further follow-up of a subset of lost-to-follow-up censored subjects. They also proposed an adjusted estimator for the survivor function for a proportional hazards model under the dependent censoring model. In this paper, a new estimator for the survivor function is proposed for the semi-Markov model under dependent censorship on the basis of the two-stage sampling data. The consistency and the asymptotic distribution of the proposed estimator are derived. The estimation procedure is illustrated with an example from a lung cancer clinical trial, and simulation results are reported on the mean squared errors of the estimators under a proportional hazards model and two different nonproportional hazards models.
King, Gary; Pan, Jennifer; Roberts, Margaret E
2014-08-22
Existing research on the extensive Chinese censorship organization uses observational methods with well-known limitations. We conducted the first large-scale experimental study of censorship by creating accounts on numerous social media sites, randomly submitting different texts, and observing from a worldwide network of computers which texts were censored and which were not. We also supplemented interviews with confidential sources by creating our own social media site, contracting with Chinese firms to install the same censoring technologies as existing sites, and--with their software, documentation, and even customer support--reverse-engineering how it all works. Our results offer rigorous support for the recent hypothesis that criticisms of the state, its leaders, and their policies are published, whereas posts about real-world events with collective action potential are censored. Copyright © 2014, American Association for the Advancement of Science.
Ryberg, Karen R.; Vecchia, Aldo V.
2013-01-01
The seawaveQ R package fits a parametric regression model (seawaveQ) to pesticide concentration data from streamwater samples to assess variability and trends. The model incorporates the strong seasonality and high degree of censoring common in pesticide data and users can incorporate numerous ancillary variables, such as streamflow anomalies. The model is fitted to pesticide data using maximum likelihood methods for censored data and is robust in terms of pesticide, stream location, and degree of censoring of the concentration data. This R package standardizes this methodology for trend analysis, documents the code, and provides help and tutorial information, as well as providing additional utility functions for plotting pesticide and other chemical concentration data.
USSR Report, International Affairs
1986-05-28
examined on the material of four countries in Southeast Asia: Indonesia, Malaysia , Thailand and the Philippines. In his study, the author proceeded...television [as published] are guaranteed. There is no censorship." In other words, in the FRG there are no official censors , and in West German...34: "The Federal Republic does not need an official censor , for self-censorship—above all among the bosses of the mass media, i. e., television, radio
Causal inference in survival analysis using pseudo-observations.
Andersen, Per K; Syriopoulou, Elisavet; Parner, Erik T
2017-07-30
Causal inference for non-censored response variables, such as binary or quantitative outcomes, is often based on either (1) direct standardization ('G-formula') or (2) inverse probability of treatment assignment weights ('propensity score'). To do causal inference in survival analysis, one needs to address right-censoring, and often, special techniques are required for that purpose. We will show how censoring can be dealt with 'once and for all' by means of so-called pseudo-observations when doing causal inference in survival analysis. The pseudo-observations can be used as a replacement of the outcomes without censoring when applying 'standard' causal inference methods, such as (1) or (2) earlier. We study this idea for estimating the average causal effect of a binary treatment on the survival probability, the restricted mean lifetime, and the cumulative incidence in a competing risks situation. The methods will be illustrated in a small simulation study and via a study of patients with acute myeloid leukemia who received either myeloablative or non-myeloablative conditioning before allogeneic hematopoietic cell transplantation. We will estimate the average causal effect of the conditioning regime on outcomes such as the 3-year overall survival probability and the 3-year risk of chronic graft-versus-host disease. Copyright © 2017 John Wiley & Sons, Ltd.
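A minimal sketch of the pseudo-observation construction for the survival probability at a fixed time t0 (Python; the helper names are ours):

```python
import numpy as np

def km_surv_at(time, event, t0):
    """Kaplan-Meier estimate of S(t0) from right-censored data
    (event = 1 if the event was observed, 0 if censored)."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    order = np.lexsort((1 - event, time))   # sort by time, events before censorings
    s, at_risk = 1.0, len(time)
    for t, d in zip(time[order], event[order]):
        if t > t0:
            break
        if d:
            s *= 1.0 - 1.0 / at_risk
        at_risk -= 1
    return s

def pseudo_observations(time, event, t0):
    """Jackknife pseudo-observations for the survival probability at t0:
    theta_i = n * S_hat(t0) - (n - 1) * S_hat_(-i)(t0)."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    n = len(time)
    s_full = km_surv_at(time, event, t0)
    return np.array([n * s_full - (n - 1) *
                     km_surv_at(np.delete(time, i), np.delete(event, i), t0)
                     for i in range(n)])
```

The resulting pseudo-values can then be fed into an ordinary regression, G-formula, or propensity-score analysis of treatment effects as if they were uncensored outcomes, which is the point of the approach.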
Statistical Methods for Generalized Linear Models with Covariates Subject to Detection Limits.
Bernhardt, Paul W; Wang, Huixia J; Zhang, Daowen
2015-05-01
Censored observations are a common occurrence in biomedical data sets. Although a large amount of research has been devoted to estimation and inference for data with censored responses, very little research has focused on proper statistical procedures when predictors are censored. In this paper, we consider statistical methods for dealing with multiple predictors subject to detection limits within the context of generalized linear models. We investigate and adapt several conventional methods and develop a new multiple imputation approach for analyzing data sets with predictors censored due to detection limits. We establish the consistency and asymptotic normality of the proposed multiple imputation estimator and suggest a computationally simple and consistent variance estimator. We also demonstrate that the conditional mean imputation method often leads to inconsistent estimates in generalized linear models, while several other methods are either computationally intensive or lead to parameter estimates that are biased or more variable compared to the proposed multiple imputation estimator. In an extensive simulation study, we assess the bias and variability of different approaches within the context of a logistic regression model and compare variance estimation methods for the proposed multiple imputation estimator. Lastly, we apply several methods to analyze the data set from a recently-conducted GenIMS study.
Yang-Baxter deformations of W_{2,4} × T^{1,1} and the associated T-dual models
NASA Astrophysics Data System (ADS)
Sakamoto, Jun-ichi; Yoshida, Kentaroh
2017-08-01
Recently, for principal chiral models and symmetric coset sigma models, Hoare and Tseytlin proposed an interesting conjecture that the Yang-Baxter deformations with the homogeneous classical Yang-Baxter equation are equivalent to non-abelian T-dualities with topological terms. It is significant to examine this conjecture for non-symmetric (i.e., non-integrable) cases. Such an example is the W_{2,4} × T^{1,1} background. In this note, we study Yang-Baxter deformations of type IIB string theory defined on W_{2,4} × T^{1,1} and the associated T-dual models, and show that this conjecture is valid even for this case. Our result indicates that the conjecture would be valid beyond integrability.
Time-dependent simulation of oblique MHD cosmic-ray shocks using the two-fluid model
NASA Technical Reports Server (NTRS)
Frank, Adam; Jones, T. W.; Ryu, Dongsu
1995-01-01
Using a new, second-order accurate numerical method we present dynamical simulations of oblique MHD cosmic-ray (CR)-modified plane shock evolution. Most of the calculations are done with a two-fluid model for diffusive shock acceleration, but we also provide comparisons between a typical shock computed that way and calculations carried out using the more complete, momentum-dependent, diffusion-advection equation. We also illustrate a test showing that these simulations evolve to dynamical equilibria consistent with previously published steady state analytic calculations for such shocks. In order to improve understanding of the dynamical role of magnetic fields in shocks modified by CR pressure we have explored, for time asymptotic states, the parameter space of upstream fast mode Mach number, M_f, and plasma beta. We compile the results into maps of dynamical steady state CR acceleration efficiency, ε_c. We have run simulations using constant, and nonisotropic, obliquity (and hence spatially) dependent forms of the diffusion coefficient kappa. Comparison of the results shows that while the final steady states achieved are the same in each case, the history of CR-MHD shocks can be strongly modified by variations in kappa and, therefore, in the acceleration timescale. Also, the coupling of CR and MHD in low beta, oblique shocks substantially influences the transient density spike that forms in strongly CR-modified shocks. We find that inside the density spike an MHD slow mode wave can be generated that eventually steepens into a shock. A strong shear layer develops within the density spike, driven by MHD stresses. We conjecture that currents in the shear layer could, in nonplanar flows, result in enhanced particle accretion through drift acceleration.
Anisotropic cosmological solutions in massive vector theories
NASA Astrophysics Data System (ADS)
Heisenberg, Lavinia; Kase, Ryotaro; Tsujikawa, Shinji
2016-11-01
In beyond-generalized Proca theories including the extension to theories higher than second order, we study the role of a spatial component v of a massive vector field on the anisotropic cosmological background. We show that, as in the case of the isotropic cosmological background, there are no additional ghostly degrees of freedom associated with the Ostrogradski instability. In second-order generalized Proca theories we find the existence of anisotropic solutions on which the ratio between the anisotropic expansion rate Σ and the isotropic expansion rate H remains nearly constant in the radiation-dominated epoch. In the regime where Σ/H is constant, the spatial vector component v works as a dark radiation with the equation of state close to 1/3. During the matter era, the ratio Σ/H decreases with the decrease of v. As long as the conditions |Σ| ≪ H and v² ≪ φ² are satisfied around the onset of late-time cosmic acceleration, where φ is the temporal vector component, we find that the solutions approach the isotropic de Sitter fixed point (Σ = 0 = v) in accordance with the cosmic no-hair conjecture. In the presence of v and Σ the early evolution of the dark energy equation of state w_DE in the radiation era is different from that in the isotropic case, but the approach to the isotropic value w_DE^(iso) typically occurs at redshifts z much larger than 1. Thus, apart from the existence of dark radiation, the anisotropic cosmological dynamics at low redshifts is similar to that in isotropic generalized Proca theories. In beyond-generalized Proca theories the only consistent solution to avoid the divergence of a determinant of the dynamical system corresponds to v = 0, so Σ always decreases in time.
An application of a zero-inflated lifetime distribution with multiple and incomplete data sources
Hamada, M. S.; Margevicius, K. J.
2016-02-11
In this study, we analyze data sampled from a population of parts in which an associated anomaly can occur at assembly or after assembly. Using a zero-inflated lifetime distribution to fit left-censored and right-censored data as well as data from a supplementary sample, we make predictions about the proportion of the population with anomalies today and in the future. Goodness-of-fit is also addressed.
Another Velvet Revolution Implications of the 1989 Czech Velvet Revolution on Iran
2011-06-01
countries; “even censoring news from the Soviet Union, whose own period of glasnost precipitated all these gyrations.”1 Furthermore, the failure of the... America for having maliciously presented the report. For his action of passing along information to Western journalists on the reports of Smid‟s...their coverage of the demonstrations was censored . Video coverage of the demonstrations was often televised as a deterrence mechanism, meanwhile news
Counterinsurgency in Brazil: Lessons of the Fighting from 1968 to 1974
2010-04-12
system over almost all information disseminated in the press, theaters, movies and music. Government agents worked as censors inside press agencies...articles, letter of songs and scenes from movies that were judged as being subversive were suppressed by censors . Under the military instrument of national...the maintenance of its influence in Latin America Previous to the military coup d’etat on 31 March 1964, U. S. President Lyndon Johnson had already
Repression, Civil Conflict and Leadership Tenure: The Thai Case Study: 2006-2014
2015-05-30
peaceful protestors. The Army argues that it intervenes to prevent more violence and instability. The armed forces also censor the Internet making it...protestors . The Thai public responded negatively to violent repression, as did many of Thailand’s allies in Europe, Asia and North America . In the wake...of expression, blocking and shutting down websites and radio stations, and censoring the Internet. In addition, the new government banned gatherings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kagie, Matthew J.; Lanterman, Aaron D.
2017-12-01
This paper addresses parameter estimation for an optical transient signal when the received data has been right-censored. We develop an expectation-maximization (EM) algorithm to estimate the amplitude of a Poisson intensity with a known shape in the presence of additive background counts, where the measurements are subject to saturation effects. We compare the results of our algorithm with those of an EM algorithm that is unaware of the censoring.
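The abstract does not spell out the algorithm, so the following is only a plausible sketch of an EM of this general type (a latent signal/background split plus a truncated-Poisson E-step for saturated bins), assuming numpy/scipy; the function name and details are ours, not the authors':

```python
import numpy as np
from scipy.stats import poisson

def em_amplitude_censored_poisson(y, saturated, shape, bkg, a0=1.0, n_iter=200):
    """EM iteration for the amplitude A in y_i ~ Poisson(A * shape_i + bkg_i),
    where saturated[i] = True means the detector clipped at the recorded value,
    so the true count is only known to be >= y[i].  Assumes bkg > 0 everywhere."""
    y, shape, bkg = (np.asarray(v, float) for v in (y, shape, bkg))
    saturated = np.asarray(saturated, bool)
    a = float(a0)
    for _ in range(n_iter):
        mu = a * shape + bkg
        # E-step (i): expected total count; for saturated bins use the
        # truncated-Poisson mean  E[Y | Y >= c] = mu * P(Y >= c-1) / P(Y >= c).
        num = poisson.sf(y - 2, mu)                          # P(Y >= y - 1)
        den = np.clip(poisson.sf(y - 1, mu), 1e-300, None)   # P(Y >= y)
        ey = np.where(saturated, mu * num / den, y)
        # E-step (ii): split the expected count into signal and background parts
        # in proportion to their intensities (thinning of a Poisson sum).
        e_signal = ey * (a * shape) / mu
        # M-step: closed-form update of the amplitude.
        a = e_signal.sum() / shape.sum()
    return a
```

Each iteration is monotone in the observed-data likelihood for this latent-variable formulation; in practice one would also monitor convergence of a rather than running a fixed number of iterations.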
Censored rainfall modelling for estimation of fine-scale extremes
NASA Astrophysics Data System (ADS)
Cross, David; Onof, Christian; Winter, Hugo; Bernardara, Pietro
2018-01-01
Reliable estimation of rainfall extremes is essential for drainage system design, flood mitigation, and risk quantification. However, traditional techniques lack physical realism and extrapolation can be highly uncertain. In this study, we improve the physical basis for short-duration extreme rainfall estimation by simulating the heavy portion of the rainfall record mechanistically using the Bartlett-Lewis rectangular pulse (BLRP) model. Mechanistic rainfall models have had a tendency to underestimate rainfall extremes at fine temporal scales. Despite this, the simple process representation of rectangular pulse models is appealing in the context of extreme rainfall estimation because it emulates the known phenomenology of rainfall generation. A censored approach to Bartlett-Lewis model calibration is proposed and performed for single-site rainfall from two gauges in the UK and Germany. Extreme rainfall estimation is performed for each gauge at the 5, 15, and 60 min resolutions, and considerations for censor selection discussed.
Maximum likelihood estimation for semiparametric transformation models with interval-censored data
Mao, Lu; Lin, D. Y.
2016-01-01
Interval censoring arises frequently in clinical, epidemiological, financial and sociological studies, where the event or failure of interest is known only to occur within an interval induced by periodic monitoring. We formulate the effects of potentially time-dependent covariates on the interval-censored failure time through a broad class of semiparametric transformation models that encompasses proportional hazards and proportional odds models. We consider nonparametric maximum likelihood estimation for this class of models with an arbitrary number of monitoring times for each subject. We devise an EM-type algorithm that converges stably, even in the presence of time-dependent covariates, and show that the estimators for the regression parameters are consistent, asymptotically normal, and asymptotically efficient with an easily estimated covariance matrix. Finally, we demonstrate the performance of our procedures through simulation studies and application to an HIV/AIDS study conducted in Thailand. PMID:27279656
Hutson, Alan D
2018-01-01
In this note, we develop a new and novel semi-parametric estimator of the survival curve that is comparable to the product-limit estimator under very relaxed assumptions. The estimator is based on a beta parametrization that warps the empirical distribution of the observed censored and uncensored data. The parameters are obtained using a pseudo-maximum likelihood approach adjusting the survival curve accounting for the censored observations. In the univariate setting, the new estimator tends to better extend the range of the survival estimation given a high degree of censoring. However, the key feature of this paper is that we develop a new two-group semi-parametric exact permutation test for comparing survival curves that is generally superior to the classic log-rank and Wilcoxon tests and provides the best global power across a variety of alternatives. The new test is readily extended to the k group setting. PMID:26988931
Regression analysis of informative current status data with the additive hazards model.
Zhao, Shishun; Hu, Tao; Ma, Ling; Wang, Peijie; Sun, Jianguo
2015-04-01
This paper discusses regression analysis of current status failure time data arising from the additive hazards model in the presence of informative censoring. Many methods have been developed for regression analysis of current status data under various regression models if the censoring is noninformative, and also there exists a large literature on parametric analysis of informative current status data in the context of tumorigenicity experiments. In this paper, a semiparametric maximum likelihood estimation procedure is presented in which the copula model is employed to describe the relationship between the failure time of interest and the censoring time. Furthermore, I-splines are used to approximate the nonparametric functions involved, and the asymptotic consistency and normality of the proposed estimators are established. A simulation study is conducted and indicates that the proposed approach works well for practical situations. An illustrative example is also provided.
Subramanian, Sundarraman
2008-01-01
This article concerns asymptotic theory for a new estimator of a survival function in the missing censoring indicator model of random censorship. Specifically, the large sample results for an inverse probability-of-non-missingness weighted estimator of the cumulative hazard function, so far not available, are derived, including an almost sure representation with rate for a remainder term, and uniform strong consistency with rate of convergence. The estimator is based on a kernel estimate for the conditional probability of non-missingness of the censoring indicator. Expressions for its bias and variance, in turn leading to an expression for the mean squared error as a function of the bandwidth, are also obtained. The corresponding estimator of the survival function, whose weak convergence is derived, is asymptotically efficient. A numerical study, comparing the performances of the proposed and two other currently existing efficient estimators, is presented. PMID:18953423
Reviving the shear-free perfect fluid conjecture in general relativity
NASA Astrophysics Data System (ADS)
Sikhonde, Muzikayise E.; Dunsby, Peter K. S.
2017-12-01
Employing a Mathematica symbolic computer algebra package called xTensor, we present (1+3)-covariant special case proofs of the shear-free perfect fluid conjecture in general relativity. We first present the case where the pressure is constant, and where the acceleration is parallel to the vorticity vector. These cases were first presented in their covariant form by Senovilla et al. We then provide a covariant proof for the case where the acceleration and vorticity vectors are orthogonal, which leads to the existence of a Killing vector along the vorticity. This Killing vector satisfies the new constraint equations resulting from the vanishing of the shear. Furthermore, it is shown that in order for the conjecture to be true, this Killing vector must have a vanishing spatially projected directional covariant derivative along the velocity vector field. This in turn implies the existence of another basic vector field along the direction of the vorticity for the conjecture to hold. Finally, we show that in general, there exists a basic vector field parallel to the acceleration for which the conjecture is true.
Coupled skinny baker's maps and the Kaplan-Yorke conjecture
NASA Astrophysics Data System (ADS)
Gröger, Maik; Hunt, Brian R.
2013-09-01
The Kaplan-Yorke conjecture states that for ‘typical’ dynamical systems with a physical measure, the information dimension and the Lyapunov dimension coincide. We explore this conjecture in a neighborhood of a system for which the two dimensions do not coincide because the system consists of two uncoupled subsystems. We are interested in whether coupling ‘typically’ restores the equality of the dimensions. The particular subsystems we consider are skinny baker's maps, and we consider uni-directional coupling. For coupling in one of the possible directions, we prove that the dimensions coincide for a prevalent set of coupling functions, but for coupling in the other direction we show that the dimensions remain unequal for all coupling functions. We conjecture that the dimensions prevalently coincide for bi-directional coupling. On the other hand, we conjecture that the phenomenon we observe for a particular class of systems with uni-directional coupling, where the information and Lyapunov dimensions differ robustly, occurs more generally for many classes of uni-directionally coupled systems (also called skew-product systems) in higher dimensions.
Bartnik’s splitting conjecture and Lorentzian Busemann function
NASA Astrophysics Data System (ADS)
Amini, Roya; Sharifzadeh, Mehdi; Bahrampour, Yousof
2018-05-01
In 1988 Bartnik posed the splitting conjecture about the cosmological space-time. This conjecture has been proved by several people, with different approaches and by using some additional assumptions such as the ‘S-ray condition’ and the ‘level set condition’. It is known that the ‘S-ray condition’ yields the ‘level set condition’. We have proved that the two are indeed equivalent, by giving a different proof under the assumption of the ‘level set condition’. In addition, we have shown several properties of the cosmological space-time in the presence of the ‘level set condition’. Finally, we have provided a proof of the conjecture under a different assumption on the cosmological space-time. We first prove some results without the timelike convergence condition, which help us to establish our proofs.
Comparison of Methods for Analyzing Left-Censored Occupational Exposure Data
Huynh, Tran; Ramachandran, Gurumurthy; Banerjee, Sudipto; Monteiro, Joao; Stenzel, Mark; Sandler, Dale P.; Engel, Lawrence S.; Kwok, Richard K.; Blair, Aaron; Stewart, Patricia A.
2014-01-01
The National Institute for Environmental Health Sciences (NIEHS) is conducting an epidemiologic study (GuLF STUDY) to investigate the health of the workers and volunteers who participated from April to December of 2010 in the response and cleanup of the oil release after the Deepwater Horizon explosion in the Gulf of Mexico. The exposure assessment component of the study involves analyzing thousands of personal monitoring measurements that were collected during this effort. A substantial portion of these data has values reported by the analytic laboratories to be below the limits of detection (LOD). A simulation study was conducted to evaluate three established methods for analyzing data with censored observations to estimate the arithmetic mean (AM), geometric mean (GM), geometric standard deviation (GSD), and the 95th percentile (X0.95) of the exposure distribution: the maximum likelihood (ML) estimation, the β-substitution, and the Kaplan–Meier (K-M) methods. Each method was challenged with computer-generated exposure datasets drawn from lognormal and mixed lognormal distributions with sample sizes (N) varying from 5 to 100, GSDs ranging from 2 to 5, and censoring levels ranging from 10 to 90%, with single and multiple LODs. Using relative bias and relative root mean squared error (rMSE) as the evaluation metrics, the β-substitution method generally performed as well or better than the ML and K-M methods in most simulated lognormal and mixed lognormal distribution conditions. The ML method was suitable for large sample sizes (N ≥ 30) up to 80% censoring for lognormal distributions with small variability (GSD = 2–3). The K-M method generally provided accurate estimates of the AM when the censoring was <50% for lognormal and mixed distributions. The accuracy and precision of all methods decreased under high variability (GSD = 4 and 5) and small to moderate sample sizes (N < 20) but the β-substitution was still the best of the three methods. When using the ML method, practitioners are cautioned to be aware of different ways of estimating the AM as they could lead to biased interpretation. A limitation of the β-substitution method is the absence of a confidence interval for the estimate. More research is needed to develop methods that could improve the estimation accuracy for small sample sizes and high percent censored data and also provide uncertainty intervals. PMID:25261453
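As a reference point for one of the three methods, a sketch of censored-lognormal maximum likelihood (left-censoring at the LOD) is given below, assuming scipy; the names are ours, and the β-substitution and K-M estimators are not reproduced here:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def censored_lognormal_mle(values, nondetect, lods):
    """ML fit of a lognormal to left-censored exposure data.
    values: measured concentrations (ignored where nondetect is True)
    nondetect: boolean array, True if the sample is below its detection limit
    lods: detection limits (used for the nondetects)."""
    nondetect = np.asarray(nondetect, bool)
    x = np.log(np.where(nondetect, lods, values))   # work on the log scale

    def negloglik(theta):
        mu, log_sigma = theta
        sigma = np.exp(log_sigma)
        ll_det = norm.logpdf(x[~nondetect], mu, sigma)        # exact observations
        ll_cen = norm.logcdf(x[nondetect], mu, sigma)         # P(log X < log LOD)
        return -(ll_det.sum() + ll_cen.sum())

    res = minimize(negloglik, x0=[x.mean(), np.log(x.std() + 1e-6)],
                   method="Nelder-Mead")
    mu, sigma = res.x[0], np.exp(res.x[1])
    gm, gsd = np.exp(mu), np.exp(sigma)
    am = np.exp(mu + 0.5 * sigma ** 2)    # arithmetic mean of a lognormal
    x95 = np.exp(mu + 1.645 * sigma)      # 95th percentile
    return gm, gsd, am, x95
```

As the abstract cautions, the arithmetic mean can be recovered from (μ, σ) in more than one way (plug-in versus minimum-variance unbiased estimators), and the choice matters at small sample sizes.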
Eisenstein Hecke algebras and Iwasawa theory
NASA Astrophysics Data System (ADS)
Wake, Preston
We show that if an Eisenstein component of the p-adic Hecke algebra associated to modular forms is Gorenstein, then it is necessary that the plus-part of a certain ideal class group is trivial. We also show that this condition is sufficient whenever a conjecture of Sharifi holds. We also formulate a weaker Gorenstein property, and show that this weak Gorenstein property holds if and only if a weak form of Sharifi's conjecture and a weak form of Greenberg's conjecture hold.
Matrix Models and A Proof of the Open Analog of Witten's Conjecture
NASA Astrophysics Data System (ADS)
Buryak, Alexandr; Tessler, Ran J.
2017-08-01
In a recent work, R. Pandharipande, J. P. Solomon and the second author have initiated a study of the intersection theory on the moduli space of Riemann surfaces with boundary. They conjectured that the generating series of the intersection numbers satisfies the open KdV equations. In this paper we prove this conjecture. Our proof goes through a matrix model and is based on a Kontsevich type combinatorial formula for the intersection numbers that was found by the second author.
Maldacena, Juan; Shenker, Stephen H.; Stanford, Douglas
2016-08-17
We conjecture a sharp bound on the rate of growth of chaos in thermal quantum systems with a large number of degrees of freedom. Chaos can be diagnosed using an out-of-time-order correlation function closely related to the commutator of operators separated in time. We conjecture that the influence of chaos on this correlator can develop no faster than exponentially, with Lyapunov exponent λ_L ≤ 2π k_B T/ℏ. We give a precise mathematical argument, based on plausible physical assumptions, establishing this conjecture.
Methodological issues underlying multiple decrement life table analysis.
Mode, C J; Avery, R C; Littman, G S; Potter, R G
1977-02-01
In this paper, the actuarial method of multiple decrement life table analysis of censored, longitudinal data is examined. The discussion is organized in terms of the first segment of usage of an intrauterine device. Weaknesses of the actuarial approach are pointed out, and an alternative approach, based on the classical model of competing risks, is proposed. Finally, the actuarial and the alternative method of analyzing censored data are compared, using data from the Taichung Medical Study on Intrauterine Devices.
Strategic Studies Quarterly. Volume 5, Number 1, Spring 2011
2011-01-01
Baker, Ronald J.; Chepiga, Mary M.; Cauller, Stephen J.
2015-01-01
The Kaplan-Meier method of estimating summary statistics from left-censored data was applied in order to include nondetects (left-censored data) in median nitrate-concentration calculations. Median concentrations also were determined using three alternative methods of handling nondetects. Treatment of the 23 percent of samples that were nondetects had little effect on estimated median nitrate concentrations because method detection limits were mostly less than median values.
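A toy numerical check (not the study's data) of why the median was insensitive to the treatment of nondetects: as long as fewer than half of the values are censored and the detection limit lies below the median, any substitution value at or below that limit leaves the sample median unchanged. The sample size, censoring fraction, and detection limit below are invented for illustration.

```python
# Toy illustration: when fewer than half of the values are nondetects and the
# detection limit lies below the median, the sample median is unchanged by how
# the nondetects are substituted.
import numpy as np

rng = np.random.default_rng(1)
detects = rng.lognormal(mean=1.5, sigma=0.5, size=77)   # reported concentrations
n_nondetect = 23                                        # ~23% nondetects
dl = 0.5                                                # common detection limit

for rule, value in [("zero", 0.0), ("DL/2", dl / 2), ("DL", dl)]:
    filled = np.concatenate([detects, np.full(n_nondetect, value)])
    print(f"substitution = {rule:>5}: median = {np.median(filled):.3f}")
```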
Outcome-Dependent Sampling with Interval-Censored Failure Time Data
Zhou, Qingning; Cai, Jianwen; Zhou, Haibo
2017-01-01
Summary Epidemiologic studies and disease prevention trials often seek to relate an exposure variable to a failure time that suffers from interval-censoring. When the failure rate is low and the time intervals are wide, a large cohort is often required so as to yield reliable precision on the exposure-failure-time relationship. However, large cohort studies with simple random sampling could be prohibitive for investigators with a limited budget, especially when the exposure variables are expensive to obtain. Alternative cost-effective sampling designs and inference procedures are therefore desirable. We propose an outcome-dependent sampling (ODS) design with interval-censored failure time data, where we enrich the observed sample by selectively including certain more informative failure subjects. We develop a novel sieve semiparametric maximum empirical likelihood approach for fitting the proportional hazards model to data from the proposed interval-censoring ODS design. This approach employs the empirical likelihood and sieve methods to deal with the infinite-dimensional nuisance parameters, which greatly reduces the dimensionality of the estimation problem and eases the computation difficulty. The consistency and asymptotic normality of the resulting regression parameter estimator are established. The results from our extensive simulation study show that the proposed design and method works well for practical situations and is more efficient than the alternative designs and competing approaches. An example from the Atherosclerosis Risk in Communities (ARIC) study is provided for illustration. PMID:28771664
A Bayesian model for time-to-event data with informative censoring
Kaciroti, Niko A.; Raghunathan, Trivellore E.; Taylor, Jeremy M. G.; Julius, Stevo
2012-01-01
Randomized trials with dropouts or censored data and discrete time-to-event type outcomes are frequently analyzed using the Kaplan–Meier or product limit (PL) estimation method. However, the PL method assumes that the censoring mechanism is noninformative and when this assumption is violated, the inferences may not be valid. We propose an expanded PL method using a Bayesian framework to incorporate informative censoring mechanism and perform sensitivity analysis on estimates of the cumulative incidence curves. The expanded method uses a model, which can be viewed as a pattern mixture model, where odds for having an event during the follow-up interval (tk−1,tk], conditional on being at risk at tk−1, differ across the patterns of missing data. The sensitivity parameters relate the odds of an event, between subjects from a missing-data pattern with the observed subjects for each interval. The large number of the sensitivity parameters is reduced by considering them as random and assumed to follow a log-normal distribution with prespecified mean and variance. Then we vary the mean and variance to explore sensitivity of inferences. The missing at random (MAR) mechanism is a special case of the expanded model, thus allowing exploration of the sensitivity to inferences as departures from the inferences under the MAR assumption. The proposed approach is applied to data from the TRial Of Preventing HYpertension. PMID:22223746
Chai, Hua; Li, Zi-Na; Meng, De-Yu; Xia, Liang-Yong; Liang, Yong
2017-10-12
Gene selection is an attractive and important task in cancer survival analysis. Most existing supervised learning methods can only use the labeled biological data, while the censored data (weakly labeled data), which far outnumber the labeled data, are ignored in model building. To exploit the information in the censored data, a semi-supervised learning framework (the Cox-AFT model) combining the Cox proportional hazards (Cox) and accelerated failure time (AFT) models has been used in cancer research, with better performance than the single Cox or AFT model. This method, however, is easily affected by noise. To alleviate this problem, in this paper we combine the Cox-AFT model with the self-paced learning (SPL) method to employ the information in the censored data more effectively, in a self-learning way. SPL is a reliable and stable learning mechanism, recently proposed to simulate the human learning process, which helps the AFT model automatically identify and include high-confidence samples in training, minimizing interference from high noise. Using the SPL method produces two direct advantages: (1) the utilization of censored data is further promoted; (2) the noise delivered to the model is greatly decreased. The experimental results demonstrate the effectiveness of the proposed model compared to the traditional Cox-AFT model.
Model Checking Failed Conjectures in Theorem Proving: A Case Study
NASA Technical Reports Server (NTRS)
Pike, Lee; Miner, Paul; Torres-Pomales, Wilfredo
2004-01-01
Interactive mechanical theorem proving can provide high assurance of correct design, but it can also be a slow iterative process. Much time is spent determining why a proof of a conjecture is not forthcoming. In some cases, the conjecture is false and in others, the attempted proof is insufficient. In this case study, we use the SAL family of model checkers to generate a concrete counterexample to an unproven conjecture specified in the mechanical theorem prover, PVS. The focus of our case study is the ROBUS Interactive Consistency Protocol. We combine the use of a mechanical theorem prover and a model checker to expose a subtle flaw in the protocol that occurs under a particular scenario of faults and processor states. Uncovering the flaw allows us to mend the protocol and complete its general verification in PVS.
Betti numbers of graded modules and cohomology of vector bundles
NASA Astrophysics Data System (ADS)
Eisenbud, David; Schreyer, Frank-Olaf
2009-07-01
In the remarkable paper Graded Betti numbers of Cohen-Macaulay modules and the multiplicity conjecture, Mats Boij and Jonas Soederberg conjectured that the Betti table of a Cohen-Macaulay module over a polynomial ring is a positive linear combination of Betti tables of modules with pure resolutions. We prove a strengthened form of their conjectures. Applications include a proof of the Multiplicity Conjecture of Huneke and Srinivasan and a proof of the convexity of a fan naturally associated to the Young lattice. With the same tools we show that the cohomology table of any vector bundle on projective space is a positive rational linear combination of the cohomology tables of what we call supernatural vector bundles. Using this result we give new bounds on the slope of a vector bundle in terms of its cohomology.
Machine learning in the string landscape
NASA Astrophysics Data System (ADS)
Carifio, Jonathan; Halverson, James; Krioukov, Dmitri; Nelson, Brent D.
2017-09-01
We utilize machine learning to study the string landscape. Deep data dives and conjecture generation are proposed as useful frameworks for utilizing machine learning in the landscape, and examples of each are presented. A decision tree accurately predicts the number of weak Fano toric threefolds arising from reflexive polytopes, each of which determines a smooth F-theory compactification, and linear regression generates a previously proven conjecture for the gauge group rank in an ensemble of 4/3 × 2.96 × 10^755 F-theory compactifications. Logistic regression generates a new conjecture for when E_6 arises in the large ensemble of F-theory compactifications, which is then rigorously proven. This result may be relevant for the appearance of visible sectors in the ensemble. Through conjecture generation, machine learning is useful not only for numerics, but also for rigorous results.
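As a generic, hypothetical sketch of the conjecture-generation workflow described above (not the F-theory data or the authors' code), one can fit an interpretable classifier to labeled examples and read a candidate rule off its coefficients; the synthetic features and the hidden rule below are invented.

```python
# Generic sketch of conjecture generation with an interpretable classifier.
# Synthetic stand-in data: the label follows a hidden linear rule, which the
# fitted logistic regression should approximately recover as a candidate rule.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.integers(0, 10, size=(2000, 3)).astype(float)   # three integer features
y = (X[:, 0] + 2 * X[:, 1] > 12).astype(int)            # hidden rule to rediscover

clf = LogisticRegression(C=10.0, max_iter=5000).fit(X, y)
print("training accuracy:", clf.score(X, y))
print("coefficients:", clf.coef_[0], "intercept:", clf.intercept_[0])
# Large weights on features 0 and 1 and a near-zero weight on feature 2 suggest
# a candidate rule of the form a*x0 + b*x1 > c, which one would then try to prove.
```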
The effect of hospital care on early survival after penetrating trauma.
Clark, David E; Doolittle, Peter C; Winchell, Robert J; Betensky, Rebecca A
2014-12-01
The effectiveness of emergency medical interventions can be best evaluated using time-to-event statistical methods with time-varying covariates (TVC), but this approach is complicated by uncertainty about the actual times of death. We therefore sought to evaluate the effect of hospital intervention on mortality after penetrating trauma using a method that allowed for interval censoring of the precise times of death. Data on persons with penetrating trauma due to interpersonal assault were combined from the 2008 to 2010 National Trauma Data Bank (NTDB) and the 2004 to 2010 National Violent Death Reporting System (NVDRS). Cox and Weibull proportional hazards models for survival time (t_SURV) were estimated, with TVC assumed to have constant effects for specified time intervals following hospital arrival. The Weibull model was repeated with t_SURV interval-censored to reflect uncertainty about the precise times of death, using an imputation method to accommodate interval censoring along with TVC. All models showed that mortality was increased by older age, female sex, firearm mechanism, and injuries involving the head/neck or trunk. Uncensored models showed a paradoxical increase in mortality associated with the first hour in a hospital. The interval-censored model showed that mortality was markedly reduced after admission to a hospital, with a hazard ratio (HR) of 0.68 (95% CI 0.63, 0.73) during the first 30 min declining to a HR of 0.01 after 120 min. Admission to a verified level I trauma center (compared to other hospitals in the NTDB) was associated with a further reduction in mortality, with a HR of 0.93 (95% CI 0.82, 0.97). Time-to-event models with TVC and interval censoring can be used to estimate the effect of hospital care on early mortality after penetrating trauma or other acute medical conditions and could potentially be used for interhospital comparisons.
Vock, David M; Wolfson, Julian; Bandyopadhyay, Sunayan; Adomavicius, Gediminas; Johnson, Paul E; Vazquez-Benitez, Gabriela; O'Connor, Patrick J
2016-06-01
Models for predicting the probability of experiencing various health outcomes or adverse events over a certain time frame (e.g., having a heart attack in the next 5years) based on individual patient characteristics are important tools for managing patient care. Electronic health data (EHD) are appealing sources of training data because they provide access to large amounts of rich individual-level data from present-day patient populations. However, because EHD are derived by extracting information from administrative and clinical databases, some fraction of subjects will not be under observation for the entire time frame over which one wants to make predictions; this loss to follow-up is often due to disenrollment from the health system. For subjects without complete follow-up, whether or not they experienced the adverse event is unknown, and in statistical terms the event time is said to be right-censored. Most machine learning approaches to the problem have been relatively ad hoc; for example, common approaches for handling observations in which the event status is unknown include (1) discarding those observations, (2) treating them as non-events, (3) splitting those observations into two observations: one where the event occurs and one where the event does not. In this paper, we present a general-purpose approach to account for right-censored outcomes using inverse probability of censoring weighting (IPCW). We illustrate how IPCW can easily be incorporated into a number of existing machine learning algorithms used to mine big health care data including Bayesian networks, k-nearest neighbors, decision trees, and generalized additive models. We then show that our approach leads to better calibrated predictions than the three ad hoc approaches when applied to predicting the 5-year risk of experiencing a cardiovascular adverse event, using EHD from a large U.S. Midwestern healthcare system. Copyright © 2016 Elsevier Inc. All rights reserved.
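A minimal, self-contained sketch of the IPCW idea under simplifying assumptions (random censoring, no covariates; function names are illustrative, not the paper's software): the censoring survival function G is estimated by a Kaplan-Meier fit in which the roles of event and censoring are swapped, and subjects whose 5-year status is known are weighted by 1/G.

```python
# IPCW sketch: weight subjects with known status at horizon tau by the inverse
# probability of remaining uncensored, estimated with a Kaplan-Meier fit to the
# censoring times (roles of "event" and "censoring" swapped).
import numpy as np

def km_survival(times, events):
    """Kaplan-Meier estimate; returns a right-continuous step function S(t)."""
    uniq = np.unique(times[events == 1])
    surv, s = {}, 1.0
    for u in uniq:
        at_risk = np.sum(times >= u)
        d = np.sum((times == u) & (events == 1))
        s *= 1.0 - d / at_risk
        surv[u] = s
    def S(x):
        keys = uniq[uniq <= x]
        return surv[keys[-1]] if len(keys) else 1.0
    return S

def ipcw_weights(time, event, tau):
    """Weights for the binary outcome 'event occurred by time tau'.
    Subjects censored before tau without an event get weight 0 (status unknown).
    (A refined version would evaluate G just before each event time.)"""
    G = km_survival(time, 1 - event)               # censoring distribution
    w = np.zeros(len(time))
    for i, (t, d) in enumerate(zip(time, event)):
        if d == 1 and t <= tau:
            w[i] = 1.0 / max(G(t), 1e-12)          # known: event by tau
        elif t >= tau:
            w[i] = 1.0 / max(G(tau), 1e-12)        # known: no event by tau
    return w

# Hypothetical example: exponential event times, uniform censoring.
rng = np.random.default_rng(3)
n = 500
T = rng.exponential(8.0, n)                        # true event times (years)
C = rng.uniform(0.0, 10.0, n)                      # censoring times
time, event = np.minimum(T, C), (T <= C).astype(int)
w = ipcw_weights(time, event, tau=5.0)
known = w > 0
y5 = ((time <= 5.0) & (event == 1)).astype(float)  # observed 5-year indicator
print("IPCW estimate:", np.average(y5[known], weights=w[known]))
print("true value   :", 1 - np.exp(-5.0 / 8.0))
```

The same weights can then be passed to any learner that accepts per-sample weights (for example, the sample_weight argument of scikit-learn estimators), which is the sense in which IPCW plugs into existing machine learning algorithms.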
A note on 4D heterotic string vacua, FI-terms and the swampland
NASA Astrophysics Data System (ADS)
Aldazabal, Gerardo; Ibáñez, Luis E.
2018-07-01
We present a conjecture for the massless sector of perturbative 4D N = 1 heterotic (0, 2) string vacua, including U(1)^n gauge symmetries, one of them possibly anomalous (as in standard heterotic compactifications). Mathematically it states that the positive hull generated by the charges of the massless chiral multiplets spans a sublattice of the full charge lattice. We have tested this conjecture in many heterotic N = 1 compactifications in 4D. Our motivation for this conjecture is that it allows us to understand a very old puzzle in (0, 2) N = 1 heterotic compactifications with an anomalous U(1). The conjecture guarantees that there is always a D-flat direction cancelling the FI-term and restoring N = 1 SUSY in a nearby vacuum. This is something that has been verified in the past in a large number of cases, but whose origin has remained obscure for decades. We argue that the existence of a lattice generated by massless states guarantees the instability of heterotic non-BPS extremal black holes, as required by Weak Gravity Conjecture arguments. Thus the pervasive existence of these nearby FI-cancelling vacua would be connected with WGC arguments.
Gravitational entropy and the cosmological no-hair conjecture
NASA Astrophysics Data System (ADS)
Bolejko, Krzysztof
2018-04-01
The gravitational entropy and no-hair conjectures seem to predict contradictory future states of our Universe. The growth of the gravitational entropy is associated with the growth of inhomogeneity, while the no-hair conjecture argues that a universe dominated by dark energy should asymptotically approach a homogeneous and isotropic de Sitter state. The aim of this paper is to study these two conjectures. The investigation is based on the Simsilun simulation, which simulates the universe using the approximation of the Silent Universe. The Silent Universe is a solution to the Einstein equations that assumes irrotational, nonviscous, and insulated dust, with vanishing magnetic part of the Weyl curvature. The initial conditions for the Simsilun simulation are sourced from the Millennium simulation, which results in a realistically appearing, but relativistic in origin, simulation of a universe. The Simsilun simulation is evolved from the early universe (t = 25 Myr) until the far future (t = 1000 Gyr). The results of this investigation show that both conjectures are correct. On global scales, a universe with a positive cosmological constant and nonpositive spatial curvature does indeed approach the de Sitter state. At the same time it keeps generating gravitational entropy.
MEDIAN-BASED INCREMENTAL COST-EFFECTIVENESS RATIOS WITH CENSORED DATA
Bang, Heejung; Zhao, Hongwei
2016-01-01
Cost-effectiveness is an essential part of treatment evaluation, in addition to effectiveness. In the cost-effectiveness analysis, a measure called the incremental cost-effectiveness ratio (ICER) is widely utilized, and the mean cost and the mean (quality-adjusted) life years have served as norms to summarize cost and effectiveness for a study population. Recently, the median-based ICER was proposed for complementary or sensitivity analysis purposes. In this paper, we extend this method when some data are censored. PMID:26010599
Prevalence Incidence Mixture Models
The R package and webtool fit Prevalence Incidence Mixture models to left-censored and irregularly interval-censored time-to-event data that are commonly found in screening cohorts assembled from electronic health records. Absolute and relative risk can be estimated for simple random sampling and stratified sampling (the two approaches of a superpopulation and a finite population are supported for target populations). Non-parametric (absolute risks only), semi-parametric, weakly-parametric (using B-splines), and some fully parametric (such as the logistic-Weibull) models are supported.
A maximum pseudo-profile likelihood estimator for the Cox model under length-biased sampling
Huang, Chiung-Yu; Qin, Jing; Follmann, Dean A.
2012-01-01
This paper considers semiparametric estimation of the Cox proportional hazards model for right-censored and length-biased data arising from prevalent sampling. To exploit the special structure of length-biased sampling, we propose a maximum pseudo-profile likelihood estimator, which can handle time-dependent covariates and is consistent under covariate-dependent censoring. Simulation studies show that the proposed estimator is more efficient than its competitors. A data analysis illustrates the methods and theory. PMID:23843659
Brown-York quasilocal energy in Lanczos-Lovelock gravity and black hole horizons
NASA Astrophysics Data System (ADS)
Chakraborty, Sumanta; Dadhich, Naresh
2015-12-01
A standard candidate for quasilocal energy in general relativity is the Brown-York energy, which is essentially a two dimensional surface integral of the extrinsic curvature on the two-boundary of a spacelike hypersurface referenced to flat spacetime. Several years back one of us had conjectured that the black hole horizon is defined by equipartition of gravitational and non-gravitational energy. By employing the above definition of quasilocal Brown-York energy, we have verified the equipartition conjecture for static charged and charged axi-symmetric black holes in general relativity. We have further generalized the Brown-York formalism to all orders in Lanczos-Lovelock theories of gravity and have verified the conjecture for pure Lovelock charged black hole in all even d = 2 m + 2 dimensions, where m is the degree of Lovelock action. It turns out that the equipartition conjecture works only for pure Lovelock, and not for Einstein-Lovelock black holes.
Small black holes and near-extremal CFTs
Benjamin, Nathan; Dyer, Ethan; Fitzpatrick, A. Liam; ...
2016-08-02
Pure theories of AdS_3 quantum gravity are conjectured to be dual to CFTs with sparse spectra of light primary operators. The sparsest possible spectrum consistent with modular invariance includes only black hole states above the vacuum. Witten conjectured the existence of a family of extremal CFTs, which realize this spectrum for all admissible values of the central charge. We consider the quantum corrections to the classical spectrum, and propose a specific modification of Witten’s conjecture which takes into account the existence of “small” black hole states. These have zero classical horizon area, with a calculable entropy attributed solely to loop effects. Lastly, our conjecture passes various consistency checks, especially when generalized to include theories with supersymmetry. In theories with N = 2 supersymmetry, this “near-extremal CFT” proposal precisely evades the no-go results of Gaberdiel et al.
Gillaizeau, Florence; Sénage, Thomas; Le Borgne, Florent; Le Tourneau, Thierry; Roussel, Jean-Christian; Leffondrè, Karen; Porcher, Raphaël; Giraudeau, Bruno; Dantan, Etienne; Foucher, Yohann
2018-04-15
Multistate models with interval-censored data, such as the illness-death model, are still not used to any considerable extent in medical research, despite the significant literature demonstrating their advantages compared to usual survival models. Possible explanations are their limited availability in classical statistical software or, when they are available, the limitations of multivariable modelling for taking confounding into consideration. In this paper, we propose a strategy based on propensity scores that allows population causal effects to be estimated: inverse probability weighting in the illness-death semi-Markov model with interval-censored data. Using simulated data, we validated the performance of the proposed approach. We also illustrate the usefulness of the method with an application evaluating the relationship between the inadequate size of an aortic bioprosthesis and its degeneration and/or patient death. We have updated the R package multistate to facilitate the future use of this method. Copyright © 2017 John Wiley & Sons, Ltd.
Dugué, Audrey Emmanuelle; Pulido, Marina; Chabaud, Sylvie; Belin, Lisa; Gal, Jocelyn
2016-12-01
We describe how to estimate progression-free survival while dealing with interval-censored data in the setting of clinical trials in oncology. Three procedures with SAS and R statistical software are described: one allowing for a nonparametric maximum likelihood estimation of the survival curve using the EM-ICM (Expectation and Maximization-Iterative Convex Minorant) algorithm as described by Wellner and Zhan in 1997; a sensitivity analysis procedure in which the progression time is assigned (i) at the midpoint, (ii) at the upper limit (reflecting the standard analysis when the progression time is assigned at the first radiologic exam showing progressive disease), or (iii) at the lower limit of the censoring interval; and finally, two multiple imputation procedures, considering either a uniform or the nonparametric maximum likelihood estimation (NPMLE) distribution. Clin Cancer Res; 22(23); 5629-35. ©2016 AACR.
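The single-assignment sensitivity analysis described above can be sketched in a few lines (a simplified stand-in for the SAS/R procedures in the paper); the censoring intervals below are hypothetical.

```python
# Sketch of the single-imputation sensitivity analysis for interval-censored
# progression times: assign each event to the midpoint, lower, or upper limit
# of its censoring interval and recompute a simple summary under each rule.
import numpy as np

# Hypothetical intervals (L, R]: progression detected between two visits (months).
L = np.array([2.0, 4.0, 1.0, 6.0, 3.0])   # last visit without progression
R = np.array([4.0, 6.0, 3.0, 9.0, 5.0])   # first visit showing progression

rules = {
    "midpoint": (L + R) / 2,
    "lower": L,
    "upper": R,      # the conventional analysis: date of the first positive exam
}
for name, t in rules.items():
    print(f"{name:>8}: median progression time = {np.median(t):.1f} months")
```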
Sun, Libo; Wan, Ying
2018-04-22
Conditional power and predictive power provide estimates of the probability of success at the end of the trial based on the information from the interim analysis. The observed value of the time-to-event endpoint at the interim analysis could be biased for the true treatment effect due to early censoring, leading to a biased estimate of conditional power and predictive power. In such cases, the estimates and inference for this right-censored primary endpoint are enhanced by incorporating a fully observed auxiliary variable. We assume a bivariate normal distribution of the transformed primary variable and a correlated auxiliary variable. Simulation studies are conducted that not only show enhanced conditional power and predictive power but also provide the framework for a more efficient futility interim analysis, in terms of improved estimator accuracy, a smaller inflation in type II error, and an optimal timing for such an analysis. We also illustrate the new approach with a real clinical trial example. Copyright © 2018 John Wiley & Sons, Ltd.
Wu, Cai; Li, Liang
2018-05-15
This paper focuses on quantifying and estimating the predictive accuracy of prognostic models for time-to-event outcomes with competing events. We consider the time-dependent discrimination and calibration metrics, including the receiver operating characteristics curve and the Brier score, in the context of competing risks. To address censoring, we propose a unified nonparametric estimation framework for both discrimination and calibration measures, by weighting the censored subjects with the conditional probability of the event of interest given the observed data. The proposed method can be extended to time-dependent predictive accuracy metrics constructed from a general class of loss functions. We apply the methodology to a data set from the African American Study of Kidney Disease and Hypertension to evaluate the predictive accuracy of a prognostic risk score in predicting end-stage renal disease, accounting for the competing risk of pre-end-stage renal disease death, and evaluate its numerical performance in extensive simulation studies. Copyright © 2018 John Wiley & Sons, Ltd.
Wei, Shaoceng; Kryscio, Richard J.
2015-01-01
Continuous-time multi-state stochastic processes are useful for modeling the flow of subjects from intact cognition to dementia, with mild cognitive impairment and global impairment as intervening transient cognitive states and death as a competing risk. Each subject's cognition is assessed periodically, resulting in interval censoring for the cognitive states, while death without dementia is not interval censored. Since back transitions among the transient states are possible, Markov chains are often applied to this type of panel data. In this manuscript we apply a semi-Markov process in which we assume that the waiting times are Weibull distributed except for transitions from the baseline state, which are exponentially distributed, and in which we assume no additional changes in cognition occur between two assessments. We implement a quasi-Monte Carlo (QMC) method to calculate the higher order integration needed for likelihood estimation. We apply our model to a real dataset, the Nun Study, a cohort of 461 participants. PMID:24821001
Wei, Shaoceng; Kryscio, Richard J
2016-12-01
Continuous-time multi-state stochastic processes are useful for modeling the flow of subjects from intact cognition to dementia with mild cognitive impairment and global impairment as intervening transient cognitive states and death as a competing risk. Each subject's cognition is assessed periodically resulting in interval censoring for the cognitive states while death without dementia is not interval censored. Since back transitions among the transient states are possible, Markov chains are often applied to this type of panel data. In this manuscript, we apply a semi-Markov process in which we assume that the waiting times are Weibull distributed except for transitions from the baseline state, which are exponentially distributed and in which we assume no additional changes in cognition occur between two assessments. We implement a quasi-Monte Carlo (QMC) method to calculate the higher order integration needed for likelihood estimation. We apply our model to a real dataset, the Nun Study, a cohort of 461 participants. © The Author(s) 2014.
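A generic sketch of the quasi-Monte Carlo ingredient (not the Nun Study likelihood): a scrambled Sobol sequence replaces pseudo-random points when approximating a multi-dimensional integral. This assumes scipy.stats.qmc (SciPy 1.7 or later); the test integrand is invented.

```python
# Generic quasi-Monte Carlo sketch: estimate a 4-dimensional integral over the
# unit cube with a scrambled Sobol sequence and compare with plain Monte Carlo.
import numpy as np
from scipy.stats import qmc

def integrand(u):
    return np.exp(-u.sum(axis=1))                  # smooth test integrand on [0,1]^4

true_value = (1 - np.exp(-1.0)) ** 4               # product of the 1-d integrals

n = 2 ** 12
sobol = qmc.Sobol(d=4, scramble=True, seed=4).random_base2(m=12)
plain = np.random.default_rng(4).random((n, 4))

qmc_est, mc_est = integrand(sobol).mean(), integrand(plain).mean()
print("QMC estimate:", qmc_est, "abs error:", abs(qmc_est - true_value))
print("MC estimate :", mc_est, "abs error:", abs(mc_est - true_value))
# For smooth integrands the QMC error is typically (though not always) smaller.
```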
Multivariate longitudinal data analysis with censored and intermittent missing responses.
Lin, Tsung-I; Lachos, Victor H; Wang, Wan-Lun
2018-05-08
The multivariate linear mixed model (MLMM) has emerged as an important analytical tool for longitudinal data with multiple outcomes. However, the analysis of multivariate longitudinal data could be complicated by the presence of censored measurements because of a detection limit of the assay in combination with unavoidable missing values arising when subjects miss some of their scheduled visits intermittently. This paper presents a generalization of the MLMM approach, called the MLMM-CM, for a joint analysis of the multivariate longitudinal data with censored and intermittent missing responses. A computationally feasible expectation maximization-based procedure is developed to carry out maximum likelihood estimation within the MLMM-CM framework. Moreover, the asymptotic standard errors of fixed effects are explicitly obtained via the information-based method. We illustrate our methodology by using simulated data and a case study from an AIDS clinical trial. Experimental results reveal that the proposed method is able to provide more satisfactory performance as compared with the traditional MLMM approach. Copyright © 2018 John Wiley & Sons, Ltd.
Cohn, Timothy A.
2005-01-01
This paper presents an adjusted maximum likelihood estimator (AMLE) that can be used to estimate fluvial transport of contaminants, like phosphorus, that are subject to censoring because of analytical detection limits. The AMLE is a generalization of the widely accepted minimum variance unbiased estimator (MVUE), and Monte Carlo experiments confirm that it shares essentially all of the MVUE's desirable properties, including high efficiency and negligible bias. In particular, the AMLE exhibits substantially less bias than alternative censored‐data estimators such as the MLE (Tobit) or the MLE followed by a jackknife. As with the MLE and the MVUE the AMLE comes close to achieving the theoretical Frechet‐Cramér‐Rao bounds on its variance. This paper also presents a statistical framework, applicable to both censored and complete data, for understanding and estimating the components of uncertainty associated with load estimates. This can serve to lower the cost and improve the efficiency of both traditional and real‐time water quality monitoring.
Costs of cervical cancer treatment: population-based estimates from Ontario
Pendrith, C.; Thind, A.; Zaric, G.S.; Sarma, S.
2016-01-01
Objectives The objectives of the present study were to estimate the overall and specific medical care costs associated with cervical cancer in the first 5 years after diagnosis in Ontario. Methods Incident cases of invasive cervical cancer during 2007–2010 were identified from the Ontario Cancer Registry and linked to administrative databases held at the Institute for Clinical Evaluative Sciences. Mean costs in 2010 Canadian dollars were estimated using the arithmetic mean and estimators that adjust for censored data. Results Mean age of the patients in the study cohort (779 cases) was 49.3 years. The mean overall medical care cost was $39,187 [standard error (se): $1,327] in the 1st year after diagnosis. Costs in year 1 ranged from $34,648 (se: $1,275) for those who survived at least 1 year to $69,142 (se: $4,818) for those who died from cervical cancer within 1 year. At 5 years after diagnosis, the mean overall unadjusted cost was $63,131 (se: $3,131), and the cost adjusted for censoring was $68,745 (se: $2,963). Inpatient hospitalizations and cancer-related care were the two largest components of cancer treatment costs. Conclusions We found that the estimated mean costs that did not account for censoring were consistently undervalued, highlighting the importance of estimates based on censoring-adjusted costs in cervical cancer. Our results are reliable for estimating the economic burden of cervical cancer and the cost-effectiveness of cervical cancer prevention strategies. PMID:27122978
Community drinking water quality monitoring data: utility for public health research and practice.
Jones, Rachael M; Graber, Judith M; Anderson, Robert; Rockne, Karl; Turyk, Mary; Stayner, Leslie T
2014-01-01
Environmental Public Health Tracking (EPHT) tracks the occurrence and magnitude of environmental hazards and associated adverse health effects over time. The EPHT program has formally expanded its scope to include finished drinking water quality. Our objective was to describe the features, strengths, and limitations of using finished drinking water quality data from community water systems (CWSs) for EPHT applications, focusing on atrazine and nitrogen compounds in 8 Midwestern states. Water quality data were acquired after meeting with state partners and reviewed and merged for analysis. Data and the coding of variables, particularly with respect to censored results (nondetects), were not standardized between states. Monitoring frequency varied between CWSs and between atrazine and nitrates, but this was in line with regulatory requirements. Cumulative distributions of all contaminants were not the same in all states (Peto-Prentice test P < .001). Atrazine results were highly censored in all states (76.0%-99.3%); higher concentrations were associated with increased measurement frequency and surface water as the CWS source water type. Nitrate results showed substantial state-to-state variability in censoring (20.5%-100%) and in associations between concentrations and the CWS source water type. Statistical analyses of these data are challenging due to high rates of censoring and uncertainty about the appropriateness of parametric assumptions for time-series data. Although monitoring frequency was consistent with regulations, the magnitude of time gaps coupled with uncertainty about CWS service areas may limit linkage with health outcome data.
Variable selection in a flexible parametric mixture cure model with interval-censored data.
Scolas, Sylvie; El Ghouch, Anouar; Legrand, Catherine; Oulhaj, Abderrahim
2016-03-30
In standard survival analysis, it is generally assumed that every individual will experience someday the event of interest. However, this is not always the case, as some individuals may not be susceptible to this event. Also, in medical studies, it is frequent that patients come to scheduled interviews and that the time to the event is only known to occur between two visits. That is, the data are interval-censored with a cure fraction. Variable selection in such a setting is of outstanding interest. Covariates impacting the survival are not necessarily the same as those impacting the probability to experience the event. The objective of this paper is to develop a parametric but flexible statistical model to analyze data that are interval-censored and include a fraction of cured individuals when the number of potential covariates may be large. We use the parametric mixture cure model with an accelerated failure time regression model for the survival, along with the extended generalized gamma for the error term. To overcome the issue of non-stable and non-continuous variable selection procedures, we extend the adaptive LASSO to our model. By means of simulation studies, we show good performance of our method and discuss the behavior of estimates with varying cure and censoring proportion. Lastly, our proposed method is illustrated with a real dataset studying the time until conversion to mild cognitive impairment, a possible precursor of Alzheimer's disease. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
Martin, Emma C; Aarons, Leon; Yates, James W T
2016-07-01
Xenograft studies are commonly used to assess the efficacy of new compounds and characterise their dose-response relationship. Analysis often involves comparing the final tumour sizes across dose groups. This can cause bias, as often in xenograft studies a tumour burden limit (TBL) is imposed for ethical reasons, leading to the animals with the largest tumours being excluded from the final analysis. This means the average tumour size, particularly in the control group, is underestimated, leading to an underestimate of the treatment effect. Four methods to account for dropout due to the TBL are proposed, which use all the available data instead of only final observations: modelling, pattern mixture models, treating dropouts as censored using the M3 method and joint modelling of tumour growth and dropout. The methods were applied to both a simulated data set and a real example. All four proposed methods led to an improvement in the estimate of treatment effect in the simulated data. The joint modelling method performed most strongly, with the censoring method also providing a good estimate of the treatment effect, but with higher uncertainty. In the real data example, the dose-response estimated using the censoring and joint modelling methods was higher than the very flat curve estimated from average final measurements. Accounting for dropout using the proposed censoring or joint modelling methods allows the treatment effect to be recovered in studies where it may have been obscured due to dropout caused by the TBL.
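The censoring idea can be sketched with a deliberately simplified log-linear growth model fitted by maximum likelihood: observations above the tumour burden limit contribute P(log V > log TBL) rather than a density term. This is only a stand-in for the M3-style treatment described above, which in the paper sits inside richer mixed-effects and joint models; the simulated data and all parameter values are hypothetical.

```python
# Sketch of treating tumour sizes above the burden limit (TBL) as right-censored
# in a maximum-likelihood fit of a simple log-linear growth model:
#   log V(t) = a + b*t + noise;  values above the TBL contribute
#   P(log V > log TBL) instead of a density term.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(5)
t = np.tile(np.arange(0.0, 21.0, 3.0), 8)                  # days, 8 animals
logv = 1.0 + 0.15 * t + rng.normal(0.0, 0.25, t.size)      # simulated growth
TBL = np.exp(3.2)                                          # hypothetical limit
censored = logv > np.log(TBL)                              # only "> TBL" is recorded

def neg_log_lik(params):
    a, b, log_sd = params
    sd = np.exp(log_sd)
    mean = a + b * t
    ll = norm.logpdf(logv[~censored], mean[~censored], sd).sum()
    ll += norm.logsf(np.log(TBL), mean[censored], sd).sum()   # censored terms
    return -ll

fit = minimize(neg_log_lik, x0=[0.0, 0.1, np.log(0.3)], method="Nelder-Mead")
naive_b = np.polyfit(t[~censored], logv[~censored], 1)[0]     # drops censored points
print("censoring-aware growth rate:", fit.x[1])
print("naive growth rate (censored points dropped):", naive_b)
```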
Mediation Analysis with Survival Outcomes: Accelerated Failure Time vs. Proportional Hazards Models.
Gelfand, Lois A; MacKinnon, David P; DeRubeis, Robert J; Baraldi, Amanda N
2016-01-01
Survival time is an important type of outcome variable in treatment research. Currently, limited guidance is available regarding performing mediation analyses with survival outcomes, which generally do not have normally distributed errors, and contain unobserved (censored) events. We present considerations for choosing an approach, using a comparison of semi-parametric proportional hazards (PH) and fully parametric accelerated failure time (AFT) approaches for illustration. We compare PH and AFT models and procedures in their integration into mediation models and review their ability to produce coefficients that estimate causal effects. Using simulation studies modeling Weibull-distributed survival times, we compare statistical properties of mediation analyses incorporating PH and AFT approaches (employing SAS procedures PHREG and LIFEREG, respectively) under varied data conditions, some including censoring. A simulated data set illustrates the findings. AFT models integrate more easily than PH models into mediation models. Furthermore, mediation analyses incorporating LIFEREG produce coefficients that can estimate causal effects, and demonstrate superior statistical properties. Censoring introduces bias in the coefficient estimate representing the treatment effect on outcome-underestimation in LIFEREG, and overestimation in PHREG. With LIFEREG, this bias can be addressed using an alternative estimate obtained from combining other coefficients, whereas this is not possible with PHREG. When Weibull assumptions are not violated, there are compelling advantages to using LIFEREG over PHREG for mediation analyses involving survival-time outcomes. Irrespective of the procedures used, the interpretation of coefficients, effects of censoring on coefficient estimates, and statistical properties should be taken into account when reporting results.
Spectral evolution of active galactic nuclei: A unified description of the X-ray and gamma
NASA Technical Reports Server (NTRS)
Leiter, D.; Boldt, E.
1982-01-01
A model for spectral evolution is presented whereby active galactic nuclei (AGN) of the type observed individually emerge from an earlier stage at z ≈ 4 in which they are the thermal X-ray sources responsible for most of the cosmic X-ray background (CXB). The conjecture is pursued that these precursor objects are initially supermassive Schwarzschild black holes with accretion disks radiating near the Eddington luminosity limit. It is noted that after approximately 10^8 years these central black holes are spun up to a canonical Kerr equilibrium state (a/M = 0.998; Thorne 1974), and it is shown how they can then lead to spectral evolution involving non-thermal emission extending to gamma rays, at the expense of reduced thermal disk radiation. The major portion of the CXB remaining after the contribution of usual AGN is considered, while a superposition of AGN sources at z 1 can account for the gamma-ray background. Extensive X-ray measurements carried out with the HEAO 1 and 2 missions, as well as gamma-ray and optical data, are shown to compare favorably with principal features of this model.
Conjectures on the relations of linking and causality in causally simple spacetimes
NASA Astrophysics Data System (ADS)
Chernov, Vladimir
2018-05-01
We formulate the generalization of the Legendrian Low conjecture of Natario and Tod (proved earlier by Nemirovski and myself) to the case of causally simple spacetimes. We prove a weakened version of the corresponding statement. In all known examples, a causally simple spacetime can be conformally embedded as an open subset into some globally hyperbolic spacetime, and the space of light rays of the causally simple spacetime is an open submanifold of the space of light rays of the globally hyperbolic one. If this is always the case, this provides an approach to solving the conjectures relating causality and linking in causally simple spacetimes.
Doran-Harder-Thompson Conjecture via SYZ Mirror Symmetry: Elliptic Curves
NASA Astrophysics Data System (ADS)
Kanazawa, Atsushi
2017-04-01
We prove the Doran-Harder-Thompson conjecture in the case of elliptic curves by using ideas from SYZ mirror symmetry. The conjecture claims that when a Calabi-Yau manifold X degenerates to a union of two quasi-Fano manifolds (Tyurin degeneration), a mirror Calabi-Yau manifold of X can be constructed by gluing the two mirror Landau-Ginzburg models of the quasi-Fano manifolds. The two crucial ideas in our proof are to obtain a complex structure by gluing the underlying affine manifolds and to construct the theta functions from the Landau-Ginzburg superpotentials.
Symmetric moment problems and a conjecture of Valent
NASA Astrophysics Data System (ADS)
Berg, C.; Szwarc, R.
2017-03-01
In 1998 Valent made conjectures about the order and type of certain indeterminate Stieltjes moment problems associated with birth and death processes which have polynomial birth and death rates of degree p ≥ 3. Romanov recently proved that the order is 1/p, as conjectured. We prove that the type with respect to the order is related to certain multi-zeta values and that this type belongs to an interval which also contains the conjectured value. This proves that the conjecture about the type is asymptotically correct as p → ∞. The main idea is to obtain estimates for the order and type of symmetric indeterminate Hamburger moment problems when the orthonormal polynomials P_n and those of the second kind Q_n satisfy P_{2n}^2(0) ∼ c_1 n^{-1/β} and Q_{2n-1}^2(0) ∼ c_2 n^{-1/α}, where 0 < α, β < 1 may be different, and c_1 and c_2 are positive constants. In this case the order of the moment problem is majorized by the harmonic mean of α and β. Here α_n ∼ β_n means that α_n/β_n → 1. This also leads to a new proof of Romanov's theorem that the order is 1/p. Bibliography: 19 titles.
Empirical likelihood-based confidence intervals for mean medical cost with censored data.
Jeyarajah, Jenny; Qin, Gengsheng
2017-11-10
In this paper, we propose empirical likelihood methods based on influence function and jackknife techniques for constructing confidence intervals for mean medical cost with censored data. We conduct a simulation study to compare the coverage probabilities and interval lengths of our proposed confidence intervals with that of the existing normal approximation-based confidence intervals and bootstrap confidence intervals. The proposed methods have better finite-sample performances than existing methods. Finally, we illustrate our proposed methods with a relevant example. Copyright © 2017 John Wiley & Sons, Ltd.
American Mathematics from 1940 to the Day Before Yesterday
ERIC Educational Resources Information Center
Ewing, J. H.; And Others
1976-01-01
Ten recent results in pure mathematics are described, covering the continuum hypothesis, Diophantine equations, simple groups, resolution of singularities, Weil conjectures, Lie groups, Poincare conjecture, exotic spheres, differential equations, and the index theorem. Proofs are omitted, but references are provided. (DT)
Non-censored rib fracture data during frontal PMHS sled tests.
Kemper, Andrew R; Beeman, Stephanie M; Porta, David J; Duma, Stefan M
2016-09-01
The purpose of this study was to obtain non-censored rib fracture data due to three-point belt loading during dynamic frontal post-mortem human surrogate (PMHS) sled tests. The PMHS responses were then compared to matched tests performed using the Hybrid-III 50(th) percentile male ATD. Matched dynamic frontal sled tests were performed on two male PMHSs, which were approximately 50(th) percentile height and weight, and the Hybrid-III 50(th) percentile male ATD. The sled pulse was designed to match the vehicle acceleration of a standard sedan during a FMVSS-208 40 kph test. Each subject was restrained with a 4 kN load limiting, driver-side, three-point seatbelt. A 59-channel chestband, aligned at the nipple line, was used to quantify the chest contour, anterior-posterior sternum deflection, and maximum anterior-posterior chest deflection for all test subjects. The internal sternum deflection of the ATD was quantified with the sternum potentiometer. For the PMHS tests, a total of 23 single-axis strain gages were attached to the bony structures of the thorax, including the ribs, sternum, and clavicle. In order to create a non-censored data set, the time history of each strain gage was analyzed to determine the timing of each rib fracture and corresponding timing of each AIS level (AIS = 1, 2, 3, etc.) with respect to chest deflection. Peak sternum deflection for PMHS 1 and PMHS 2 were 48.7 mm (19.0%) and 36.7 mm (12.2%), respectively. The peak sternum deflection for the ATD was 20.8 mm when measured by the chest potentiometer and 34.4 mm (12.0%) when measured by the chestband. Although the measured ATD sternum deflections were found to be well below the current thoracic injury criterion (63 mm) specified for the ATD in FMVSS-208, both PMHSs sustained AIS 3+ thoracic injuries. For all subjects, the maximum chest deflection measured by the chestband occurred to the right of the sternum and was found to be 83.0 mm (36.0%) for PMHS 1, 60.6 mm (23.9%) for PMHS 2, and 56.3 mm (20.0%) for the ATD. The non-censored rib fracture data in the current study (n = 2 PMHS) in conjunction with the non-censored rib fracture data from two previous table-top studies (n = 4 PMHS) show that AIS 3+ injury timing occurs prior to peak sternum compression, prior to peak maximum chest compression, and at lower compressions than might be suggested by current PMHS thoracic injury criteria developed using censored rib fracture data. In addition, the maximum chest deflection results showed a more reasonable correlation between deflection, rib fracture timing, and injury severity than sternum deflection. Overall, these data provide compelling empirical evidence that suggests a more conservative thoracic injury criterion could potentially be developed based on non-censored rib fracture data with additional testing performed over a wider range of subjects and loading conditions.
Bonn, Bernadine A.
2008-01-01
A long-term method detection level (LT-MDL) and laboratory reporting level (LRL) are used by the U.S. Geological Survey's National Water Quality Laboratory (NWQL) when reporting results from most chemical analyses of water samples. Changing to this method provided data users with additional information about their data and often resulted in more reported values in the low concentration range. Before this method was implemented, many of these values would have been censored. The use of the LT-MDL and LRL presents some challenges for the data user. Interpreting data in the low concentration range increases the need for adequate quality assurance because even small contamination or recovery problems can be relatively large compared to concentrations near the LT-MDL and LRL. In addition, the definition of the LT-MDL, as well as the inclusion of low values, can result in complex data sets with multiple censoring levels and reported values that are less than a censoring level. Improper interpretation or statistical manipulation of low-range results in these data sets can result in bias and incorrect conclusions. This document is designed to help data users use and interpret data reported with the LT-MDL/LRL method. The calculation and application of the LT-MDL and LRL are described. This document shows how to extract statistical information from the LT-MDL and LRL and how to use that information in USGS investigations, such as assessing the quality of field data, interpreting field data, and planning data collection for new projects. A set of 19 detailed examples is included in this document to help data users think about their data and properly interpret low-range data without introducing bias. Although this document is not meant to be a comprehensive resource of statistical methods, several useful methods of analyzing censored data are demonstrated, including Regression on Order Statistics and Kaplan-Meier Estimation. These two statistical methods handle complex censored data sets without resorting to substitution, thereby avoiding a common source of bias and inaccuracy.
Cure rate model with interval censored data.
Kim, Yang-Jin; Jhun, Myoungshic
2008-01-15
In cancer trials, a significant fraction of patients can be cured, that is, the disease is completely eliminated, so that it never recurs. In general, treatments are developed both to increase the patients' chances of being cured and to prolong the survival time among non-cured patients. A cure rate model represents a combination of a cure fraction and a survival model, and can be applied to many clinical studies over several types of cancer. In this article, the cure rate model is considered for interval-censored data, in which the event time of interest is only known to lie between two observation time points. Interval-censored data commonly occur in studies of diseases that often progress without symptoms, requiring clinical evaluation for detection (Encyclopedia of Biostatistics. Wiley: New York, 1998; 2090-2095). In our study, an approximate likelihood approach suggested by Goetghebeur and Ryan (Biometrics 2000; 56:1139-1144) is used to derive the likelihood for interval-censored data. In addition, a frailty model is introduced to characterize the association between the cure fraction and the survival model. In particular, the positive association between the cure fraction and the survival time is incorporated by imposing a common normal frailty effect. The EM algorithm is used to estimate parameters, and a multiple imputation based on the profile likelihood is adopted for variance estimation. The approach is applied to a smoking cessation study in which the event of interest is a smoking relapse and several covariates, including an intensive care treatment, are evaluated and found to be effective for both the occurrence of relapse and the non-smoking duration. Copyright (c) 2007 John Wiley & Sons, Ltd.
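A simplified parametric analogue of the interval-censored cure model can be written down directly (Weibull latency and a logistic cure fraction, with no frailty, approximate likelihood, or EM step, unlike the paper): an event bracketed in (L, R] contributes (1 − π)(S(L) − S(R)), and a subject right-censored at L contributes π + (1 − π)S(L). The simulated visit schedule and parameter values below are hypothetical.

```python
# Simplified parametric mixture cure model with interval-censored data
# (Weibull latency, logistic cure fraction; no frailty or EM, unlike the paper):
#   interval (L, R]      -> (1 - pi) * (S(L) - S(R))
#   right-censored at L  -> pi + (1 - pi) * S(L)
import numpy as np
from scipy.optimize import minimize

def weibull_surv(t, shape, scale):
    return np.exp(-(t / scale) ** shape)

def neg_log_lik(params, L, R):
    logit_pi, log_shape, log_scale = params
    pi = 1.0 / (1.0 + np.exp(-logit_pi))
    shape, scale = np.exp(log_shape), np.exp(log_scale)
    right_cens = np.isinf(R)
    SL = weibull_surv(L, shape, scale)
    ll = np.log(pi + (1 - pi) * SL[right_cens] + 1e-300).sum()
    SR = weibull_surv(R[~right_cens], shape, scale)
    ll += np.log((1 - pi) * (SL[~right_cens] - SR) + 1e-300).sum()
    return -ll

# Hypothetical data: relapse detected between visits L and R; R = inf means no
# relapse had been observed by the last scheduled visit (possibly cured).
rng = np.random.default_rng(6)
n, cure_prob = 300, 0.4
event_t = rng.weibull(1.5, n) * 10.0
event_t[rng.random(n) < cure_prob] = np.inf                # cured: never relapse
visits = np.arange(2.0, 26.0, 2.0)                         # scheduled visit times
L = np.array([visits[visits < t].max() if (visits < t).any() else 0.0 for t in event_t])
R = np.array([visits[visits >= t].min() if (visits >= t).any() else np.inf for t in event_t])

est = minimize(neg_log_lik, x0=[0.0, 0.0, np.log(8.0)], args=(L, R), method="Nelder-Mead")
print("estimated cure fraction:", 1.0 / (1.0 + np.exp(-est.x[0])))  # truth ~0.4
```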
Semiparametric regression analysis of failure time data with dependent interval censoring.
Chen, Chyong-Mei; Shen, Pao-Sheng
2017-09-20
Interval-censored failure-time data arise when subjects are examined or observed periodically such that the failure time of interest is not observed exactly but only known to be bracketed between two adjacent observation times. The commonly used approaches assume that the examination times and the failure time are independent or conditionally independent given covariates. In many practical applications, patients who are already in poor health or have a weak immune system before treatment usually tend to visit physicians more often after treatment than those with better health or immune systems. In this situation, the visiting rate is positively correlated with the risk of failure due to the health status, which results in dependent interval-censored data. While some measurable factors affecting health status, such as age, gender, and physical symptoms, can be included in the covariates, some health-related latent variables cannot be observed or measured. To deal with dependent interval censoring involving an unobserved latent variable, we characterize the visiting/examination process as a recurrent event process and propose a joint frailty model to account for the association of the failure time and the visiting process. A shared gamma frailty is incorporated into the Cox model and the proportional intensity model for the failure time and visiting process, respectively, in a multiplicative way. We propose a semiparametric maximum likelihood approach for estimating model parameters and show the asymptotic properties, including consistency and weak convergence. Extensive simulation studies are conducted and a data set of bladder cancer is analyzed for illustrative purposes. Copyright © 2017 John Wiley & Sons, Ltd.
Analysis of survival in breast cancer patients by using different parametric models
NASA Astrophysics Data System (ADS)
Enera Amran, Syahila; Asrul Afendi Abdullah, M.; Kek, Sie Long; Afiqah Muhamad Jamil, Siti
2017-09-01
In biomedical applications and clinical trials, right censoring often arises when studying time-to-event data. In this case, some individuals are still alive at the end of the study or are lost to follow-up at a certain time. Handling the censored data properly is important in order to prevent biased information in the analysis. Therefore, this study was carried out to analyze right-censored data with three different parametric models: the exponential, Weibull, and log-logistic models. Data on breast cancer patients from Hospital Sultan Ismail, Johor Bahru, from 30 December 2008 until 15 February 2017 were used to illustrate right-censored data. The covariates included in this study are the survival time t of each breast cancer patient, the patient's age X1, and the treatment given to the patient X2. In order to determine the best parametric model for analysing the survival of breast cancer patients, the performance of each model was compared based on the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), and the log-likelihood value using the statistical software R. When analysing the breast cancer data, all three distributions were consistent with the data, with the cumulative hazard function plot resembling a straight line through the origin. As a result, the log-logistic model was the best-fitting parametric model compared with the exponential and Weibull models, since it had the smallest AIC and BIC values and the largest log-likelihood.
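A generic sketch (not the Hospital Sultan Ismail data) of the model comparison described above: the exponential, Weibull, and log-logistic models are fitted to right-censored times by maximum likelihood, with events contributing log f(t) and censored observations contributing log S(t), and then compared by AIC and BIC. The scipy.stats distributions expon, weibull_min, and fisk (log-logistic) are used as stand-ins, and all data are simulated; with a log-logistic truth, the smallest AIC/BIC should typically point back to that family.

```python
# Generic sketch: fit exponential, Weibull and log-logistic models to
# right-censored survival times by maximum likelihood and compare AIC/BIC.
import numpy as np
from scipy.optimize import minimize
from scipy import stats

def fit_model(dist, x0, time, event):
    """ML fit with right censoring for a scipy.stats distribution (loc fixed at 0)."""
    def nll(params):
        p = np.exp(params)                              # positivity constraint
        frozen = dist(*p[:-1], loc=0, scale=p[-1])      # shape(s), then scale
        return -(frozen.logpdf(time[event == 1]).sum()
                 + frozen.logsf(time[event == 0]).sum())
    res = minimize(nll, x0, method="Nelder-Mead")
    k, n = len(x0), len(time)
    return res.fun, 2 * k + 2 * res.fun, k * np.log(n) + 2 * res.fun

# Hypothetical right-censored data (months); event = 1 means death observed.
rng = np.random.default_rng(7)
T = stats.fisk(c=2.0, scale=30.0).rvs(size=200, random_state=rng)  # log-logistic truth
C = rng.uniform(5.0, 80.0, 200)
time, event = np.minimum(T, C), (T <= C).astype(int)

models = {
    "exponential":  (stats.expon,       [np.log(30.0)]),
    "Weibull":      (stats.weibull_min, [0.0, np.log(30.0)]),
    "log-logistic": (stats.fisk,        [0.0, np.log(30.0)]),
}
for name, (dist, x0) in models.items():
    nll_val, aic, bic = fit_model(dist, x0, time, event)
    print(f"{name:>12}: -logL = {nll_val:7.2f}  AIC = {aic:7.2f}  BIC = {bic:7.2f}")
```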
Art, science and conjecture, from Hippocrates to Plato and Aristotle.
Boudon-Millot, Véronique
2005-01-01
This paper attempts to study the notion of stochazesthai in the Hippocratic Corpus in relation to Hippocratic reflections on the status of the medical art. Considering the passages where the verb stochazesthai is employed, we can see that this word is not yet synonymous with the term "conjecture". The main points of interest are the relations between the Hippocratic writings and the relevant works of Plato and Aristotle. In revising the concept of stochazesthai in this way, it appears that this "conjectural" mode of knowledge was unknown to the Hippocratic writers and that it is really too early in their case to speak of "stochastic medicine".
Multicritical points for spin-glass models on hierarchical lattices.
Ohzeki, Masayuki; Nishimori, Hidetoshi; Berker, A Nihat
2008-06-01
The locations of multicritical points on many hierarchical lattices are numerically investigated by renormalization-group analysis. The results are compared with an analytical conjecture derived using duality, gauge symmetry, and the replica method. We find that the conjecture does not give the exact answer but leads to locations slightly away from the numerically reliable data. We propose an improved conjecture that gives more precise predictions of the multicritical points than the conventional one. This improvement is inspired by a different point of view coming from the renormalization group and yields answers that are highly consistent with many numerical data sets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gurvitis, Leonid
2009-01-01
An upper bound on the ergodic capacity of MIMO channels was introduced recently in [1]. This upper bound amounts to the maximization on the simplex of some multilinear polynomial p(λ1, ..., λn) with non-negative coefficients. In general, such maximization problems are NP-hard. But if, say, the functional log(p) is concave on the simplex and can be evaluated efficiently, then the maximization can also be done efficiently. Such log-concavity was conjectured in [1]. In this paper we give a self-contained proof of the conjecture, based on the theory of H-stable polynomials.
A method for analyzing clustered interval-censored data based on Cox's model.
Kor, Chew-Teng; Cheng, Kuang-Fu; Chen, Yi-Hau
2013-02-28
Methods for analyzing interval-censored data are well established. Unfortunately, these methods are inappropriate for studies with correlated data. In this paper, we focus on developing a method for analyzing clustered interval-censored data. Our method is based on Cox's proportional hazards model with a piecewise-constant baseline hazard function. The correlation structure of the data can be modeled by using Clayton's copula or an independence model with proper adjustment in the covariance estimation. We establish estimating equations for the regression parameters and baseline hazards (and a parameter in the copula) simultaneously. Simulation results confirm that the point estimators follow a multivariate normal distribution, and our proposed variance estimations are reliable. In particular, we found that the approach with the independence model worked well even when the true correlation model was derived from Clayton's copula. We applied our method to a family-based cohort study of pandemic H1N1 influenza in Taiwan during 2009-2010. Using the proposed method, we investigate the impact of vaccination and family contacts on the incidence of pH1N1 influenza. Copyright © 2012 John Wiley & Sons, Ltd.
Johnson, Brent A
2009-10-01
We consider estimation and variable selection in the partial linear model for censored data. The partial linear model for censored data is a direct extension of the accelerated failure time model, the latter of which is a very important alternative to the proportional hazards model. We extend rank-based lasso-type estimators to a model that may contain nonlinear effects. Variable selection in such a partial linear model has direct application to high-dimensional survival analyses that attempt to adjust for clinical predictors. In the microarray setting, previous methods can adjust for other clinical predictors by assuming that clinical and gene expression data enter the model linearly in the same fashion. Here, we select important variables after adjusting for prognostic clinical variables, but the clinical effects are assumed to be nonlinear. Our estimator is based on stratification and can be extended naturally to account for multiple nonlinear effects. We illustrate the utility of our method through simulation studies and application to the Wisconsin prognostic breast cancer data set.
Pierrillas, Philippe B; Tod, Michel; Amiel, Magali; Chenel, Marylore; Henin, Emilie
2016-09-01
The purpose of this study was to explore the impact of censoring due to animal sacrifice on parameter estimates and on tumor volume calculated from two diameters in larger tumors during tumor growth experiments in preclinical studies. The type of measurement error that can be expected was also investigated. Different scenarios were challenged using a stochastic simulation and estimation process. One thousand datasets were simulated under the design of a typical tumor growth study in xenografted mice, and then eight approaches were used for parameter estimation with the simulated datasets. The distribution of estimates and simulation-based diagnostics were computed for comparison. The different approaches were robust regarding the choice of residual error and gave equivalent results. However, when the missing data induced by sacrificing the animals were not taken into account, parameter estimates were biased and led to false inferences in terms of compound potency; the threshold concentration for tumor eradication when ignoring censoring was 581 ng·ml⁻¹, whereas the true value was 240 ng·ml⁻¹.
Automatic Censoring CFAR Detector Based on Ordered Data Difference for Low-Flying Helicopter Safety
Jiang, Wen; Huang, Yulin; Yang, Jianyu
2016-01-01
Being equipped with a millimeter-wave radar allows a low-flying helicopter to sense the surroundings in real time, which significantly increases its safety. However, nonhomogeneous clutter environments, such as a multiple target situation and a clutter edge environment, can dramatically affect the radar signal detection performance. In order to improve the radar signal detection performance in nonhomogeneous clutter environments, this paper proposes a new automatic censored cell averaging CFAR detector. The proposed CFAR detector does not require any prior information about the background environment and uses the hypothesis test of the first-order difference (FOD) result of ordered data to reject the unwanted samples in the reference window. After censoring the unwanted ranked cells, the remaining samples are combined to form an estimate of the background power level, thus getting better radar signal detection performance. The simulation results show that the FOD-CFAR detector provides low loss CFAR performance in a homogeneous environment and also performs robustly in nonhomogeneous environments. Furthermore, the measured results of a low-flying helicopter validate the basic performance of the proposed method. PMID:27399714
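A simplified numpy sketch of the automatic-censoring idea: sort the reference cells, look for an outlier jump in their first-order differences, drop the cells above the jump, and form the noise estimate from what remains. The jump test and all threshold constants here are stand-ins; the paper's FOD hypothesis test and its exact threshold setting are not reproduced.

```python
import numpy as np

def fod_cfar(x, cut_idx, n_ref=16, n_guard=2, diff_factor=3.0, pfa_scale=4.0):
    """Simplified automatic-censoring CFAR sketch (not the paper's exact statistic)."""
    half = n_ref // 2
    lead = x[cut_idx - n_guard - half: cut_idx - n_guard]
    lag = x[cut_idx + n_guard + 1: cut_idx + n_guard + 1 + half]
    ref = np.sort(np.concatenate([lead, lag]))

    diffs = np.diff(ref)                          # first-order differences of ordered data
    jumps = np.where(diffs > diff_factor * np.median(diffs))[0]
    keep = ref[: jumps[0] + 1] if jumps.size else ref   # censor cells above the first jump

    threshold = pfa_scale * keep.mean()           # noise-power estimate from remaining cells
    return x[cut_idx] > threshold, threshold

rng = np.random.default_rng(3)
noise = rng.exponential(1.0, 200)                 # square-law detected homogeneous noise
noise[120] += 40.0                                # target in the cell under test
noise[115] += 25.0                                # interfering target inside the reference window
print(fod_cfar(noise, 120))
```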
Frey, H Christopher; Zhao, Yuchao
2004-11-15
Probabilistic emission inventories were developed for urban air toxic emissions of benzene, formaldehyde, chromium, and arsenic for the example of Houston. Variability and uncertainty in emission factors were quantified for 71-97% of total emissions, depending upon the pollutant and data availability. Parametric distributions for interunit variability were fit using maximum likelihood estimation (MLE), and uncertainty in mean emission factors was estimated using parametric bootstrap simulation. For data sets containing one or more nondetected values, empirical bootstrap simulation was used to randomly sample detection limits for nondetected values and observations for sample values, and parametric distributions for variability were fit using MLE estimators for censored data. The goodness-of-fit for censored data was evaluated by comparison of cumulative distributions of bootstrap confidence intervals and empirical data. The emission inventory 95% uncertainty ranges vary from as small as -25% to +42% for chromium to as large as -75% to +224% for arsenic with correlated surrogates. Uncertainty was dominated by only a few source categories. Recommendations are made for future improvements to the analysis.
Lee, L.; Helsel, D.
2005-01-01
Trace contaminants in water, including metals and organics, often are measured at sufficiently low concentrations to be reported only as values below the instrument detection limit. Interpretation of these "less thans" is complicated when multiple detection limits occur. Statistical methods for multiply censored, or multiple-detection limit, datasets have been developed for medical and industrial statistics, and can be employed to estimate summary statistics or model the distributions of trace-level environmental data. We describe S-language-based software tools that perform robust linear regression on order statistics (ROS). The ROS method has been evaluated as one of the most reliable procedures for developing summary statistics of multiply censored data. It is applicable to any dataset that has 0 to 80% of its values censored. These tools are a part of a software library, or add-on package, for the R environment for statistical computing. This library can be used to generate ROS models and associated summary statistics, plot modeled distributions, and predict exceedance probabilities of water-quality standards. © 2005 Elsevier Ltd. All rights reserved.
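The ROS procedure regresses log-transformed detected values on normal quantiles of their plotting positions and imputes only the nondetects from the fitted line. The sketch below is a simplified single-detection-limit version (the S/R tools above handle multiple limits with more elaborate plotting positions); the data and plotting-position formula are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
conc = rng.lognormal(mean=0.0, sigma=1.0, size=60)   # synthetic concentrations
dl = 0.5                                             # single detection limit
detected = np.sort(conc[conc >= dl])
n_cens, n = np.sum(conc < dl), conc.size

# Plotting positions for the combined (censored + detected) sample
pp = (np.arange(1, n + 1) - 0.375) / (n + 0.25)
pp_cens, pp_det = pp[:n_cens], pp[n_cens:]

# Regress log(detected) on normal quantiles of their plotting positions
slope, intercept, *_ = stats.linregress(stats.norm.ppf(pp_det), np.log(detected))

# Impute nondetects from the fitted line; keep detects as observed ("robust" step)
imputed = np.exp(intercept + slope * stats.norm.ppf(pp_cens))
full = np.concatenate([imputed, detected])
print("ROS mean:", full.mean(), " ROS median:", np.median(full))
```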
Modeling absolute differences in life expectancy with a censored skew-normal regression approach
Clough-Gorr, Kerri; Zwahlen, Marcel
2015-01-01
Parameter estimates from commonly used multivariable parametric survival regression models do not directly quantify differences in years of life expectancy. Gaussian linear regression models give results in terms of absolute mean differences, but are not appropriate for modeling life expectancy, because in many situations time to death has a negatively skewed distribution. A regression approach using a skew-normal distribution would be an alternative to parametric survival models in the modeling of life expectancy, because parameter estimates can be interpreted in terms of survival time differences while allowing for skewness of the distribution. In this paper we show how to use skew-normal regression so that censored and left-truncated observations are accounted for. With this we model differences in life expectancy using data from the Swiss National Cohort Study and from official life expectancy estimates and compare the results with those derived from commonly used survival regression models. We conclude that a censored skew-normal survival regression approach for left-truncated observations can be used to model differences in life expectancy across covariates of interest. PMID:26339544
“Smooth” Semiparametric Regression Analysis for Arbitrarily Censored Time-to-Event Data
Zhang, Min; Davidian, Marie
2008-01-01
A general framework for regression analysis of time-to-event data subject to arbitrary patterns of censoring is proposed. The approach is relevant when the analyst is willing to assume that distributions governing model components that are ordinarily left unspecified in popular semiparametric regression models, such as the baseline hazard function in the proportional hazards model, have densities satisfying mild “smoothness” conditions. Densities are approximated by a truncated series expansion that, for fixed degree of truncation, results in a “parametric” representation, which makes likelihood-based inference coupled with adaptive choice of the degree of truncation, and hence flexibility of the model, computationally and conceptually straightforward with data subject to any pattern of censoring. The formulation allows popular models, such as the proportional hazards, proportional odds, and accelerated failure time models, to be placed in a common framework; provides a principled basis for choosing among them; and renders useful extensions of the models straightforward. The utility and performance of the methods are demonstrated via simulations and by application to data from time-to-event studies. PMID:17970813
Parametric Model Based On Imputations Techniques for Partly Interval Censored Data
NASA Astrophysics Data System (ADS)
Zyoud, Abdallah; Elfaki, F. A. M.; Hrairi, Meftah
2017-12-01
The term ‘survival analysis’ has been used in a broad sense to describe a collection of statistical procedures for data analysis in which the outcome variable of interest is the time until an event occurs; the time to failure of a specific experimental unit might be censored, which can be right, left, interval, or partly interval censored (PIC) data. In this paper, the analysis of this model was conducted with a parametric Cox model via PIC data. Moreover, several imputation techniques were used, namely: midpoint, left & right point, random, mean, and median. Maximum likelihood estimation was used to obtain the estimated survival function. These estimates were then compared with existing models, such as the Turnbull and Cox models, based on clinical trial data (breast cancer data), which showed the validity of the proposed model. Results for the data set indicated that the parametric Cox model was superior in terms of the estimation of survival functions, likelihood ratio tests, and their P-values. Moreover, among the imputation techniques, the midpoint, random, mean, and median showed better results with respect to the estimation of the survival function.
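A small sketch of the single-imputation step listed above (midpoint, right-endpoint, and random imputation) for partly interval-censored times; the data are synthetic and the downstream model is left open, so this is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(5)
# Illustrative partly interval-censored sample: some exact failure times plus
# failures only known to lie in the interval (left, right].
exact = rng.weibull(1.3, 50) * 12.0
left = rng.uniform(0, 20, 50)
right = left + rng.uniform(1, 6, 50)

imputations = {
    "midpoint": (left + right) / 2.0,        # centre of each interval
    "right":    right,                        # right endpoint
    "random":   rng.uniform(left, right),     # uniform draw within each interval
}

for name, imp in imputations.items():
    completed = np.concatenate([exact, imp])  # treat imputed times as exact
    # A standard survival model (Cox, Weibull, ...) can now be fitted to `completed`.
    print(f"{name:8s}: imputed-sample mean = {completed.mean():.2f}")
```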
Deductive Derivation and Turing-Computerization of Semiparametric Efficient Estimation
Frangakis, Constantine E.; Qian, Tianchen; Wu, Zhenke; Diaz, Ivan
2015-01-01
Researchers often seek robust inference for a parameter through semiparametric estimation. Efficient semiparametric estimation currently requires theoretical derivation of the efficient influence function (EIF), which can be a challenging and time-consuming task. If this task can be computerized, it can save dramatic human effort, which can be transferred, for example, to the design of new studies. Although the EIF is, in principle, a derivative, simple numerical differentiation to calculate the EIF by a computer masks the EIF’s functional dependence on the parameter of interest. For this reason, the standard approach to obtaining the EIF relies on the theoretical construction of the space of scores under all possible parametric submodels. This process currently depends on the correctness of conjectures about these spaces, and the correct verification of such conjectures. The correct guessing of such conjectures, though successful in some problems, is a nondeductive process, i.e., is not guaranteed to succeed (e.g., is not computerizable), and the verification of conjectures is generally susceptible to mistakes. We propose a method that can deductively produce semiparametric locally efficient estimators. The proposed method is computerizable, meaning that it does not need either conjecturing, or otherwise theoretically deriving the functional form of the EIF, and is guaranteed to produce the desired estimates even for complex parameters. The method is demonstrated through an example. PMID:26237182
Mathematical Observations: The Genesis of Mathematical Discovery in the Classroom
ERIC Educational Resources Information Center
Beaugris, Louis M.
2013-01-01
In his "Proofs and Refutations," Lakatos identifies the "Primitive Conjecture" as the first stage in the pattern of mathematical discovery. In this article, I am interested in ways of reaching the "Primitive Conjecture" stage in an undergraduate classroom. I adapted Realistic Mathematics Education methods in an…
Matter Gravitates, but Does Gravity Matter?
ERIC Educational Resources Information Center
Groetsch, C. W.
2011-01-01
The interplay of physical intuition, computational evidence, and mathematical rigor in a simple trajectory model is explored. A thought experiment based on the model is used to elicit student conjectures on the influence of a physical parameter; a mathematical model suggests a computational investigation of the conjectures, and rigorous analysis…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giovannetti, Vittorio; Lloyd, Seth; Department of Mechanical Engineering, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, Massachusetts 02139
The Amosov-Holevo-Werner conjecture implies the additivity of the minimum Renyi entropies at the output of a channel. The conjecture is proven true for all Renyi entropies of integer order greater than two in a class of Gaussian bosonic channels where the input signal is randomly displaced or coupled linearly to an external environment.
Reducing CO2 flux by decreasing tillage in Ohio: overcoming conjecture with data
USDA-ARS?s Scientific Manuscript database
Soil could become an important sink for atmospheric carbon dioxide (CO2) as global agricultural greenhouse gas emissions continue to grow, but data to support this conjecture are few. Sequestering soil carbon (C) depends upon many factors including soil type, climate, crop, tillage, nitrogen fertili...
Proof of Nishida's Conjecture on Anharmonic Lattices
NASA Astrophysics Data System (ADS)
Rink, Bob
2006-02-01
We prove Nishida's 1971 conjecture stating that almost all low-energetic motions of the anharmonic Fermi-Pasta-Ulam lattice with fixed endpoints are quasi-periodic. The proof is based on the formal computations of Nishida, the KAM theorem, discrete symmetry considerations and an algebraic trick that considerably simplifies earlier results.
NASA Astrophysics Data System (ADS)
Bakoban, Rana A.
2017-08-01
The coefficient of variation [CV] has several applications in applied statistics. In this paper, we therefore adopt Bayesian and non-Bayesian approaches for the estimation of the CV under type-II censored data from the extension exponential distribution [EED]. Point and interval estimates of the CV are obtained for each of the maximum likelihood and parametric bootstrap techniques. A Bayesian approach using an MCMC method is also presented. A real data set is presented and analyzed, and the results are used to assess the theoretical findings.
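A sketch of a censored maximum-likelihood route to the CV under type-II censoring. Because the paper's extension exponential distribution is not reproduced here, a Weibull model is used as a stand-in, so the distributional form, data, and starting values are all assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(11)
n, r = 60, 40                                    # observe only the 40 smallest of 60 lifetimes
full = np.sort(rng.weibull(2.0, n) * 5.0)
obs = full[:r]                                   # type-II censored sample

def negloglik(theta):
    lam, k = np.exp(theta)                       # scale and shape kept positive
    z = obs / lam
    logf = np.log(k / lam) + (k - 1) * np.log(z) - z ** k
    logS_at_rth = -(obs[-1] / lam) ** k          # n - r units survive past the r-th failure
    return -(logf.sum() + (n - r) * logS_at_rth)

fit = minimize(negloglik, x0=[1.0, 0.0], method="Nelder-Mead")
lam_hat, k_hat = np.exp(fit.x)

def weibull_cv(k):
    # CV of a Weibull(shape k) does not depend on the scale
    g1 = np.exp(gammaln(1 + 1 / k))
    g2 = np.exp(gammaln(1 + 2 / k))
    return np.sqrt(g2 / g1 ** 2 - 1)

print("ML point estimate of CV:", weibull_cv(k_hat))
# A parametric bootstrap (resimulate, refit, recompute the CV) gives interval estimates.
```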
Some constructions of biharmonic maps and Chen’s conjecture on biharmonic hypersurfaces
NASA Astrophysics Data System (ADS)
Ou, Ye-Lin
2012-04-01
We give several construction methods and use them to produce many examples of proper biharmonic maps including biharmonic tori of any dimension in Euclidean spheres (Theorem 2.2, Corollaries 2.3, 2.4 and 2.6), biharmonic maps between spheres (Theorem 2.9) and into spheres (Theorem 2.10) via orthogonal multiplications and eigenmaps. We also study biharmonic graphs of maps, derive the equation for a function whose graph is a biharmonic hypersurface in a Euclidean space, and give an equivalent formulation of Chen's conjecture on biharmonic hypersurfaces by using the biharmonic graph equation (Theorem 4.1) which paves a way for the analytic study of the conjecture.
NASA Technical Reports Server (NTRS)
Payne, M. H.
1973-01-01
The bounds for the normalized associated Legendre functions P_nm were studied to provide a rational basis for the truncation of the geopotential series in spherical harmonics in various orbital analyses. The conjecture is made that the largest maximum of the normalized associated Legendre function lies in an interval specified in terms of the greatest integer (floor) function. A procedure is developed for verifying this conjecture. An on-line algebraic manipulator, IAM, is used to implement the procedure and the verification is carried out for all n equal to or less than 2m, for m = 1 through 6. A rigorous proof of the conjecture is not available.
Dimension improvement in Dhar's refutation of the Eden conjecture
NASA Astrophysics Data System (ADS)
Bertrand, Quentin; Pertinand, Jules
2018-03-01
We consider the Eden model on the d-dimensional hypercubical unoriented lattice, for large d. Initially, every lattice point is healthy, except the origin, which is infected. Then, each infected lattice point contaminates any of its neighbours with rate 1. The Eden model is equivalent to first passage percolation with exponential passage times on edges. The Eden conjecture states that the limit shape of the Eden model is a Euclidean ball. By pushing the computations of Dhar [5] a little further with modern computers and an efficient implementation we obtain improved bounds for the speed of infection. This shows that the Eden conjecture does not hold in dimensions greater than 22 (the lowest previously known dimension was 35).
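For intuition only, a toy simulation of Eden-type growth on the 2-d square lattice (far from the large-d regime of the paper), comparing growth along an axis with growth along a diagonal. The variant used here picks a uniformly random boundary site; the first-passage-percolation-equivalent version instead weights boundary sites by their number of infected neighbours.

```python
import numpy as np

rng = np.random.default_rng(0)
size = 201
infected = np.zeros((size, size), dtype=bool)
centre = size // 2
infected[centre, centre] = True
boundary = {(centre + dx, centre + dy) for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]}

for _ in range(20000):
    site = list(boundary)[rng.integers(len(boundary))]   # uniformly chosen boundary site
    boundary.discard(site)
    x, y = site
    infected[x, y] = True
    for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
        nx, ny = x + dx, y + dy
        if 0 <= nx < size and 0 <= ny < size and not infected[nx, ny]:
            boundary.add((nx, ny))

xs, ys = np.nonzero(infected)
ddx, ddy = xs - centre, ys - centre
r_axis = np.max(np.abs(ddx)[ddy == 0])                                   # growth along a lattice axis
r_diag = np.max(np.abs(ddx)[np.abs(ddx) == np.abs(ddy)]) * np.sqrt(2)    # growth along a diagonal
print("axis radius:", r_axis, " diagonal radius:", round(float(r_diag), 1))
```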
Forming conjectures within a spreadsheet environment
NASA Astrophysics Data System (ADS)
Calder, Nigel; Brown, Tony; Hanley, Una; Darby, Susan
2006-12-01
This paper is concerned with the use of spreadsheets within mathematical investigational tasks. Considering the learning of both children and pre-service teaching students, it examines how mathematical phenomena can be seen as a function of the pedagogical media through which they are encountered. In particular, it shows how pedagogical apparatus influence patterns of social interaction, and how this interaction shapes the mathematical ideas that are engaged with. Notions of conjecture, along with the particular faculty of the spreadsheet setting, are considered with regard to the facilitation of mathematical thinking. Employing an interpretive perspective, a key focus is on how alternative pedagogical media and associated discursive networks influence the way that students form and test informal conjectures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friedland, G.
2018-01-08
This note confirms Goldbach’s Conjecture from 1742. That is, every even integer greater than two is the sum of two prime numbers. An analysis of the nature of multiplication as description length reduction for addition precedes a contraposition argument that it is impossible to subtract any prime from a given even integer without the result ever being prime.
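Independent of the note's argument, the statement itself is easy to spot-check numerically; the sketch below searches for a counterexample among small even integers (none is expected, since the conjecture has been verified far beyond this range).

```python
def primes_upto(n):
    """Sieve of Eratosthenes returning the set of primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(sieve[p * p::p]))
    return {i for i, is_p in enumerate(sieve) if is_p}

def goldbach_counterexample(limit):
    P = primes_upto(limit)
    for even in range(4, limit + 1, 2):
        if not any((even - p) in P for p in P if p <= even // 2):
            return even            # a counterexample, if one existed
    return None

print(goldbach_counterexample(20000))   # None -> every even number up to 20000 is a sum of two primes
```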
Proof of a new colour decomposition for QCD amplitudes
Melia, Tom
2015-12-16
Recently, Johansson and Ochirov conjectured the form of a new colour decomposition for QCD tree-level amplitudes. This note provides a proof of that conjecture. The proof is based on ‘Mario World’ Feynman diagrams, which exhibit the hierarchical Dyck structure previously found to be very useful when dealing with multi-quark amplitudes.
Weak gravity conjecture as a razor criterium for exotic D-brane instantons
NASA Astrophysics Data System (ADS)
Addazi, Andrea
2017-01-01
We discuss implications of weak gravity conjecture (WGC) for exotic D-brane instantons. In particular, WGC leads to indirect stringent bounds on non-perturbative superpotentials generated by exotic instantons with many implications for phenomenology: R-parity violating processes, neutrino mass, μ-problem, neutron-antineutron transitions and collider physics.
Developing a "Conjecturing Atmosphere" in the Classroom through Task Design and Enactment
ERIC Educational Resources Information Center
Hunter, Jodie
2014-01-01
In recent years there has been an increased emphasis on algebraic reasoning in primary school classrooms. This includes introducing students to the mathematical practices of making conjectures, justifying and generalising. Drawing on findings from a classroom-based study, this paper explores one teacher's journey in shifting her task design and…
Three Conjectures about School Effectiveness: An Exploratory Study
ERIC Educational Resources Information Center
Hofman, Roelande H.; Hofman, W. H. Adriaan; Gray, John M.
2015-01-01
In this article, we address three broad conjectures about what really matters with respect to school effectiveness. Our review of previous evidence prompted us to look at three sets of factors connected with classroom teachers, school policies and processes, and matters of governance. All three have featured prominently in the public arena. In…
Fostering Teacher Learning of Conjecturing, Generalising and Justifying through Mathematics Studio
ERIC Educational Resources Information Center
Lesseig, Kristin
2016-01-01
Calls to advance students' ability to engage in mathematical reasoning practices including conjecturing, generalising and justifying (CGJ) place significant new demands on teachers. This case study examines how Mathematics Studio provided opportunities for a team of U.S. middle school teachers to learn about these practices and ways to promote…
Informed Conjecturing of Solutions for Differential Equations in a Modeling Context
ERIC Educational Resources Information Center
Winkel, Brian
2015-01-01
We examine two differential equations. (i) first-order exponential growth or decay; and (ii) second order, linear, constant coefficient differential equations, and show the advantage of learning differential equations in a modeling context for informed conjectures of their solution. We follow with a discussion of the complete analysis afforded by…
Learning Mathematics Does Not (Necessarily) Mean Constructing the Right Knowledge
ERIC Educational Resources Information Center
Dawson, Sandy
2015-01-01
In this article, which was first published in 1991, the late Sandy Dawson, discusses aspects of a Lakatosian approach to mathematics teaching. The ideas are illustrated with examples from three teaching situations: making conjectures about the next number in a sequence; making conjectures about the internal angles in a triangle using Logo; and…
Calabi's conjecture and some new results in algebraic geometry
Yau, Shing-Tung
1977-01-01
We announce a proof of Calabi's conjectures on the Ricci curvature of a compact Kähler manifold and then apply it to prove some new results in algebraic geometry and differential geometry. For example, we prove that the only Kähler structure on a complex projective space is the standard one. PMID:16592394
Constructing exact symmetric informationally complete measurements from numerical solutions
NASA Astrophysics Data System (ADS)
Appleby, Marcus; Chien, Tuan-Yow; Flammia, Steven; Waldron, Shayne
2018-04-01
Recently, several intriguing conjectures have been proposed connecting symmetric informationally complete quantum measurements (SIC POVMs, or SICs) and algebraic number theory. These conjectures relate the SICs to their minimal defining algebraic number field. Testing or sharpening these conjectures requires that the SICs are expressed exactly, rather than as numerical approximations. While many exact solutions of SICs have been constructed previously using Gröbner bases, this method has probably been taken as far as is possible with current computer technology (except in special cases where there are additional symmetries). Here, we describe a method for converting high-precision numerical solutions into exact ones using an integer relation algorithm in conjunction with the Galois symmetries of an SIC. Using this method, we have calculated 69 new exact solutions, including nine new dimensions, where previously only numerical solutions were known—which more than triples the number of known exact solutions. In some cases, the solutions require number fields with degrees as high as 12 288. We use these solutions to confirm that they obey the number-theoretic conjectures, and address two questions suggested by the previous work.
On global solutions of the random Hamilton-Jacobi equations and the KPZ problem
NASA Astrophysics Data System (ADS)
Bakhtin, Yuri; Khanin, Konstantin
2018-04-01
In this paper, we discuss possible qualitative approaches to the problem of KPZ universality. Throughout the paper, our point of view is based on the geometrical and dynamical properties of minimisers and shocks forming interlacing tree-like structures. We believe that the KPZ universality can be explained in terms of statistics of these structures evolving in time. The paper is focussed on the setting of the random Hamilton-Jacobi equations. We formulate several conjectures concerning global solutions and discuss how their properties are connected to the KPZ scalings in dimension 1 + 1. In the case of general viscous Hamilton-Jacobi equations with non-quadratic Hamiltonians, we define generalised directed polymers. We expect that their behaviour is similar to the behaviour of classical directed polymers, and present arguments in favour of this conjecture. We also define a new renormalisation transformation defined in purely geometrical terms and discuss conjectural properties of the corresponding fixed points. Most of our conjectures are widely open, and supported by only partial rigorous results for particular models.
A proof of Wright's conjecture
NASA Astrophysics Data System (ADS)
van den Berg, Jan Bouwe; Jaquette, Jonathan
2018-06-01
Wright's conjecture states that the origin is the global attractor for the delay differential equation y′(t) = -α y(t - 1)[1 + y(t)] for all α ∈ (0, π/2] when y(t) > -1. This has been proven to be true for a subset of parameter values α. We extend the result to the full parameter range α ∈ (0, π/2], and thus prove Wright's conjecture to be true. Our approach relies on a careful investigation of the neighborhood of the Hopf bifurcation occurring at α = π/2. This analysis fills the gap left by complementary work on Wright's conjecture, which covers parameter values further away from the bifurcation point. Furthermore, we show that the branch of (slowly oscillating) periodic orbits originating from this Hopf bifurcation does not have any subsequent bifurcations (and in particular no folds) for α ∈ (π/2, π/2 + 6.830 × 10⁻³]. When combined with other results, this proves that the branch of slowly oscillating solutions that originates from the Hopf bifurcation at α = π/2 is globally parametrized by α > π/2.
Wang, Xiaojing; Chen, Ming-Hui; Yan, Jun
2013-07-01
Cox models with time-varying coefficients offer great flexibility in capturing the temporal dynamics of covariate effects on event times, which could be hidden from a Cox proportional hazards model. Methodology development for varying coefficient Cox models, however, has been largely limited to right-censored data; only limited work on interval-censored data has been done. In most existing methods for varying coefficient models, analysts need to specify which covariate coefficients are time-varying and which are not at the time of fitting. We propose a dynamic Cox regression model for interval-censored data in a Bayesian framework, where the coefficient curves are piecewise constant but the number of pieces and the jump points are covariate specific and estimated from the data. The model automatically determines the extent to which the temporal dynamics is needed for each covariate, resulting in smoother and more stable curve estimates. The posterior computation is carried out via an efficient reversible jump Markov chain Monte Carlo algorithm. Inference for each coefficient is based on an average of models with different numbers of pieces and jump points. A simulation study with three covariates, each with a coefficient of a different degree of temporal dynamics, confirmed that the dynamic model is preferred to the existing time-varying model in terms of model comparison criteria through the conditional predictive ordinate. When applied to dental health data of children aged between 7 and 12 years, the dynamic model reveals that the relative risk of emergence of permanent tooth 24 between children with and without an infected primary predecessor is highest at around age 7.5, and that it gradually reduces to one after age 11. These findings were not seen in the existing studies with Cox proportional hazards models.
Statistical inference methods for two crossing survival curves: a comparison of methods.
Li, Huimin; Han, Dong; Hou, Yawen; Chen, Huilin; Chen, Zheng
2015-01-01
A common problem that is encountered in medical applications is the overall homogeneity of survival distributions when two survival curves cross each other. A survey demonstrated that under this condition, which was an obvious violation of the assumption of proportional hazard rates, the log-rank test was still used in 70% of studies. Several statistical methods have been proposed to solve this problem. However, in many applications, it is difficult to specify the types of survival differences and choose an appropriate method prior to analysis. Thus, we conducted an extensive series of Monte Carlo simulations to investigate the power and type I error rate of these procedures under various patterns of crossing survival curves with different censoring rates and distribution parameters. Our objective was to evaluate the strengths and weaknesses of tests in different situations and for various censoring rates and to recommend an appropriate test that will not fail for a wide range of applications. Simulation studies demonstrated that adaptive Neyman's smooth tests and the two-stage procedure offer higher power and greater stability than other methods when the survival distributions cross at early, middle or late times. Even for proportional hazards, both methods maintain acceptable power compared with the log-rank test. In terms of the type I error rate, Renyi and Cramér-von Mises tests are relatively conservative, whereas the statistics of the Lin-Xu test exhibit apparent inflation as the censoring rate increases. Other tests produce results close to the nominal 0.05 level. In conclusion, adaptive Neyman's smooth tests and the two-stage procedure are found to be the most stable and feasible approaches for a variety of situations and censoring rates. Therefore, they are applicable to a wider spectrum of alternatives compared with other tests.
Shoari, Niloofar; Dubé, Jean-Sébastien; Chenouri, Shoja'eddin
2015-11-01
In environmental studies, concentration measurements frequently fall below detection limits of measuring instruments, resulting in left-censored data. Some studies employ parametric methods such as the maximum likelihood estimator (MLE), robust regression on order statistic (rROS), and gamma regression on order statistic (GROS), while others suggest a non-parametric approach, the Kaplan-Meier method (KM). Using examples of real data from a soil characterization study in Montreal, we highlight the need for additional investigations that aim at unifying the existing literature. A number of studies have examined this issue; however, those considering data skewness and model misspecification are rare. These aspects are investigated in this paper through simulations. Among other findings, results show that for low skewed data, the performance of different statistical methods is comparable, regardless of the censoring percentage and sample size. For highly skewed data, the performance of the MLE method under lognormal and Weibull distributions is questionable; particularly, when the sample size is small or censoring percentage is high. In such conditions, MLE under gamma distribution, rROS, GROS, and KM are less sensitive to skewness. Related to model misspecification, MLE based on lognormal and Weibull distributions provides poor estimates when the true distribution of data is misspecified. However, the methods of rROS, GROS, and MLE under gamma distribution are generally robust to model misspecifications regardless of skewness, sample size, and censoring percentage. Since the characteristics of environmental data (e.g., type of distribution and skewness) are unknown a priori, we suggest using MLE based on gamma distribution, rROS and GROS. Copyright © 2015 Elsevier Ltd. All rights reserved.
Grantz, Erin; Haggard, Brian; Scott, J Thad
2018-06-12
We calculated four median datasets (chlorophyll a, Chl a; total phosphorus, TP; and transparency) using multiple approaches to handling censored observations, including substituting fractions of the quantification limit (QL; dataset 1 = 1QL, dataset 2 = 0.5QL) and statistical methods for censored datasets (datasets 3-4) for approximately 100 Texas, USA reservoirs. Trend analyses of differences between dataset 1 and 3 medians indicated percent difference increased linearly above thresholds in percent censored data (%Cen). This relationship was extrapolated to estimate medians for site-parameter combinations with %Cen > 80%, which were combined with dataset 3 as dataset 4. Changepoint analysis of Chl a- and transparency-TP relationships indicated threshold differences up to 50% between datasets. Recursive analysis identified secondary thresholds in dataset 4. Threshold differences show that information introduced via substitution or missing due to limitations of statistical methods biased values, underestimated error, and inflated the strength of TP thresholds identified in datasets 1-3. Analysis of covariance identified differences in linear regression models relating transparency-TP between datasets 1, 2, and the more statistically robust datasets 3-4. Study findings identify high-risk scenarios for biased analytical outcomes when using substitution. These include high probability of median overestimation when %Cen > 50-60% for a single QL, or when %Cen is as low 16% for multiple QL's. Changepoint analysis was uniquely vulnerable to substitution effects when using medians from sites with %Cen > 50%. Linear regression analysis was less sensitive to substitution and missing data effects, but differences in model parameters for transparency cannot be discounted and could be magnified by log-transformation of the variables.
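To make the substitution pitfall concrete, here is a toy simulation (not the Texas reservoir data) with a single quantification limit set above the true median, so that roughly 60% of values are censored; substituting 1×QL or 0.5×QL shifts the sample median, while a censored lognormal maximum-likelihood fit recovers it. The distribution, QL, and sample size are assumptions.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(4)
true_mu, true_sigma = np.log(10.0), 0.8
conc = rng.lognormal(true_mu, true_sigma, 200)
ql = 12.0
detected = conc >= ql                                   # roughly 60% of values censored here

def negloglik(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    ll_det = stats.norm.logpdf(np.log(conc[detected]), mu, sigma)
    ll_cen = stats.norm.logcdf(np.log(ql), mu, sigma)   # P(value < QL) for each nondetect
    return -(ll_det.sum() + (~detected).sum() * ll_cen)

fit = minimize(negloglik, x0=[np.log(ql), 0.0], method="Nelder-Mead")
mle_median = np.exp(fit.x[0])                            # lognormal median = exp(mu)

sub_1ql = np.median(np.where(detected, conc, ql))        # substitute 1 * QL
sub_05ql = np.median(np.where(detected, conc, 0.5 * ql)) # substitute 0.5 * QL
print(f"true median {np.exp(true_mu):.1f} | censored MLE {mle_median:.1f} | "
      f"1*QL sub {sub_1ql:.1f} | 0.5*QL sub {sub_05ql:.1f}")
```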
Landy, Rebecca; Cheung, Li C; Schiffman, Mark; Gage, Julia C; Hyun, Noorie; Wentzensen, Nicolas; Kinney, Walter K; Castle, Philip E; Fetterman, Barbara; Poitras, Nancy E; Lorey, Thomas; Sasieni, Peter D; Katki, Hormuzd A
2018-06-01
Electronic health records (EHR) are increasingly used by epidemiologists studying disease following surveillance testing to provide evidence for screening intervals and referral guidelines. Although cost-effective, undiagnosed prevalent disease and interval censoring (in which asymptomatic disease is only observed at the time of testing) raise substantial analytic issues when estimating risk that cannot be addressed using Kaplan-Meier methods. Based on our experience analysing EHR from cervical cancer screening, we previously proposed the logistic-Weibull model to address these issues. Here we demonstrate how the choice of statistical method can impact risk estimates. We use observed data on 41,067 women in the cervical cancer screening program at Kaiser Permanente Northern California, 2003-2013, as well as simulations, to evaluate the ability of different methods (Kaplan-Meier, Turnbull, Weibull and logistic-Weibull) to accurately estimate risk within a screening program. Cumulative risk estimates from the statistical methods varied considerably, with the largest differences occurring for prevalent disease risk when baseline disease ascertainment was random but incomplete. Kaplan-Meier underestimated risk at earlier times and overestimated risk at later times in the presence of interval censoring or undiagnosed prevalent disease. Turnbull performed well, though it was inefficient and not smooth. The logistic-Weibull model performed well, except when event times did not follow a Weibull distribution. We have demonstrated that methods for right-censored data, such as Kaplan-Meier, result in biased estimates of disease risks when applied to interval-censored data, such as screening programs using EHR data. The logistic-Weibull model is attractive, but the model fit must be checked against Turnbull non-parametric risk estimates. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Modeling stream fish distributions using interval-censored detection times.
Ferreira, Mário; Filipe, Ana Filipa; Bardos, David C; Magalhães, Maria Filomena; Beja, Pedro
2016-08-01
Controlling for imperfect detection is important for developing species distribution models (SDMs). Occupancy-detection models based on the time needed to detect a species can be used to address this problem, but this is hindered when times to detection are not known precisely. Here, we extend the time-to-detection model to deal with detections recorded in time intervals and illustrate the method using a case study on stream fish distribution modeling. We collected electrofishing samples of six fish species across a Mediterranean watershed in Northeast Portugal. Based on a Bayesian hierarchical framework, we modeled the probability of water presence in stream channels, and the probability of species occupancy conditional on water presence, in relation to environmental and spatial variables. We also modeled time-to-first detection conditional on occupancy in relation to local factors, using modified interval-censored exponential survival models. Posterior distributions of occupancy probabilities derived from the models were used to produce species distribution maps. Simulations indicated that the modified time-to-detection model provided unbiased parameter estimates despite interval-censoring. There was a tendency for spatial variation in detection rates to be primarily influenced by depth and, to a lesser extent, stream width. Species occupancies were consistently affected by stream order, elevation, and annual precipitation. Bayesian P-values and AUCs indicated that all models had adequate fit and high discrimination ability, respectively. Mapping of predicted occupancy probabilities showed widespread distribution by most species, but uncertainty was generally higher in tributaries and upper reaches. The interval-censored time-to-detection model provides a practical solution to model occupancy-detection when detections are recorded in time intervals. This modeling framework is useful for developing SDMs while controlling for variation in detection rates, as it uses simple data that can be readily collected by field ecologists.
Sammon, Cormac J; Petersen, Irene
2016-04-01
Studies using primary care databases often censor follow-up at the date data are last collected from clinical computer systems (last collection date (LCD)). We explored whether this results in the selective exclusion of events entered in the electronic health records after their date of occurrence, that is, backdated events. We used data from The Health Improvement Network (THIN). Using two versions of the database, we identified events that were entered into a later (THIN14) but not an earlier version of the database (THIN13) and investigated how the number of entries changed as a function of time since LCD. Times between events and the dates they were recorded were plotted as a function of time since the LCD in an effort to determine appropriate points at which to censor follow-up. There were 356 million eligible events in THIN14 and 355 million eligible events in THIN13. When comparing the two data sets, the proportion of missing events in THIN13 was highest in the month prior to the LCD (9.6%), decreasing to 5.2% at 6 months and 3.4% at 12 months. The proportion of missing events was largest for events typically diagnosed in secondary care such as neoplasms (28% in the month prior to LCD) and negligible for events typically diagnosed in primary care such as respiratory events (2% in the month prior to LCD). Studies using primary care databases, particularly those investigating events typically diagnosed outside primary care, should censor follow-up prior to the LCD to avoid underestimation of event rates. Copyright © 2016 John Wiley & Sons, Ltd.
Accounting for Selection Bias in Studies of Acute Cardiac Events.
Banack, Hailey R; Harper, Sam; Kaufman, Jay S
2018-06-01
In cardiovascular research, pre-hospital mortality represents an important potential source of selection bias. Inverse probability of censoring weights are a method to account for this source of bias. The objective of this article is to examine and correct for the influence of selection bias due to pre-hospital mortality on the relationship between cardiovascular risk factors and all-cause mortality after an acute cardiac event. The relationship between the number of cardiovascular disease (CVD) risk factors (0-5; smoking status, diabetes, hypertension, dyslipidemia, and obesity) and all-cause mortality was examined using data from the Atherosclerosis Risk in Communities (ARIC) study. To illustrate the magnitude of selection bias, estimates from an unweighted generalized linear model with a log link and binomial distribution were compared with estimates from an inverse probability of censoring weighted model. In unweighted multivariable analyses the estimated risk ratio for mortality ranged from 1.09 (95% confidence interval [CI], 0.98-1.21) for 1 CVD risk factor to 1.95 (95% CI, 1.41-2.68) for 5 CVD risk factors. In the inverse probability of censoring weighted analyses, the risk ratios ranged from 1.14 (95% CI, 0.94-1.39) to 4.23 (95% CI, 2.69-6.66). Estimates from the inverse probability of censoring weighted model were substantially greater than unweighted, adjusted estimates across all risk factor categories. This shows the magnitude of selection bias due to pre-hospital mortality and its effect on estimates of the effect of CVD risk factors on mortality. Moreover, the results highlight the utility of using this method to address a common form of bias in cardiovascular research. Copyright © 2018 Canadian Cardiovascular Society. Published by Elsevier Inc. All rights reserved.
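A conceptual toy (not the ARIC analysis) of the weighting idea: conditioning on surviving to hospital distorts the risk-factor/mortality association, and weighting each observed subject by the inverse of their probability of being observed restores it. All coefficients, the censoring model (known by construction here, estimated in practice), and the exposure cut-off are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
x = rng.integers(0, 6, n)                       # number of CVD risk factors (0-5)
u = rng.normal(size=n)                          # unmeasured severity

p_prehosp = 1 / (1 + np.exp(-(-3 + 0.4 * x + 1.0 * u)))    # P(pre-hospital death)
in_cohort = rng.uniform(size=n) > p_prehosp                 # observed = survived to hospital

p_death = 1 / (1 + np.exp(-(-2 + 0.3 * x + 1.0 * u)))       # later all-cause mortality
died = rng.uniform(size=n) < p_death

def risk_ratio(weights):
    w = weights * in_cohort                     # restrict the analysis to the observed cohort
    hi, lo = x >= 3, x < 3
    return (np.sum(w * died * hi) / np.sum(w * hi)) / (np.sum(w * died * lo) / np.sum(w * lo))

print("naive RR (cohort only):", round(risk_ratio(np.ones(n)), 2))
print("IPCW RR:               ", round(risk_ratio(1 / (1 - p_prehosp)), 2))   # true weights, known here
print("full-population RR:    ", round(died[x >= 3].mean() / died[x < 3].mean(), 2))
```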
Local Conjecturing Process in the Solving of Pattern Generalization Problem
ERIC Educational Resources Information Center
Sutarto; Nusantara, Toto; Subanji; Sisworo
2016-01-01
This aim of this study is to describe the process of local conjecturing in generalizing patterns based on Action, Process, Object, Schema (APOS) theory. The subjects were 16 grade 8 students from a junior high school. Data collection used Pattern Generalization Problem (PGP) and interviews. In the first stage, students completed PGP; in the second…
Random function theory revisited - Exact solutions versus the first order smoothing conjecture
NASA Technical Reports Server (NTRS)
Lerche, I.; Parker, E. N.
1975-01-01
We remark again that the mathematical conjecture known as first order smoothing or the quasi-linear approximation does not give the correct dependence on correlation length (time) in many cases, although it gives the correct limit as the correlation length (time) goes to zero. In this sense, then, the method is unreliable.
Understanding the Nature of Science and Scientific Progress: A Theory-Building Approach
ERIC Educational Resources Information Center
Chuy, Maria; Scardamalia, Marlene; Bereiter, Carl; Prinsen, Fleur; Resendes, Monica; Messina, Richard; Hunsburger, Winifred; Teplovs, Chris; Chow, Angela
2010-01-01
In 1993 Carey and Smith conjectured that the most promising way to boost students' understanding of the nature of science is a "theory-building approach to teaching about inquiry." The research reported here tested this conjecture by comparing results from two Grade 4 classrooms that differed in their emphasis on and technological…
On the Nature of Mathematical Thought and Inquiry: A Prelusive Suggestion
ERIC Educational Resources Information Center
McLoughlin, M. Padraig M. M.
2004-01-01
The author of this paper submits that humans have a natural inquisitiveness; hence, mathematicians (as well as other humans) must be active in learning. Thus, we must commit to conjecture and prove or disprove said conjecture. Ergo, the purpose of the paper is to submit the thesis that learning requires doing; only through inquiry is learning…
Conjecturing, Generalizing and Justifying: Building Theory around Teacher Knowledge of Proving
ERIC Educational Resources Information Center
Lesseig, Kristin
2016-01-01
The purpose of this study was to detail teachers' proving activity and contribute to a framework of Mathematical Knowledge for Teaching Proof (MKT for Proof). While working to justify claims about sums of consecutive numbers, teachers searched for key ideas and productively used examples to make, test and refine conjectures. Analysis of teachers'…
NASA Astrophysics Data System (ADS)
Jensen, Kristan
2018-01-01
We conjecture a new sequence of dualities between Chern-Simons gauge theories simultaneously coupled to fundamental bosons and fermions. These dualities reduce to those proposed by Aharony when the number of bosons or fermions is zero. Our conjecture passes a number of consistency checks. These include the matching of global symmetries and consistency with level/rank duality in massive phases.
Yang-Mills theory and the ABC conjecture
NASA Astrophysics Data System (ADS)
He, Yang-Hui; Hu, Zhi; Probst, Malte; Read, James
2018-05-01
We establish a precise correspondence between the ABC Conjecture and 𝒩 = 4 super-Yang-Mills theory. This is achieved by combining three ingredients: (i) Elkies’ method of mapping ABC-triples to elliptic curves in his demonstration that ABC implies Mordell/Faltings; (ii) an explicit pair of elliptic curve and associated Belyi map given by Khadjavi-Scharaschkin; and (iii) the fact that the bipartite brane-tiling/dimer model for a gauge theory with toric moduli space is a particular dessin d’enfant in the sense of Grothendieck. We explore this correspondence for the highest quality ABC-triples as well as large samples of random triples. The conjecture itself is mapped to a statement about the fundamental domain of the toroidal compactification of the string realization of 𝒩 = 4 SYM.
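As a worked numerical aside, the quality of an ABC triple a + b = c with gcd(a, b) = 1 is q = log c / log rad(abc), where rad is the product of the distinct primes dividing abc; the conjecture asserts that only finitely many triples exceed any quality 1 + ε. The sketch below computes the quality of Reyssat's triple, the highest-quality example currently known.

```python
from math import gcd, log
from sympy import primefactors

def radical(n):
    """Product of the distinct prime factors of n."""
    r = 1
    for p in primefactors(n):
        r *= p
    return r

def abc_quality(a, b):
    """Quality q = log c / log rad(abc) for an ABC triple a + b = c with gcd(a, b) = 1."""
    assert gcd(a, b) == 1
    c = a + b
    return log(c) / log(radical(a * b * c))

# Reyssat's triple: 2 + 3^10 * 109 = 23^5, quality ~ 1.6299
print(abc_quality(2, 3**10 * 109))
```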
Markov chains and semi-Markov models in time-to-event analysis.
Abner, Erin L; Charnigo, Richard J; Kryscio, Richard J
2013-10-25
A variety of statistical methods are available to investigators for analysis of time-to-event data, often referred to as survival analysis. Kaplan-Meier estimation and Cox proportional hazards regression are commonly employed tools but are not appropriate for all studies, particularly in the presence of competing risks and when multiple or recurrent outcomes are of interest. Markov chain models can accommodate censored data, competing risks (informative censoring), multiple outcomes, recurrent outcomes, frailty, and non-constant survival probabilities. Markov chain models, though often overlooked by investigators in time-to-event analysis, have long been used in clinical studies and have widespread application in other fields.
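A minimal illustration of the kind of discrete-time Markov chain the authors describe, here a three-state illness-death model with an absorbing death state; the transition probabilities and number of cycles are invented for illustration.

```python
import numpy as np

P = np.array([[0.90, 0.07, 0.03],    # healthy -> healthy / ill / dead
              [0.00, 0.85, 0.15],    # ill     -> ill / dead (no recovery in this toy)
              [0.00, 0.00, 1.00]])   # dead is absorbing

state = np.array([1.0, 0.0, 0.0])    # everyone starts healthy
for cycle in range(1, 11):
    state = state @ P                # propagate state-occupancy probabilities one cycle
    print(f"cycle {cycle:2d}: P(healthy)={state[0]:.3f}  P(ill)={state[1]:.3f}  P(dead)={state[2]:.3f}")

print("10-cycle survival (not dead):", 1 - state[2])
```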
Driscoll, Jonathan D.; Shih, Andy Y.; Iyengar, Satish; Field, Jeffrey J.; White, G. Allen; Squier, Jeffrey A.; Cauwenberghs, Gert
2011-01-01
We present a high-speed photon counter for use with two-photon microscopy. Counting pulses of photocurrent, as opposed to analog integration, maximizes the signal-to-noise ratio so long as the uncertainty in the count does not exceed the gain-noise of the photodetector. Our system extends this improvement through an estimate of the count that corrects for the censored period after detection of an emission event. The same system can be rapidly reconfigured in software for fluorescence lifetime imaging, which we illustrate by distinguishing between two spectrally similar fluorophores in an in vivo model of microstroke. PMID:21471395
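The correction for the censored period after each detected pulse is, in the simplest non-paralyzable dead-time model, the standard formula sketched below; the paper's estimator and the numerical values here (dwell time, dead time) should be treated as assumptions rather than its exact implementation.

```python
def deadtime_corrected_rate(counts, dwell_time, dead_time):
    """Non-paralyzable dead-time correction: each detected pulse censors a window of
    length `dead_time` during which further photons cannot be counted, so the
    measured rate underestimates the true rate."""
    measured_rate = counts / dwell_time
    return measured_rate / (1.0 - measured_rate * dead_time)

# Example: 8 counts in a 10 microsecond pixel dwell, with a 10 ns censored period per count
print(deadtime_corrected_rate(counts=8, dwell_time=10e-6, dead_time=10e-9))
```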
Multiple imputation for cure rate quantile regression with censored data.
Wu, Yuanshan; Yin, Guosheng
2017-03-01
The main challenge in the context of cure rate analysis is that one never knows whether censored subjects are cured or uncured, or whether they are susceptible or insusceptible to the event of interest. Considering the susceptible indicator as missing data, we propose a multiple imputation approach to cure rate quantile regression for censored data with a survival fraction. We develop an iterative algorithm to estimate the conditionally uncured probability for each subject. By utilizing this estimated probability and Bernoulli sample imputation, we can classify each subject as cured or uncured, and then employ the locally weighted method to estimate the quantile regression coefficients with only the uncured subjects. Repeating the imputation procedure multiple times and taking an average over the resultant estimators, we obtain consistent estimators for the quantile regression coefficients. Our approach relaxes the usual global linearity assumption, so that we can apply quantile regression to any particular quantile of interest. We establish asymptotic properties for the proposed estimators, including both consistency and asymptotic normality. We conduct simulation studies to assess the finite-sample performance of the proposed multiple imputation method and apply it to a lung cancer study as an illustration. © 2016, The International Biometric Society.
Multivariate-$t$ nonlinear mixed models with application to censored multi-outcome AIDS studies.
Lin, Tsung-I; Wang, Wan-Lun
2017-10-01
In multivariate longitudinal HIV/AIDS studies, multi-outcome repeated measures on each patient over time may contain outliers, and the viral loads are often subject to an upper or lower limit of detection depending on the quantification assays. In this article, we consider an extension of the multivariate nonlinear mixed-effects model by adopting a joint multivariate-$t$ distribution for random effects and within-subject errors and taking the censoring information of multiple responses into account. The proposed model is called the multivariate-$t$ nonlinear mixed-effects model with censored responses (MtNLMMC), allowing for analyzing multi-outcome longitudinal data exhibiting nonlinear growth patterns with censorship and fat-tailed behavior. Utilizing the Taylor-series linearization method, a pseudo-data version of the expectation conditional maximization either (ECME) algorithm is developed for iteratively carrying out maximum likelihood estimation. We illustrate our techniques with two data examples from HIV/AIDS studies. Experimental results signify that the MtNLMMC performs favorably compared to its Gaussian analogue and some existing approaches. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Maximum Likelihood Estimations and EM Algorithms with Length-biased Data
Qin, Jing; Ning, Jing; Liu, Hao; Shen, Yu
2012-01-01
Length-biased sampling has been well recognized in economics, industrial reliability, etiology applications, epidemiological, genetic and cancer screening studies. Length-biased right-censored data have a unique data structure different from traditional survival data. The nonparametric and semiparametric estimations and inference methods for traditional survival data are not directly applicable for length-biased right-censored data. We propose new expectation-maximization algorithms for estimations based on full likelihoods involving infinite dimensional parameters under three settings for length-biased data: estimating nonparametric distribution function, estimating nonparametric hazard function under an increasing failure rate constraint, and jointly estimating baseline hazards function and the covariate coefficients under the Cox proportional hazards model. Extensive empirical simulation studies show that the maximum likelihood estimators perform well with moderate sample sizes and lead to more efficient estimators compared to the estimating equation approaches. The proposed estimates are also more robust to various right-censoring mechanisms. We prove the strong consistency properties of the estimators, and establish the asymptotic normality of the semi-parametric maximum likelihood estimators under the Cox model using modern empirical processes theory. We apply the proposed methods to a prevalent cohort medical study. Supplemental materials are available online. PMID:22323840
Thermal Cycling Life Prediction of Sn-3.0Ag-0.5Cu Solder Joint Using Type-I Censored Data
Mi, Jinhua; Yang, Yuan-Jian; Huang, Hong-Zhong
2014-01-01
Because solder joint interconnections are the weaknesses of microelectronic packaging, their reliability has great influence on the reliability of the entire packaging structure. Based on an accelerated life test the reliability assessment and life prediction of lead-free solder joints using Weibull distribution are investigated. The type-I interval censored lifetime data were collected from a thermal cycling test, which was implemented on microelectronic packaging with lead-free ball grid array (BGA) and fine-pitch ball grid array (FBGA) interconnection structures. The number of cycles to failure of lead-free solder joints is predicted by using a modified Engelmaier fatigue life model and a type-I censored data processing method. Then, the Pan model is employed to calculate the acceleration factor of this test. A comparison of life predictions between the proposed method and the ones calculated directly by Matlab and Minitab is conducted to demonstrate the practicability and effectiveness of the proposed method. At last, failure analysis and microstructure evolution of lead-free solders are carried out to provide useful guidance for the regular maintenance, replacement of substructure, and subsequent processing of electronic products. PMID:25121138
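A type-I censored Weibull fit of the kind used here can be sketched as a direct maximum-likelihood calculation: units that failed contribute the Weibull density, while units still unfailed at the end of the test contribute the survival function. The cycle counts below are invented, not the BGA/FBGA test data, and the modified Engelmaier and Pan models are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

# Generic Weibull MLE for type-I right-censored life data (illustrative numbers).
# 'observed' marks actual failures; censored units survived to the end of the test.
cycles   = np.array([1800., 2100., 2500., 2900., 3300., 3500., 3500., 3500.])
observed = np.array([True,  True,  True,  True,  True,  False, False, False])

def neg_log_lik(params):
    log_shape, log_scale = params            # work on the log scale to keep k, lam > 0
    k, lam = np.exp(log_shape), np.exp(log_scale)
    z = cycles / lam
    log_pdf = np.log(k / lam) + (k - 1) * np.log(z) - z**k   # failures contribute f(t)
    log_surv = -z**k                                         # censored units contribute S(t)
    return -(log_pdf[observed].sum() + log_surv[~observed].sum())

fit = minimize(neg_log_lik, x0=[0.0, np.log(cycles.mean())], method="Nelder-Mead")
shape, scale = np.exp(fit.x)
print(f"Weibull shape ~ {shape:.2f}, characteristic life ~ {scale:.0f} cycles")
```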
Stawarczyk, Bogna; Ozcan, Mutlu; Hämmerle, Christoph H F; Roos, Malgorzata
2012-05-01
The aim of this study was to compare the fracture load of veneered anterior zirconia crowns using normal and Weibull distributions of complete and censored data. Standardized zirconia frameworks for maxillary canines were milled using a CAD/CAM system and randomly divided into 3 groups (N=90, n=30 per group). They were veneered with three veneering ceramics, namely GC Initial ZR, Vita VM9, IPS e.max Ceram using layering technique. The crowns were cemented with glass ionomer cement on metal abutments. The specimens were then loaded to fracture (1 mm/min) in a Universal Testing Machine. The data were analyzed using the classical method (normal data distribution (μ, σ); Levene test and one-way ANOVA) and according to the Weibull statistics (s, m). In addition, fracture load results were analyzed depending on complete and censored failure types (only chipping vs. total fracture together with chipping). When computed with complete data, significantly higher mean fracture loads (N) were observed for GC Initial ZR (μ=978, σ=157; s=1043, m=7.2) and VITA VM9 (μ=1074, σ=179; s=1139; m=7.8) than that of IPS e.max Ceram (μ=798, σ=174; s=859, m=5.8) (p<0.05) by classical and Weibull statistics, respectively. When the data were censored for only total fracture, IPS e.max Ceram presented the lowest fracture load for chipping with both classical distribution (μ=790, σ=160) and Weibull statistics (s=836, m=6.5). When total fracture with chipping (classical distribution) was considered as failure, IPS e.max Ceram did not show a significantly different fracture load for total fracture (μ=1054, σ=110) compared to the other groups (GC Initial ZR: μ=1039, σ=152, VITA VM9: μ=1170, σ=166). According to the Weibull distributed data, VITA VM9 showed significantly higher fracture load (s=1228, m=9.4) than those of the other groups. Both classical distribution and Weibull statistics for complete data yielded similar outcomes. Censored data analysis of all ceramic systems based on failure types is essential and brings additional information regarding the susceptibility to chipping or total fracture. Copyright © 2011 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
Statistics of cosmic density profiles from perturbation theory
NASA Astrophysics Data System (ADS)
Bernardeau, Francis; Pichon, Christophe; Codis, Sandrine
2014-11-01
The joint probability distribution function (PDF) of the density within multiple concentric spherical cells is considered. It is shown how its cumulant generating function can be obtained at tree order in perturbation theory as the Legendre transform of a function directly built in terms of the initial moments. In the context of the upcoming generation of large-scale structure surveys, it is conjectured that this result correctly models such a function for finite values of the variance. Detailed consequences of this assumption are explored. In particular the corresponding one-cell density probability distribution at finite variance is computed for realistic power spectra, taking into account its scale variation. It is found to be in agreement with Λ -cold dark matter simulations at the few percent level for a wide range of density values and parameters. Related explicit analytic expansions at the low and high density tails are given. The conditional (at fixed density) and marginal probability of the slope—the density difference between adjacent cells—and its fluctuations is also computed from the two-cell joint PDF; it also compares very well to simulations. It is emphasized that this could prove useful when studying the statistical properties of voids as it can serve as a statistical indicator to test gravity models and/or probe key cosmological parameters.
The unexpectedly large dust and gas content of quiescent galaxies at z > 1.4
NASA Astrophysics Data System (ADS)
Gobat, R.; Daddi, E.; Magdis, G.; Bournaud, F.; Sargent, M.; Martig, M.; Jin, S.; Finoguenov, A.; Béthermin, M.; Hwang, H. S.; Renzini, A.; Wilson, G. W.; Aretxaga, I.; Yun, M.; Strazzullo, V.; Valentino, F.
2018-03-01
Early-type galaxies (ETGs) contain most of the stars present in the local Universe and, above a stellar mass content of 5 × 10^10 solar masses, vastly outnumber spiral galaxies such as the Milky Way. These massive spheroidal galaxies have, in the present day, very little gas or dust in proportion to their mass1, and their stellar populations have been evolving passively for over 10 billion years. The physical mechanisms that led to the termination of star formation in these galaxies and depletion of their interstellar medium remain largely conjectural. In particular, there are currently no direct measurements of the amount of residual gas that might still be present in newly quiescent spheroidals at high redshift2. Here we show that quiescent ETGs at redshift z ≈ 1.8, close to their epoch of quenching, contained at least two orders of magnitude more dust at a fixed stellar mass compared with local ETGs. This implies the presence of substantial amounts of gas (5-10%), which has been consumed less efficiently than in more active galaxies, probably due to their spheroidal morphology, consistent with our simulations. This lower star formation efficiency, combined with an extended hot gas halo possibly maintained by persistent feedback from an active galactic nucleus, keeps ETGs mostly passive throughout cosmic time.
Concordance cosmology without dark energy
NASA Astrophysics Data System (ADS)
Rácz, Gábor; Dobos, László; Beck, Róbert; Szapudi, István; Csabai, István
2017-07-01
According to the separate universe conjecture, spherically symmetric sub-regions in an isotropic universe behave like mini-universes with their own cosmological parameters. This is an excellent approximation in both Newtonian and general relativistic theories. We estimate local expansion rates for a large number of such regions, and use a scale parameter calculated from the volume-averaged increments of local scale parameters at each time step in an otherwise standard cosmological N-body simulation. The particle mass, corresponding to a coarse graining scale, is an adjustable parameter. This mean field approximation neglects tidal forces and boundary effects, but it is the first step towards a non-perturbative statistical estimation of the effect of non-linear evolution of structure on the expansion rate. Using our algorithm, a simulation with an initial Ωm = 1 Einstein-de Sitter setting closely tracks the expansion and structure growth history of the Λ cold dark matter (ΛCDM) cosmology. Due to small but characteristic differences, our model can be distinguished from the ΛCDM model by future precision observations. Moreover, our model can resolve the emerging tension between local Hubble constant measurements and the Planck best-fitting cosmology. Further improvements to the simulation are necessary to investigate light propagation and confirm full consistency with cosmic microwave background observations.
ERIC Educational Resources Information Center
Norton, Anderson
2008-01-01
This article reports on students' learning through conjecturing, by drawing on a semester-long teaching experiment with 6 sixth-grade students. It focuses on 1 of the students, Josh, who developed especially powerful ways of operating over the course of the teaching experiment. Through a fine-grained analysis of Josh's actions, this article…
Some recent progress in classical general relativity
NASA Astrophysics Data System (ADS)
Finster, Felix; Smoller, Joel; Yau, Shing-Tung
2000-06-01
In this short survey paper, we shall discuss certain recent results in classical gravity. Our main attention will be restricted to two topics in which we have been involved; the positive mass conjecture and its extensions to the case with horizons, including the Penrose conjecture (Part I), and the interaction of gravity with other force fields and quantum-mechanical particles (Part II).
ERIC Educational Resources Information Center
O'Dell, Jenna R.
2017-01-01
The goal of this study was to document the characteristics of students' dispositions towards mathematics when they engaged in the exploration of parts of unsolved problems: Graceful Tree Conjecture and Collatz Conjecture. Ten students, Grades 4 and 5, from an after-school program in the Midwest participated in the study. I focused on the…
Inquiry Based Learning: A Modified Moore Method Approach To Encourage Student Research
ERIC Educational Resources Information Center
McLoughlin, M. Padraig M. M.
2008-01-01
The author of this paper submits that a mathematics student needs to learn to conjecture and prove or disprove said conjecture. Ergo, the purpose of the paper is to submit the thesis that learning requires doing; only through inquiry is learning achieved, and hence this paper proposes a programme of use of a modified Moore method (MMM) across the…
Strong monogamy conjecture for multiqubit entanglement: the four-qubit case.
Regula, Bartosz; Di Martino, Sara; Lee, Soojoon; Adesso, Gerardo
2014-09-12
We investigate the distribution of bipartite and multipartite entanglement in multiqubit states. In particular, we define a set of monogamy inequalities sharpening the conventional Coffman-Kundu-Wootters constraints, and we provide analytical proofs of their validity for relevant classes of states. We present extensive numerical evidence validating the conjectured strong monogamy inequalities for arbitrary pure states of four qubits.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guha, Saikat; Shapiro, Jeffrey H.; Erkmen, Baris I.
Previous work on the classical information capacities of bosonic channels has established the capacity of the single-user pure-loss channel, bounded the capacity of the single-user thermal-noise channel, and bounded the capacity region of the multiple-access channel. The latter is a multiple-user scenario in which several transmitters seek to simultaneously and independently communicate to a single receiver. We study the capacity region of the bosonic broadcast channel, in which a single transmitter seeks to simultaneously and independently communicate to two different receivers. It is known that the tightest available lower bound on the capacity of the single-user thermal-noise channel is that channel's capacity if, as conjectured, the minimum von Neumann entropy at the output of a bosonic channel with additive thermal noise occurs for coherent-state inputs. Evidence in support of this minimum output entropy conjecture has been accumulated, but a rigorous proof has not been obtained. We propose a minimum output entropy conjecture that, if proved to be correct, will establish that the capacity region of the bosonic broadcast channel equals the inner bound achieved using a coherent-state encoding and optimum detection. We provide some evidence that supports this conjecture, but again a full proof is not available.
Koltun, G.F.; Holtschlag, David J.
2010-01-01
Bootstrapping techniques employing random subsampling were used with the AFINCH (Analysis of Flows In Networks of CHannels) model to gain insights into the effects of variation in streamflow-gaging-network size and composition on the accuracy and precision of streamflow estimates at ungaged locations in the 0405 (Southeast Lake Michigan) hydrologic subregion. AFINCH uses stepwise-regression techniques to estimate monthly water yields from catchments based on geospatial-climate and land-cover data in combination with available streamflow and water-use data. Calculations are performed on a hydrologic-subregion scale for each catchment and stream reach contained in a National Hydrography Dataset Plus (NHDPlus) subregion. Water yields from contributing catchments are multiplied by catchment areas and resulting flow values are accumulated to compute streamflows in stream reaches which are referred to as flow lines. AFINCH imposes constraints on water yields to ensure that observed streamflows are conserved at gaged locations. Data from the 0405 hydrologic subregion (referred to as Southeast Lake Michigan) were used for the analyses. Daily streamflow data were measured in the subregion for 1 or more years at a total of 75 streamflow-gaging stations during the analysis period which spanned water years 1971–2003. The number of streamflow gages in operation each year during the analysis period ranged from 42 to 56 and averaged 47. Six sets (one set for each censoring level), each composed of 30 random subsets of the 75 streamflow gages, were created by censoring (removing) approximately 10, 20, 30, 40, 50, and 75 percent of the streamflow gages (the actual percentage of operating streamflow gages censored for each set varied from year to year, and within the year from subset to subset, but averaged approximately the indicated percentages).Streamflow estimates for six flow lines each were aggregated by censoring level, and results were analyzed to assess (a) how the size and composition of the streamflow-gaging network affected the average apparent errors and variability of the estimated flows and (b) whether results for certain months were more variable than for others. The six flow lines were categorized into one of three types depending upon their network topology and position relative to operating streamflow-gaging stations. Statistical analysis of the model results indicates that (1) less precise (that is, more variable) estimates resulted from smaller streamflow-gaging networks as compared to larger streamflow-gaging networks, (2) precision of AFINCH flow estimates at an ungaged flow line is improved by operation of one or more streamflow gages upstream and (or) downstream in the enclosing basin, (3) no consistent seasonal trend in estimate variability was evident, and (4) flow lines from ungaged basins appeared to exhibit the smallest absolute apparent percent errors (APEs) and smallest changes in average APE as a function of increasing censoring level. The counterintuitive results described in item (4) above likely reflect both the nature of the base-streamflow estimate from which the errors were computed and insensitivity in the average model-derived estimates to changes in the streamflow-gaging-network size and composition. Another analysis demonstrated that errors for flow lines in ungaged basins have the potential to be much larger than indicated by their APEs if measured relative to their true (but unknown) flows. 
“Missing gage” analyses, based on examination of censoring subset results where the streamflow gage of interest was omitted from the calibration data set, were done to better understand the true error characteristics for ungaged flow lines as a function of network size. Results examined for 2 water years indicated that the probability of computing a monthly streamflow estimate within 10 percent of the true value with AFINCH decreased from greater than 0.9 at about a 10-percent network-censoring level to less than 0.6 as the censoring level approached 75 percent. In addition, estimates for typically dry months tended to be characterized by larger percent errors than typically wetter months.
Rejoice in the hubris: useful things biologists could do for physicists
NASA Astrophysics Data System (ADS)
Austin, Robert H.
2014-10-01
Political correctness urges us to state how wonderful it is to work with biologists and how, just as the lion will someday lie down with the lamb, so will interdisciplinary work, where biologists and physicists are mixed together in light, airy buildings designed to force socialization, give rise to wonderful new science. But it has been said that the only drive in human nature stronger than the sex drive is the drive to censor and suppress, and so I claim that it is OK for physicists and biologists to maintain a wary distance from each other, so that neither one censors or suppresses the wild ideas of the other.
Survival Data and Regression Models
NASA Astrophysics Data System (ADS)
Grégoire, G.
2014-12-01
We start this chapter by introducing some basic elements for the analysis of censored survival data. Then we focus on right censored data and develop two types of regression models. The first one concerns the so-called accelerated failure time models (AFT), which are parametric models where a function of a parameter depends linearly on the covariables. The second one is a semiparametric model, where the covariables enter in a multiplicative form in the expression of the hazard rate function. The main statistical tool for analysing these regression models is the maximum likelihood methodology; although we recall some essential results about ML theory, we refer to the chapter "Logistic Regression" for a more detailed presentation.
Statistical methods for astronomical data with upper limits. I - Univariate distributions
NASA Technical Reports Server (NTRS)
Feigelson, E. D.; Nelson, P. I.
1985-01-01
The statistical treatment of univariate censored data is discussed. A heuristic derivation of the Kaplan-Meier maximum-likelihood estimator from first principles is presented which results in an expression amenable to analytic error analysis. Methods for comparing two or more censored samples are given along with simple computational examples, stressing the fact that most astronomical problems involve upper limits while the standard mathematical methods require lower limits. The application of univariate survival analysis to six data sets in the recent astrophysical literature is described, and various aspects of the use of survival analysis in astronomy, such as the limitations of various two-sample tests and the role of parametric modelling, are discussed.
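For reference, a bare-bones product-limit (Kaplan-Meier) estimator for right-censored data looks like the sketch below. As the abstract notes, astronomical upper limits are left-censored, so in practice the variable is flipped (x → C − x for a constant C larger than every value) before applying it; the toy data here are invented.

```python
import numpy as np

def kaplan_meier(times, event):
    """Product-limit survival estimate for right-censored data.
    times: observed values; event: True if detected/failed, False if censored."""
    order = np.argsort(times)
    times, event = np.asarray(times)[order], np.asarray(event)[order]
    surv, estimates = 1.0, []
    for t in np.unique(times[event]):
        at_risk = np.sum(times >= t)           # still under observation just before t
        deaths = np.sum((times == t) & event)  # events exactly at t
        surv *= 1.0 - deaths / at_risk
        estimates.append((t, surv))
    return estimates

# Toy right-censored sample: censored points lower the risk set but add no drop.
for t, s in kaplan_meier([2.0, 3.5, 3.5, 4.1, 5.0, 6.2],
                         [True, True, False, True, False, True]):
    print(f"S({t}) = {s:.3f}")
```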
NASA Astrophysics Data System (ADS)
Grassl, Markus; Scott, Andrew J.
2017-12-01
We present a conjectured family of symmetric informationally complete positive operator valued measures which have an additional symmetry group whose size is growing with the dimension. The symmetry group is related to Fibonacci numbers, while the dimension is related to Lucas numbers. The conjecture is supported by exact solutions for dimensions d = 4, 8, 19, 48, 124, and 323 as well as a numerical solution for dimension d = 844.
BPS States, Torus Links and Wild Character Varieties
NASA Astrophysics Data System (ADS)
Diaconescu, Duiliu-Emanuel; Donagi, Ron; Pantev, Tony
2018-02-01
A string theoretic framework is constructed relating the cohomology of wild character varieties to refined stable pair theory and torus link invariants. Explicit conjectural formulas are derived for wild character varieties with a unique irregular point on the projective line. For this case, this leads to a conjectural colored generalization of existing results of Hausel, Mereb and Wong as well as Shende, Treumann and Zaslow.
The Relativistic Geometry and Dynamics of Electrons
NASA Astrophysics Data System (ADS)
Atiyah, M. F.; Malkoun, J.
2018-02-01
Atiyah and Sutcliffe (Proc R Soc Lond Ser A 458:1089-1115, 2002) made a number of conjectures about configurations of N distinct points in hyperbolic 3-space, arising from ideas of Berry and Robbins (Proc R Soc Lond Ser A 453:1771-1790, 1997). In this paper we prove all these conjectures, purely geometrically, but we also provide a physical interpretation in terms of Electrons.
Circumscribing Circumscription. A Guide to Relevance and Incompleteness,
1985-10-01
other rules of conjecture, to account for resource limitations. This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency.
To Produce Conjectures and to Prove Them within a Dynamic Geometry Environment: A Case Study
ERIC Educational Resources Information Center
Furinghetti, Fulvia; Paola, Domingo
2003-01-01
This paper analyses a case study of a pair of students working together, who were asked to produce conjectures and to validate them within the dynamic geometry environment Cabri. Our aim is to scrutinize the students' reasoning, how the gap from perception to theory is filled, how Cabri influences the reasoning. We have singled out a sequence of…
Black hole remnants and the information loss paradox
NASA Astrophysics Data System (ADS)
Chen, P.; Ong, Y. C.; Yeom, D.-h.
2015-11-01
Forty years after the discovery of Hawking radiation, its exact nature remains elusive. If Hawking radiation does not carry any information out from the ever shrinking black hole, it seems that unitarity is violated once the black hole completely evaporates. On the other hand, attempts to recover information via quantum entanglement lead to the firewall controversy. Amid the confusions, the possibility that black hole evaporation stops with a "remnant" has remained unpopular and is often dismissed due to some "undesired properties" of such an object. Nevertheless, as in any scientific debate, the pros and cons of any proposal must be carefully scrutinized. We fill in the void of the literature by providing a timely review of various types of black hole remnants, and provide some new thoughts regarding the challenges that black hole remnants face in the context of the information loss paradox and its latest incarnation, namely the firewall controversy. The importance of understanding the role of curvature singularity is also emphasized, after all there remains a possibility that the singularity cannot be cured even by quantum gravity. In this context a black hole remnant conveniently serves as a cosmic censor. We conclude that a remnant remains a possible end state of Hawking evaporation, and if it contains large interior geometry, may help to ameliorate the information loss paradox and the firewall controversy. We hope that this will raise some interests in the community to investigate remnants more critically but also more thoroughly.
Self-force as a cosmic censor in the Kerr overspinning problem
NASA Astrophysics Data System (ADS)
Colleoni, Marta; Barack, Leor; Shah, Abhay G.; van de Meent, Maarten
2015-10-01
It is known that a near-extremal Kerr black hole can be spun up beyond its extremal limit by capturing a test particle. Here we show that overspinning is always averted once backreaction from the particle's own gravity is properly taken into account. We focus on nonspinning, uncharged, massive particles thrown in along the equatorial plane and work in the first-order self-force approximation (i.e., we include all relevant corrections to the particle's acceleration through linear order in the ratio, assumed small, between the particle's energy and the black hole's mass). Our calculation is a numerical implementation of a recent analysis by two of us [Phys. Rev. D 91, 104024 (2015)], in which a necessary and sufficient "censorship" condition was formulated for the capture scenario, involving certain self-force quantities calculated on the one-parameter family of unstable circular geodesics in the extremal limit. The self-force information accounts both for radiative losses and for the finite-mass correction to the critical value of the impact parameter. Here we obtain the required self-force data and present strong evidence to suggest that captured particles never drive the black hole beyond its extremal limit. We show, however, that, within our first-order self-force approximation, it is possible to reach the extremal limit with a suitable choice of initial orbital parameters. To rule out such a possibility would require (currently unavailable) information about higher-order self-force corrections.
Likelihoods for fixed rank nomination networks
HOFF, PETER; FOSDICK, BAILEY; VOLFOVSKY, ALEX; STOVEL, KATHERINE
2014-01-01
Many studies that gather social network data use survey methods that lead to censored, missing, or otherwise incomplete information. For example, the popular fixed rank nomination (FRN) scheme, often used in studies of schools and businesses, asks study participants to nominate and rank at most a small number of contacts or friends, leaving the existence of other relations uncertain. However, most statistical models are formulated in terms of completely observed binary networks. Statistical analyses of FRN data with such models ignore the censored and ranked nature of the data and could potentially result in misleading statistical inference. To investigate this possibility, we compare Bayesian parameter estimates obtained from a likelihood for complete binary networks with those obtained from likelihoods that are derived from the FRN scheme, and therefore accommodate the ranked and censored nature of the data. We show analytically and via simulation that the binary likelihood can provide misleading inference, particularly for certain model parameters that relate network ties to characteristics of individuals and pairs of individuals. We also compare these different likelihoods in a data analysis of several adolescent social networks. For some of these networks, the parameter estimates from the binary and FRN likelihoods lead to different conclusions, indicating the importance of analyzing FRN data with a method that accounts for the FRN survey design. PMID:25110586
Nie, Hui; Cheng, Jing; Small, Dylan S
2011-12-01
In many clinical studies with a survival outcome, administrative censoring occurs when follow-up ends at a prespecified date and many subjects are still alive. An additional complication in some trials is that there is noncompliance with the assigned treatment. For this setting, we study the estimation of the causal effect of treatment on survival probability up to a given time point among those subjects who would comply with the assignment to both treatment and control. We first discuss the standard instrumental variable (IV) method for survival outcomes and parametric maximum likelihood methods, and then develop an efficient plug-in nonparametric empirical maximum likelihood estimation (PNEMLE) approach. The PNEMLE method does not make any assumptions on outcome distributions, and makes use of the mixture structure in the data to gain efficiency over the standard IV method. Theoretical results of the PNEMLE are derived and the method is illustrated by an analysis of data from a breast cancer screening trial. From our limited mortality analysis with administrative censoring times 10 years into the follow-up, we find a significant benefit of screening is present after 4 years (at the 5% level) and this persists at 10 years follow-up. © 2011, The International Biometric Society.
Accounting for undetected compounds in statistical analyses of mass spectrometry 'omic studies.
Taylor, Sandra L; Leiserowitz, Gary S; Kim, Kyoungmi
2013-12-01
Mass spectrometry is an important high-throughput technique for profiling small molecular compounds in biological samples and is widely used to identify potential diagnostic and prognostic compounds associated with disease. Commonly, the data generated by mass spectrometry have many missing values, which result when a compound is absent from a sample or is present at a concentration below the detection limit. Several strategies are available for statistically analyzing data with missing values. The accelerated failure time (AFT) model assumes all missing values result from censoring below a detection limit. Under a mixture model, missing values can result from a combination of censoring and the absence of a compound. We compare power and estimation of a mixture model to an AFT model. Based on simulated data, we found the AFT model to have greater power to detect differences in means and point mass proportions between groups. However, the AFT model yielded biased estimates, with the bias increasing as the proportion of observations in the point mass increased, whereas estimates from the mixture model were unbiased except when all missing observations came from censoring. These findings suggest using the AFT model for hypothesis testing and the mixture model for estimation. We demonstrated this approach through application to glycomics data of serum samples from women with ovarian cancer and matched controls.
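Assuming a single known detection limit and log-normal concentrations, the censoring-only (AFT-type) likelihood contrasted here with the mixture model can be sketched as a censored-normal fit; the data values below are invented, and a mixture model would add a point-mass (compound-absent) probability to the censored term.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# AFT-style censored-normal likelihood on log-concentrations with one detection
# limit (illustrative data only).
log_lod = np.log(0.5)
log_conc = np.log(np.array([0.8, 1.3, 0.6, 2.4, 0.9]))      # detected samples
n_below = 4                                                  # samples below the LOD

def neg_log_lik(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    ll_detected = norm.logpdf(log_conc, mu, sigma).sum()
    ll_censored = n_below * norm.logcdf((log_lod - mu) / sigma)   # P(value < LOD)
    return -(ll_detected + ll_censored)

fit = minimize(neg_log_lik, x0=[log_conc.mean(), 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
print(f"censored-normal fit: mu = {mu_hat:.2f}, sigma = {sigma_hat:.2f} (log scale)")
```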
Jeon, Jihyoun; Hsu, Li; Gorfine, Malka
2012-07-01
Frailty models are useful for measuring unobserved heterogeneity in risk of failures across clusters, providing cluster-specific risk prediction. In a frailty model, the latent frailties shared by members within a cluster are assumed to act multiplicatively on the hazard function. In order to obtain parameter and frailty variate estimates, we consider the hierarchical likelihood (H-likelihood) approach (Ha, Lee and Song, 2001. Hierarchical-likelihood approach for frailty models. Biometrika 88, 233-243) in which the latent frailties are treated as "parameters" and estimated jointly with other parameters of interest. We find that the H-likelihood estimators perform well when the censoring rate is low, however, they are substantially biased when the censoring rate is moderate to high. In this paper, we propose a simple and easy-to-implement bias correction method for the H-likelihood estimators under a shared frailty model. We also extend the method to a multivariate frailty model, which incorporates complex dependence structure within clusters. We conduct an extensive simulation study and show that the proposed approach performs very well for censoring rates as high as 80%. We also illustrate the method with a breast cancer data set. Since the H-likelihood is the same as the penalized likelihood function, the proposed bias correction method is also applicable to the penalized likelihood estimators.
Quick, Harrison; Groth, Caroline; Banerjee, Sudipto; Carlin, Bradley P.; Stenzel, Mark R.; Stewart, Patricia A.; Sandler, Dale P.; Engel, Lawrence S.; Kwok, Richard K.
2014-01-01
This paper develops a hierarchical framework for identifying spatiotemporal patterns in data with a high degree of censoring using the gradient process. To do this, we impute censored values using a sampling-based inverse CDF method within our Markov chain Monte Carlo algorithm, thereby avoiding burdensome integration and facilitating efficient estimation of other model parameters. We illustrate use of our methodology using a simulated data example, and uncover the danger of simply substituting a space- and time-constant function of the level of detection for all missing values. We then fit our model to area measurement data of volatile organic compounds (VOC) air concentrations collected on vessels supporting the response and clean-up efforts of the Deepwater Horizon oil release that occurred starting April 20, 2010. These data contained a high percentage of observations below the detectable limits of the measuring instrument. Despite this, we were still able to make some interesting discoveries, including elevated levels of VOC near the site of the oil well on June 26th. Using the results from this preliminary analysis, we hope to inform future research on the Deepwater Horizon study, including the use of gradient methods for assigning workers to exposure categories. PMID:25599019
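The sampling-based inverse-CDF imputation step can be illustrated as drawing from a normal distribution truncated above at the detection limit, conditional on the current parameter values in the chain; the mean, standard deviation, and limit below are placeholders, not values from the Deepwater Horizon data.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def impute_below_lod(mu, sigma, lod, size, rng):
    """Draw values from N(mu, sigma^2) truncated above at the detection limit,
    via the inverse-CDF trick used inside a Gibbs/MCMC update."""
    p_lod = norm.cdf((lod - mu) / sigma)         # probability mass below the limit
    u = rng.uniform(0.0, p_lod, size=size)       # uniform draw restricted to that mass
    return mu + sigma * norm.ppf(u)              # map back through the inverse CDF

# e.g. current MCMC state says log-VOC ~ N(-1.0, 0.8^2), detection limit exp(-2.3)
draws = impute_below_lod(mu=-1.0, sigma=0.8, lod=-2.3, size=5, rng=rng)
print(draws)        # every draw lies below the limit of detection
```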
The Statistical Value of Raw Fluorescence Signal in Luminex xMAP Based Multiplex Immunoassays
Breen, Edmond J.; Tan, Woei; Khan, Alamgir
2016-01-01
Tissue samples (plasma, saliva, serum or urine) from 169 patients classified as either normal or having one of seven possible diseases are analysed across three 96-well plates for the presences of 37 analytes using cytokine inflammation multiplexed immunoassay panels. Censoring for concentration data caused problems for analysis of the low abundant analytes. Using fluorescence analysis over concentration based analysis allowed analysis of these low abundant analytes. Mixed-effects analysis on the resulting fluorescence and concentration responses reveals a combination of censoring and mapping the fluorescence responses to concentration values, through a 5PL curve, changed observed analyte concentrations. Simulation verifies this, by showing a dependence on the mean florescence response and its distribution on the observed analyte concentration levels. Differences from normality, in the fluorescence responses, can lead to differences in concentration estimates and unreliable probabilities for treatment effects. It is seen that when fluorescence responses are normally distributed, probabilities of treatment effects for fluorescence based t-tests has greater statistical power than the same probabilities from concentration based t-tests. We add evidence that the fluorescence response, unlike concentration values, doesn’t require censoring and we show with respect to differential analysis on the fluorescence responses that background correction is not required. PMID:27243383
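The mapping between fluorescence response and concentration referred to here is the usual five-parameter logistic (5PL) standard curve; a sketch of the forward curve and its back-calculation is below, with entirely hypothetical parameter values (A taken as the zero-dose response, D as the saturating response).

```python
import numpy as np

# Usual 5PL standard-curve form for xMAP-style assays (parameter values made up).
def five_pl(conc, A, B, C, D, E):
    """Fluorescence as a function of concentration."""
    return D + (A - D) / (1.0 + (conc / C) ** B) ** E

def five_pl_inverse(mfi, A, B, C, D, E):
    """Back-calculate concentration from median fluorescence intensity."""
    return C * (((A - D) / (mfi - D)) ** (1.0 / E) - 1.0) ** (1.0 / B)

params = dict(A=35.0, B=1.2, C=150.0, D=12000.0, E=0.9)   # hypothetical fit
mfi = five_pl(80.0, **params)
print(f"MFI at 80 pg/mL: {mfi:.1f}; recovered conc: {five_pl_inverse(mfi, **params):.1f} pg/mL")
```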
Wang, Wei; Griswold, Michael E
2016-11-30
The random effect Tobit model is a regression model that accommodates both left- and/or right-censoring and within-cluster dependence of the outcome variable. Regression coefficients of random effect Tobit models have conditional interpretations on a constructed latent dependent variable and do not provide inference of overall exposure effects on the original outcome scale. Marginalized random effects model (MREM) permits likelihood-based estimation of marginal mean parameters for the clustered data. For random effect Tobit models, we extend the MREM to marginalize over both the random effects and the normal space and boundary components of the censored response to estimate overall exposure effects at population level. We also extend the 'Average Predicted Value' method to estimate the model-predicted marginal means for each person under different exposure status in a designated reference group by integrating over the random effects and then use the calculated difference to assess the overall exposure effect. The maximum likelihood estimation is proposed utilizing a quasi-Newton optimization algorithm with Gauss-Hermite quadrature to approximate the integration of the random effects. We use these methods to carefully analyze two real datasets. Copyright © 2016 John Wiley & Sons, Ltd.
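The Gauss-Hermite step used to integrate over the random effects can be sketched in isolation: after the change of variables b = √2 σ_b t, the quadrature nodes and weights approximate an expectation over N(0, σ_b²). The integrand g below is a hypothetical probit-style mean, not the actual marginalized Tobit likelihood contribution.

```python
import numpy as np
from scipy.stats import norm

# Gauss-Hermite approximation of E[g(b)] with b ~ N(0, sigma_b^2), the kind of
# integral a marginalized random-effects likelihood needs (g is illustrative).
nodes, weights = np.polynomial.hermite.hermgauss(20)   # physicists' Hermite rule

def marginal_mean(g, sigma_b):
    # substitute b = sqrt(2)*sigma_b*t so the exp(-t^2) weight is absorbed
    b = np.sqrt(2.0) * sigma_b * nodes
    return np.sum(weights * g(b)) / np.sqrt(np.pi)

g = lambda b: norm.cdf(0.3 + b)          # e.g. P(Y uncensored | b) under a probit link
print(f"E_b[g(b)] ~ {marginal_mean(g, sigma_b=1.2):.4f}")
```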
Evaluation of High-Throughput Chemical Exposure Models ...
The U.S. EPA, under its ExpoCast program, is developing high-throughput near-field modeling methods to estimate human chemical exposure and to provide real-world context to high-throughput screening (HTS) hazard data. These novel modeling methods include reverse methods to infer parent chemical exposures from biomonitoring measurements and forward models to predict multi-pathway exposures from chemical use information and/or residential media concentrations. Here, both forward and reverse modeling methods are used to characterize the relationship between matched near-field environmental (air and dust) and biomarker measurements. Indoor air, house dust, and urine samples from a sample of 120 females (aged 60 to 80 years) were analyzed. In the measured data, 78% of the residential media measurements (across 80 chemicals) and 54% of the urine measurements (across 21 chemicals) were censored, i.e. below the limit of quantification (LOQ). Because of the degree of censoring, we applied a Bayesian approach to impute censored values for 69 chemicals having at least 15% of measurements above LOQ. This resulted in 10 chemicals (5 phthalates, 5 pesticides) with matched air, dust, and urine metabolite measurements. The population medians of indoor air and dust concentrations were compared to population median exposures inferred from urine metabolites concentrations using a high-throughput reverse-dosimetry approach. Median air and dust concentrations were found to be correl
Why did Einstein reject the November tensor in 1912-1913, only to come back to it in November 1915?
NASA Astrophysics Data System (ADS)
Weinstein, Galina
2018-05-01
The question of Einstein's rejection of the November tensor is re-examined in light of conflicting answers by several historians. I discuss these conflicting conjectures in view of three questions that should inform our thinking: Why did Einstein reject the November tensor in 1912, only to come back to it in 1915? Why was it hard for Einstein to recognize that the November tensor is a natural generalization of Newton's law of gravitation? Why did it take him three years to realize that the November tensor is not incompatible with Newton's law? I first briefly describe Einstein's work in the Zurich Notebook. I then discuss a number of interpretive conjectures formulated by historians and what may be inferred from them. Finally, I offer a new combined conjecture that answers the above questions.
Synchronous correlation matrices and Connes’ embedding conjecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dykema, Kenneth J., E-mail: kdykema@math.tamu.edu; Paulsen, Vern, E-mail: vern@math.uh.edu
In the work of Paulsen et al. [J. Funct. Anal. (in press); preprint arXiv:1407.6918], the concept of synchronous quantum correlation matrices was introduced and these were shown to correspond to traces on certain C*-algebras. In particular, synchronous correlation matrices arose in their study of various versions of quantum chromatic numbers of graphs and other quantum versions of graph theoretic parameters. In this paper, we develop these ideas further, focusing on the relations between synchronous correlation matrices and microstates. We prove that Connes’ embedding conjecture is equivalent to the equality of two families of synchronous quantum correlation matrices. We prove that if Connes’ embedding conjecture has a positive answer, then the tracial rank and projective rank are equal for every graph. We then apply these results to more general non-local games.
Taber, David J; Gebregziabher, Mulugeta; Payne, Elizabeth H; Srinivas, Titte; Baliga, Prabhakar K; Egede, Leonard E
2017-02-01
Black kidney transplant recipients experience disproportionately high rates of graft loss. This disparity has persisted for 40 years, and improvements may be impeded based on the current public reporting of overall graft loss by US regulatory organizations for transplantation. Longitudinal cohort study of kidney transplant recipients using a data set created by linking Veterans Affairs and US Renal Data System information, including 4918 veterans transplanted between January 2001 and December 2007, with follow-up through December 2010. Multivariable analysis was conducted using 2-stage joint modeling of random and fixed effects of longitudinal data (linear mixed model) with time to event outcomes (Cox regression). Three thousand three hundred six non-Hispanic whites (67%) were compared with 1612 non-Hispanic black (33%) recipients with 6.0 ± 2.2 years of follow-up. In the unadjusted analysis, black recipients were significantly more likely to have overall graft loss (hazard ratio [HR], 1.19; 95% confidence interval [95% CI], 1.07-1.33), death-censored graft loss (HR, 1.67; 95% CI, 1.45-1.92), and lower mortality (HR, 0.83; 95% CI, 0.72-0.96). In fully adjusted models, only death-censored graft loss remained significant (HR, 1.38; 95% CI, 1.12-1.71; overall graft loss [HR, 1.08; 95% CI, 0.91-1.28]; mortality [HR, 0.84; 95% CI, 0.67-1.06]). A composite definition of graft loss reduced the magnitude of disparities in blacks by 22%. Non-Hispanic black kidney transplant recipients experience a substantial disparity in graft loss, but not mortality. This study of US data provides evidence to suggest that researchers should focus on using death-censored graft loss as the primary outcome of interest to facilitate a better understanding of racial disparities in kidney transplantation.
van Londen, Marco; Aarts, Brigitte M; Deetman, Petronella E; van der Weijden, Jessica; Eisenga, Michele F; Navis, Gerjan; Bakker, Stephan J L; de Borst, Martin H
2017-08-07
Hypophosphatemia is common in the first year after kidney transplantation, but its clinical implications are unclear. We investigated the relationship between the severity of post-transplant hypophosphatemia and mortality or death-censored graft failure in a large cohort of renal transplant recipients with long-term follow-up. We performed a longitudinal cohort study in 957 renal transplant recipients who were transplanted between 1993 and 2008 at a single center. We used a large real-life dataset containing 28,178 phosphate measurements (median of 27 [first to third quartiles, 23-34] serial measurements per patient) and selected the lowest intraindividual phosphate level during the first year after transplantation. The primary outcomes were all-cause mortality, cardiovascular mortality, and death-censored graft failure. The median (interquartile range) intraindividual lowest phosphate level was 1.58 (1.30-1.95) mg/dl, and it was reached at 33 (21-51) days post-transplant. eGFR was the main correlate of the lowest serum phosphate level (model R²=0.32). During 9 (5-12) years of follow-up, 181 (19%) patients developed graft failure, and 295 (35%) patients died, of which 94 (32%) deaths were due to cardiovascular disease. In multivariable Cox regression analysis, more severe hypophosphatemia was associated with a lower risk of death-censored graft failure (fully adjusted hazard ratio, 0.61; 95% confidence interval, 0.43 to 0.88 per 1 mg/dl lower serum phosphate) and cardiovascular mortality (fully adjusted hazard ratio, 0.37; 95% confidence interval, 0.22 to 0.62) but not noncardiovascular mortality (fully adjusted hazard ratio, 1.33; 95% confidence interval, 0.9 to 1.96) or all-cause mortality (fully adjusted hazard ratio, 1.15; 95% confidence interval, 0.81 to 1.61). Post-transplant hypophosphatemia develops early after transplantation. These data connect post-transplant hypophosphatemia with favorable long-term graft and patient outcomes. Copyright © 2017 by the American Society of Nephrology.
Gillespie, Iain A; Floege, Jürgen; Gioni, Ioanna; Drüeke, Tilman B; de Francisco, Angel L; Anker, Stefan D; Kubo, Yumi; Wheeler, David C; Froissart, Marc
2015-07-01
The generalisability of randomised controlled trials (RCTs) may be limited by restrictive entry criteria or by their experimental nature. Observational research can provide complementary findings but is prone to bias. Employing propensity score matching, to reduce such bias, we compared the real-life effect of cinacalcet use on all-cause mortality (ACM) with findings from the Evaluation of Cinacalcet Therapy to Lower Cardiovascular Events (EVOLVE) RCT in chronic haemodialysis patients. Incident adult haemodialysis patients receiving cinacalcet, recruited in a prospective observational cohort from 2007-2009 (AROii; n = 10,488), were matched to non-exposed patients regardless of future exposure status. The effect of treatment crossover was investigated with inverse probability of censoring weighted and lag-censored analyses. EVOLVE ACM data were analysed largely as described for the primary composite endpoint. AROii patients receiving cinacalcet (n = 532) were matched to 1790 non-exposed patients. The treatment effect of cinacalcet on ACM in the main AROii analysis (hazard ratio 1.03 [95% confidence interval (CI) 0.78-1.35]) was closer to the null than for the Intention to Treat (ITT) analysis of EVOLVE (0.94 [95%CI 0.85-1.04]). Adjusting for non-persistence by 0- and 6-month lag-censoring and by inverse probability of censoring weight, the hazard ratios in AROii (0.76 [95%CI 0.51-1.15], 0.84 [95%CI 0.60-1.18] and 0.79 [95%CI 0.56-1.11], respectively) were comparable with those of EVOLVE (0.82 [95%CI 0.67-1.01], 0.83 [95%CI 0.73-0.96] and 0.87 [95%CI 0.71-1.06], respectively). Correcting for treatment crossover, we observed results in the 'real-life' setting of the AROii observational cohort that closely mirrored the results of the EVOLVE RCT. Persistence-corrected analyses revealed a trend towards reduced ACM in haemodialysis patients receiving cinacalcet therapy. Copyright © 2015 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Zamaere, Christine Berkesch; Griffeth, Stephen; Sam, Steven V.
2014-08-01
We show that for Jack parameter α = -( k + 1)/( r - 1), certain Jack polynomials studied by Feigin-Jimbo-Miwa-Mukhin vanish to order r when k + 1 of the coordinates coincide. This result was conjectured by Bernevig and Haldane, who proposed that these Jack polynomials are model wavefunctions for fractional quantum Hall states. Special cases of these Jack polynomials include the wavefunctions of Laughlin and Read-Rezayi. In fact, along these lines we prove several vanishing theorems known as clustering properties for Jack polynomials in the mathematical physics literature, special cases of which had previously been conjectured by Bernevig and Haldane. Motivated by the method of proof, which in the case r = 2 identifies the span of the relevant Jack polynomials with the S n -invariant part of a unitary representation of the rational Cherednik algebra, we conjecture that unitary representations of the type A Cherednik algebra have graded minimal free resolutions of Bernstein-Gelfand-Gelfand type; we prove this for the ideal of the ( k + 1)-equals arrangement in the case when the number of coordinates n is at most 2 k + 1. In general, our conjecture predicts the graded S n -equivariant Betti numbers of the ideal of the ( k + 1)-equals arrangement with no restriction on the number of ambient dimensions.
Excited state correlations of the finite Heisenberg chain
NASA Astrophysics Data System (ADS)
Pozsgay, Balázs
2017-02-01
We consider short range correlations in excited states of the finite XXZ and XXX Heisenberg spin chains. We conjecture that the known results for the factorized ground state correlations can be applied to the excited states too, if the so-called physical part of the construction is changed appropriately. For the ground state we derive simple algebraic expressions for the physical part; the formulas only use the ground state Bethe roots as an input. We conjecture that the same formulas can be applied to the excited states as well, if the exact Bethe roots of the excited states are used instead. In the XXZ chain the results are expected to be valid for all states (except certain singular cases where regularization is needed), whereas in the XXX case they only apply to singlet states or group invariant operators. Our conjectures are tested against numerical data from exact diagonalization and coordinate Bethe Ansatz calculations, and perfect agreement is found in all cases. In the XXX case we also derive a new result for the nearest-neighbour correlator ⟨σ_1^z σ_2^z⟩, which is valid for non-singlet states as well. Our results build a bridge between the known theory of factorized correlations, and the recently conjectured TBA-like description for the building blocks of the construction.
Copula based flexible modeling of associations between clustered event times.
Geerdens, Candida; Claeskens, Gerda; Janssen, Paul
2016-07-01
Multivariate survival data are characterized by the presence of correlation between event times within the same cluster. First, we build multi-dimensional copulas with flexible and possibly symmetric dependence structures for such data. In particular, clustered right-censored survival data are modeled using mixtures of max-infinitely divisible bivariate copulas. Second, these copulas are fit by a likelihood approach where the vast amount of copula derivatives present in the likelihood is approximated by finite differences. Third, we formulate conditions for clustered right-censored survival data under which an information criterion for model selection is either weakly consistent or consistent. Several of the familiar selection criteria are included. A set of four-dimensional data on time-to-mastitis is used to demonstrate the developed methodology.
The Topp-Leone generalized Rayleigh cure rate model and its application
NASA Astrophysics Data System (ADS)
Nanthaprut, Pimwarat; Bodhisuwan, Winai; Patummasut, Mena
2017-11-01
The cure rate model is a survival analysis model in which a proportion of the censored subjects is considered cured. In clinical trials, data representing time to recurrence of an event or death of patients are used to improve the efficiency of treatments. Each dataset can be separated into two groups: censored and uncensored data. In this work, a new mixture cure rate model based on the Topp-Leone generalized Rayleigh distribution is introduced, and a Bayesian approach is employed to estimate its parameters. In addition, a breast cancer dataset is analyzed for model illustration purposes. According to the deviance information criterion, the Topp-Leone generalized Rayleigh cure rate model shows better results than the Weibull and exponential cure rate models.
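The mixture cure rate structure referred to here can be written generically as below, where π is the cured fraction and S_u is the latency survival function of the uncured subjects (the Topp-Leone generalized Rayleigh law in this paper, whose explicit form is not reproduced here).

```latex
% Generic mixture cure rate survival function: a cured fraction \pi never
% experiences the event; the uncured fraction follows the latency survival
% S_u(t) (here the Topp-Leone generalized Rayleigh law).
S_{\mathrm{pop}}(t) \;=\; \pi + (1-\pi)\,S_u(t),
\qquad \lim_{t\to\infty} S_{\mathrm{pop}}(t) \;=\; \pi .
```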
Heimann, G; Neuhaus, G
1998-03-01
In the random censorship model, the log-rank test is often used for comparing a control group with different dose groups. If the number of tumors is small, so-called exact methods are often applied for computing critical values from a permutational distribution. Two of these exact methods are discussed and shown to be incorrect. The correct permutational distribution is derived and studied with respect to its behavior under unequal censoring in the light of recent results proving that the permutational version and the unconditional version of the log-rank test are asymptotically equivalent even under unequal censoring. The log-rank test is studied by simulations of a realistic scenario from a bioassay with small numbers of tumors.
Fast iterative censoring CFAR algorithm for ship detection from SAR images
NASA Astrophysics Data System (ADS)
Gu, Dandan; Yue, Hui; Zhang, Yuan; Gao, Pengcheng
2017-11-01
Ship detection is one of the essential techniques for ship recognition from synthetic aperture radar (SAR) images. This paper presents a fast iterative detection procedure to eliminate the influence of target returns on the estimation of local sea clutter distributions for constant false alarm rate (CFAR) detectors. A fast block detector is first employed to extract potential target sub-images; and then, an iterative censoring CFAR algorithm is used to detect ship candidates from each target blocks adaptively and efficiently, where parallel detection is available, and statistical parameters of G0 distribution fitting local sea clutter well can be quickly estimated based on an integral image operator. Experimental results of TerraSAR-X images demonstrate the effectiveness of the proposed technique.
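The integral-image trick for fast local clutter statistics can be illustrated with a plain cell-averaging CFAR pass; the sketch below uses an exponential clutter model and a fixed threshold factor rather than the paper's G0 fit and iterative censoring, and for brevity includes the test pixel itself in the local window.

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero first row/column for easy window sums."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def window_mean(ii, r0, c0, r1, c1):
    """Mean of img[r0:r1, c0:c1] in O(1) from the summed-area table."""
    s = ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]
    return s / ((r1 - r0) * (c1 - c0))

# Toy cell-averaging CFAR pass (single iteration; no guard cells, no G0 fit).
rng = np.random.default_rng(0)
img = rng.exponential(1.0, size=(64, 64))   # synthetic sea clutter
img[32, 32] = 40.0                          # bright "ship" pixel
ii = integral_image(img)
half, k = 8, 8.0                            # window half-size and threshold factor
detections = []
for r in range(half, 64 - half):
    for c in range(half, 64 - half):
        mu = window_mean(ii, r - half, c - half, r + half + 1, c + half + 1)
        if img[r, c] > k * mu:
            detections.append((r, c))
print("detections:", detections)
```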
ERIC Educational Resources Information Center
Krueger, Alan; Rothstein, Jesse; Turner, Sarah
2006-01-01
In Grutter v. Bollinger (2003), Justice Sandra Day O'Connor conjectured that in 25 years affirmative action in college admissions will be unnecessary. We project the test score distribution of black and white college applicants 25 years from now, focusing on the role of black-white family income gaps. Economic progress alone is unlikely to narrow…
NASA Astrophysics Data System (ADS)
Yang, Run-Qiu; Niu, Chao; Zhang, Cheng-Yong; Kim, Keun-Young
2018-02-01
We compute the time-dependent complexity of the thermofield double states by four different proposals: two holographic proposals based on the "complexity-action" (CA) conjecture and "complexity-volume" (CV) conjecture, and two quantum field theoretic proposals based on the Fubini-Study metric (FS) and Finsler geometry (FG). We find that the four proposals yield both similarities and differences, which will be useful for deepening our understanding of complexity and sharpening its definition. In particular, at early time the complexity increases linearly in the CV and FG proposals, decreases linearly in the FS proposal, and does not change in the CA proposal. In the late time limit, the CA, CV and FG proposals all show that the growth rate is 2E/(πℏ), saturating Lloyd's bound, while the FS proposal shows that the growth rate is zero. It seems that the holographic CV conjecture and the field theoretic FG method are more correlated.
Constraints on axion inflation from the weak gravity conjecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rudelius, Tom, E-mail: rudelius@physics.harvard.edu
2015-09-01
We derive constraints facing models of axion inflation based on decay constant alignment from a string-theoretic and quantum gravitational perspective. In particular, we investigate the prospects for alignment and 'anti-alignment' of C_4 axion decay constants in type IIB string theory, deriving a strict no-go result in the latter case. We discuss the relationship of axion decay constants to the weak gravity conjecture and demonstrate agreement between our string-theoretic constraints and those coming from the 'generalized' weak gravity conjecture. Finally, we consider a particular model of decay constant alignment in which the potential of C_4 axions in type IIB compactifications on a Calabi-Yau three-fold is dominated by contributions from D7-branes, pointing out that this model evades some of the challenges derived earlier in our paper but is highly constrained by other geometric considerations.
Hoop conjecture for colliding black holes
NASA Astrophysics Data System (ADS)
Ida, Daisuke; Nakao, Ken-Ichi; Siino, Masaru; Hayward, Sean A.
1998-12-01
We study the collision of black holes in the Kastor-Traschen space-time, at present the only such analytic solution. We investigate the dynamics of the event horizon in the case of the collision of two equal black holes, using the ray-tracing method. We confirm that the event horizon has trouser topology and show that its set of past end points (where the horizon is nonsmooth) is a spacelike curve resembling a seam of trousers. We show that this seam has a finite length and argue that twice this length should be taken to define the minimal circumference C of the event horizon. Comparing with the asymptotic mass M, we find the inequality C < 4πM proposed by the hoop conjecture, with both sides being of similar order, C ~ 4πM. This supports the hoop conjecture as a guide to general gravitational collapse, even in the extreme case of head-on black-hole collisions.
Conjecturing via analogical reasoning constructs ordinary students to be like gifted students
NASA Astrophysics Data System (ADS)
Supratman; Ratnaningsih, N.; Ryane, S.
2017-12-01
The purpose of this study is to reveal how the knowledge of ordinary students develops to become like that of gifted students in the classroom, based on Piaget's theory. To expose this, students are given an open classical analogy problem, and the researchers examine students who conjecture via analogical reasoning in problem solving. Of the 32 students assessed through a think-aloud method and interviews, 25 conjectured via analogical reasoning. These 25 students showed almost the same character in problem solving/knowledge construction, so one student was selected for analysis of the thinking process during problem solving/knowledge construction based on Piaget's theory. According to Piaget's theory, in developing the same knowledge, gifted students and ordinary students have similar structures at final equilibrium. They begin processing with assimilation and accommodation of the problem, strategies, and relationships.
Stochastic Flux-Freezing in MHD Turbulence and Reconnection in the Heliosheath
NASA Astrophysics Data System (ADS)
Eyink, G. L.; Lalescu, C.; Vishniac, E.
2012-12-01
Fast reconnection of the sectored magnetic field in the heliosheath created by flapping of the heliospheric current sheet has been conjectured to accelerate anomalous cosmic rays and to create other signatures observed by the Voyager probes. The reconnecting flux structures could have sizes up to ˜100 AU, much larger than the ion cyclotron radius ˜10^3 km. Hence MHD should be valid at those scales. To account for rapid reconnection of such large-scale structures, we note that the high Reynolds numbers in the heliosheath for motions perpendicular to the magnetic field (Re ˜10^{14}) suggest transition to turbulence. The Lazarian-Vishniac theory of turbulent reconnection can account for the fast rates, but it implies a puzzling breakdown of magnetic flux-freezing in high-conductivity MHD plasmas. We address this paradox with a novel stochastic formulation of flux-freezing for resistive MHD and a numerical Lagrangian study with a spacetime database of MHD turbulence. We report the first observation of Richardson diffusion in MHD turbulence, which leads to "spontaneous stochasticity" of the Lagrangian trajectories and a violation of standard flux-freezing by many orders of magnitude. The work supports a prediction by Lazarian-Opher (2009) of extended thick reconnection zones within the heliosheath, perhaps up to an AU across, although the microscale reconnection events within these zones would have thickness of order the ion cyclotron radius and be described by kinetic Vlasov theory.
Stochastic Flux-Freezing in MHD Turbulence and Reconnection in the Heliosheath (Invited)
NASA Astrophysics Data System (ADS)
Eyink, G. L.; Lalescu, C. C.; Vishniac, E. T.
2013-12-01
Fast reconnection of the sectored magnetic field in the heliosheath created by flapping of the heliospheric current sheet has been conjectured to accelerate anomalous cosmic rays and to create other signatures observed by the Voyager probes. The reconnecting flux structures could have sizes up to ˜100 AU, much larger than the ion cyclotron radius ˜10^3 km. Hence MHD should be valid at those scales. To account for rapid reconnection of such large-scale structures, we note that the high Reynolds numbers in the heliosheath for motions perpendicular to the magnetic field (Re ˜10^14) suggest transition to turbulence. The Lazarian-Vishniac theory of turbulent reconnection can account for the fast rates, but it implies a puzzling breakdown of magnetic flux-freezing in high-conductivity MHD plasmas. We address this paradox with a novel stochastic formulation of flux-freezing for resistive MHD and a numerical Lagrangian study with a spacetime database of MHD turbulence. We report the first observation of Richardson diffusion in MHD turbulence, which leads to 'spontaneous stochasticity' of the Lagrangian trajectories and a violation of standard flux-freezing by many orders of magnitude. The work supports a prediction by Lazarian-Opher (2009) of extended thick reconnection zones within the heliosheath, perhaps up to an AU across, although the microscale reconnection events within these zones would have thickness of order the ion cyclotron radius and be described by kinetic Vlasov theory.
Li, Jiahui; Yu, Qiqing
2016-01-01
Dinse (Biometrics, 38:417-431, 1982) provides a special type of right-censored and masked competing risks data and proposes a non-parametric maximum likelihood estimator (NPMLE) and a pseudo MLE of the joint distribution function [Formula: see text] with such data. However, their asymptotic properties have not been studied so far. Under the extension of either the conditional masking probability (CMP) model or the random partition masking (RPM) model (Yu and Li, J Nonparametr Stat 24:753-764, 2012), we show that (1) Dinse's estimators are consistent if [Formula: see text] takes on finitely many values and each point in the support set of [Formula: see text] can be observed; (2) if the failure time is continuous, the NPMLE is not uniquely determined, and the standard approach (which puts weights only on one element in each observed set) leads to an inconsistent NPMLE; (3) in general, Dinse's estimators are not consistent even under the discrete assumption; (4) we construct a consistent NPMLE. The consistency is given under a new model called the dependent masking and right-censoring model. The CMP model and the RPM model are indeed special cases of the new model. We compare our estimator to Dinse's estimators through simulation and real data. The simulation study indicates that the consistent NPMLE is a good approximation to the underlying distribution for moderate sample sizes.
Kang, Le; Chen, Weijie; Petrick, Nicholas A.; Gallas, Brandon D.
2014-01-01
The area under the receiver operating characteristic (ROC) curve (AUC) is often used as a summary index of the diagnostic ability in evaluating biomarkers when the clinical outcome (truth) is binary. When the clinical outcome is right-censored survival time, the C index, motivated as an extension of AUC, has been proposed by Harrell as a measure of concordance between a predictive biomarker and the right-censored survival outcome. In this work, we investigate methods for statistical comparison of two diagnostic or predictive systems, of which they could either be two biomarkers or two fixed algorithms, in terms of their C indices. We adopt a U-statistics based C estimator that is asymptotically normal and develop a nonparametric analytical approach to estimate the variance of the C estimator and the covariance of two C estimators. A z-score test is then constructed to compare the two C indices. We validate our one-shot nonparametric method via simulation studies in terms of the type I error rate and power. We also compare our one-shot method with resampling methods including the jackknife and the bootstrap. Simulation results show that the proposed one-shot method provides almost unbiased variance estimations and has satisfactory type I error control and power. Finally, we illustrate the use of the proposed method with an example from the Framingham Heart Study. PMID:25399736
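As a reference point for the quantity being compared, here is a minimal, O(n^2) sketch of Harrell's C index for a single biomarker with right-censored times. The paper's contribution (the U-statistics variance estimator, the covariance of two C estimators, and the z-test) is not reproduced; names and toy data are illustrative.

```python
import numpy as np

def harrell_c(time, event, marker):
    """Naive Harrell C index: among usable pairs (the earlier time is an
    observed event), the fraction in which the subject with the higher
    marker value fails earlier; marker ties count 1/2."""
    num = den = 0.0
    n = len(time)
    for i in range(n):
        if event[i] != 1:
            continue
        for j in range(n):
            if time[i] < time[j]:            # pair (i, j) is usable
                den += 1
                if marker[i] > marker[j]:
                    num += 1
                elif marker[i] == marker[j]:
                    num += 0.5
    return num / den

# toy usage: higher marker implies shorter survival on average
rng = np.random.default_rng(0)
marker = rng.normal(size=200)
time = rng.exponential(np.exp(-marker))
event = (rng.random(200) < 0.8).astype(int)
print(harrell_c(time, event, marker))
```

Comparing two systems then amounts to forming the difference of two such estimates and standardizing it with the paper's variance and covariance estimators.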
Blanche, Paul; Proust-Lima, Cécile; Loubère, Lucie; Berr, Claudine; Dartigues, Jean-François; Jacqmin-Gadda, Hélène
2015-03-01
Thanks to the growing interest in personalized medicine, joint modeling of longitudinal marker and time-to-event data has recently started to be used to derive dynamic individual risk predictions. Individual predictions are called dynamic because they are updated when information on the subject's health profile grows with time. We focus in this work on statistical methods for quantifying and comparing dynamic predictive accuracy of this kind of prognostic models, accounting for right censoring and possibly competing events. Dynamic area under the ROC curve (AUC) and Brier Score (BS) are used to quantify predictive accuracy. Nonparametric inverse probability of censoring weighting is used to estimate dynamic curves of AUC and BS as functions of the time at which predictions are made. Asymptotic results are established and both pointwise confidence intervals and simultaneous confidence bands are derived. Tests are also proposed to compare the dynamic prediction accuracy curves of two prognostic models. The finite sample behavior of the inference procedures is assessed via simulations. We apply the proposed methodology to compare various prediction models using repeated measures of two psychometric tests to predict dementia in the elderly, accounting for the competing risk of death. Models are estimated on the French Paquid cohort and predictive accuracies are evaluated and compared on the French Three-City cohort. © 2014, The International Biometric Society.
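To make the ingredients concrete, below is a minimal sketch of an inverse-probability-of-censoring-weighted (IPCW) Brier score at a single prediction horizon for a single event type. The paper's dynamic curves, competing-risk handling, confidence bands and tests are not reproduced; tie and left-limit conventions are glossed over, and all names are illustrative.

```python
import numpy as np

def km_censoring(time, event):
    """Kaplan-Meier estimate of the censoring survival G(t) = P(C > t),
    evaluated at the sorted observation times (censoring plays the role
    of the 'event' here)."""
    order = np.argsort(time)
    t, cens = time[order], 1 - event[order]
    g, at_risk, surv = 1.0, len(t), []
    for i in range(len(t)):
        if cens[i] == 1:
            g *= 1.0 - 1.0 / at_risk
        surv.append(g)
        at_risk -= 1
    return t, np.array(surv)

def G_at(tt, gg, s):
    """Step-function evaluation of G at time s (G = 1 before the first time)."""
    idx = np.searchsorted(tt, s, side="right") - 1
    return 1.0 if idx < 0 else gg[idx]

def ipcw_brier(time, event, pred_risk, horizon):
    """IPCW Brier score at a fixed horizon; pred_risk[i] is the predicted
    probability that subject i experiences the event by the horizon."""
    tt, gg = km_censoring(time, event)
    score = 0.0
    for i in range(len(time)):
        if time[i] <= horizon and event[i] == 1:      # early observed event
            score += (1.0 - pred_risk[i]) ** 2 / G_at(tt, gg, time[i])
        elif time[i] > horizon:                       # event-free at horizon
            score += pred_risk[i] ** 2 / G_at(tt, gg, horizon)
        # subjects censored before the horizon contribute nothing directly
    return score / len(time)
```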
On prognostic models, artificial intelligence and censored observations.
Anand, S S; Hamilton, P W; Hughes, J G; Bell, D A
2001-03-01
The development of prognostic models for assisting medical practitioners with decision making is not a trivial task. Models need to possess a number of desirable characteristics and few, if any, current modelling approaches based on statistical or artificial intelligence can produce models that display all these characteristics. The inability of modelling techniques to provide truly useful models has led to interest in these models being purely academic in nature. This in turn has resulted in only a very small percentage of models that have been developed being deployed in practice. On the other hand, new modelling paradigms are being proposed continuously within the machine learning and statistical community and claims, often based on inadequate evaluation, being made on their superiority over traditional modelling methods. We believe that for new modelling approaches to deliver true net benefits over traditional techniques, an evaluation centric approach to their development is essential. In this paper we present such an evaluation centric approach to developing extensions to the basic k-nearest neighbour (k-NN) paradigm. We use standard statistical techniques to enhance the distance metric used and a framework based on evidence theory to obtain a prediction for the target example from the outcome of the retrieved exemplars. We refer to this new k-NN algorithm as Censored k-NN (Ck-NN). This reflects the enhancements made to k-NN that are aimed at providing a means for handling censored observations within k-NN.
Pal, Suvra; Balakrishnan, N
2017-10-01
In this paper, we consider a competing cause scenario and assume the number of competing causes to follow a Conway-Maxwell Poisson distribution, which can capture both the over- and under-dispersion usually encountered in discrete data. Assuming the population of interest has a cured component and the data are interval censored, as opposed to the usually considered right-censored data, the main contribution is in developing the steps of the expectation maximization algorithm for the determination of the maximum likelihood estimates of the model parameters of the flexible Conway-Maxwell Poisson cure rate model with Weibull lifetimes. An extensive Monte Carlo simulation study is carried out to demonstrate the performance of the proposed estimation method. Model discrimination within the Conway-Maxwell Poisson distribution is addressed using the likelihood ratio test and information-based criteria to select a suitable competing cause distribution that provides the best fit to the data. A simulation study is also carried out to demonstrate the loss in efficiency when an improper competing cause distribution is selected, which justifies the use of a flexible family of distributions for the number of competing causes. Finally, the proposed methodology and the flexibility of the Conway-Maxwell Poisson distribution are illustrated with two known data sets from the literature: smoking cessation data and breast cosmesis data.
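The flexibility referred to above comes from the Conway-Maxwell Poisson family, whose extra parameter ν controls dispersion but whose normalizing constant has no closed form. A minimal sketch of its log-pmf with a truncated-series normalizer is given below; the EM steps for the interval-censored cure rate model itself are not reproduced, and the truncation length is an arbitrary choice.

```python
import numpy as np
from scipy.special import gammaln
from scipy.stats import poisson

def com_poisson_logpmf(y, lam, nu, max_terms=200):
    """Conway-Maxwell Poisson log-pmf: P(Y = y) proportional to lam**y / (y!)**nu.
    The normalizing constant is approximated by a truncated series; nu = 1
    recovers the Poisson, nu > 1 gives under-dispersion, nu < 1 over-dispersion."""
    j = np.arange(max_terms)
    logZ = np.logaddexp.reduce(j * np.log(lam) - nu * gammaln(j + 1))
    return y * np.log(lam) - nu * gammaln(y + 1) - logZ

# sanity check: nu = 1 should match the Poisson log-pmf
print(com_poisson_logpmf(3, lam=2.0, nu=1.0), poisson.logpmf(3, 2.0))
```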
Casey, Michael Jin; Wen, Xuerong; Rehman, Shehzad; Santos, Alfonso H; Andreoni, Kenneth A
2015-04-01
The OPTN/UNOS Kidney Paired Donation (KPD) Pilot Program allocates priority to zero-HLA mismatches. However, in unrelated living donor kidney transplants (LDKT)-the same donor source in KPD-no study has shown whether zero-HLA mismatches provide any advantage over >0 HLA mismatches. We hypothesize that zero-HLA mismatches among unrelated LDKT do not benefit graft survival. This retrospective SRTR database study analyzed LDKT recipients from 1987 to 2012. Among unrelated LDKT, subjects with zero-HLA mismatches were compared to a 1:1-5 matched (by donor age ±1 year and year of transplantation) control cohort with >0 HLA mismatches. The primary endpoint was death-censored graft survival. Among 32,654 unrelated LDKT recipients, 83 had zero-HLA mismatches and were matched to 407 controls with >0 HLA mismatches. Kaplan-Meier analyses for death-censored graft and patient survival showed no difference between study and control cohorts. In multivariate marginal Cox models, zero-HLA mismatches saw no benefit with death-censored graft survival (HR = 1.46, 95% CI 0.78-2.73) or patient survival (HR = 1.43, 95% CI 0.68-3.01). Our data suggest that in unrelated LDKT, zero-HLA mismatches may not offer any survival advantage. Therefore, particular study of zero-HLA mismatching is needed to validate its place in the OPTN/UNOS KPD Pilot Program allocation algorithm. © 2014 Steunstichting ESOT.
Estimating the effect of a rare time-dependent treatment on the recurrent event rate.
Smith, Abigail R; Zhu, Danting; Goodrich, Nathan P; Merion, Robert M; Schaubel, Douglas E
2018-05-30
In many observational studies, the objective is to estimate the effect of treatment or state-change on the recurrent event rate. If treatment is assigned after the start of follow-up, traditional methods (eg, adjustment for baseline-only covariates or fully conditional adjustment for time-dependent covariates) may give biased results. We propose a two-stage modeling approach using the method of sequential stratification to accurately estimate the effect of a time-dependent treatment on the recurrent event rate. At the first stage, we estimate the pretreatment recurrent event trajectory using a proportional rates model censored at the time of treatment. Prognostic scores are estimated from the linear predictor of this model and used to match treated patients to as yet untreated controls based on prognostic score at the time of treatment for the index patient. The final model is stratified on matched sets and compares the posttreatment recurrent event rate to the recurrent event rate of the matched controls. We demonstrate through simulation that bias due to dependent censoring is negligible, provided the treatment frequency is low, and we investigate a threshold at which correction for dependent censoring is needed. The method is applied to liver transplant (LT), where we estimate the effect of development of post-LT End Stage Renal Disease (ESRD) on rate of days hospitalized. Copyright © 2018 John Wiley & Sons, Ltd.
Random left censoring: a second look at bone lead concentration measurements
NASA Astrophysics Data System (ADS)
Popovic, M.; Nie, H.; Chettle, D. R.; McNeill, F. E.
2007-09-01
Bone lead concentrations measured in vivo by x-ray fluorescence (XRF) are subjected to left censoring due to limited precision of the technique at very low concentrations. In the analysis of bone lead measurements, inverse variance weighting (IVW) of measurements is commonly used to estimate the mean of a data set and its standard error. Student's t-test is used to compare the IVW means of two sets, testing the hypothesis that the two sets are from the same population. This analysis was undertaken to assess the adequacy of IVW in the analysis of bone lead measurements or to confirm the results of IVW using an independent approach. The rationale is provided for the use of methods of survival data analysis in the study of XRF bone lead measurements. The procedure is provided for bone lead data analysis using the Kaplan-Meier and Nelson-Aalen estimators. The methodology is also outlined for the rank tests that are used to determine whether two censored sets are from the same population. The methods are applied on six data sets acquired in epidemiological studies. The estimated parameters and test statistics were compared with the results of the IVW approach. It is concluded that the proposed methods of statistical analysis can provide valid inference about bone lead concentrations, but the computed parameters do not differ substantially from those derived by the more widely used method of IVW.
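For readers unfamiliar with the two estimators mentioned, a minimal right-censoring sketch is given below. Applying them to left-censored concentration data is conventionally done by "flipping" the values (for example analyzing max(x) - x), which is assumed here to be essentially the study's procedure; the rank tests and the comparison with inverse variance weighting are not reproduced.

```python
import numpy as np

def kaplan_meier(time, observed):
    """Kaplan-Meier survival estimate for right-censored data.
    Left-censored concentrations are usually handled by first flipping the
    data, e.g. analyzing max(x) - x, so that nondetects become right-censored."""
    event_times = np.unique(time[observed == 1])
    surv, s = [], 1.0
    for ut in event_times:
        at_risk = np.sum(time >= ut)
        d = np.sum((time == ut) & (observed == 1))
        s *= 1.0 - d / at_risk
        surv.append(s)
    return event_times, np.array(surv)

def nelson_aalen(time, observed):
    """Nelson-Aalen cumulative hazard estimate for right-censored data."""
    event_times = np.unique(time[observed == 1])
    H, cum = [], 0.0
    for ut in event_times:
        at_risk = np.sum(time >= ut)
        d = np.sum((time == ut) & (observed == 1))
        cum += d / at_risk
        H.append(cum)
    return event_times, np.array(H)
```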
Cluster Adjacency Properties of Scattering Amplitudes in N =4 Supersymmetric Yang-Mills Theory
NASA Astrophysics Data System (ADS)
Drummond, James; Foster, Jack; Gürdoǧan, Ömer
2018-04-01
We conjecture a new set of analytic relations for scattering amplitudes in planar N =4 super Yang-Mills theory. They generalize the Steinmann relations and are expressed in terms of the cluster algebras associated to Gr (4 ,n ). In terms of the symbol, they dictate which letters can appear consecutively. We study heptagon amplitudes and integrals in detail and present symbols for previously unknown integrals at two and three loops which support our conjecture.
Syed, Hamzah; Jorgensen, Andrea L; Morris, Andrew P
2016-06-01
The aim was to evaluate the power to detect associations between SNPs and time-to-event outcomes across a range of pharmacogenomic study designs while comparing alternative regression approaches. Simulations were conducted to compare Cox proportional hazards modeling, which accounts for censoring, with logistic regression modeling of a dichotomized outcome at the end of the study. The Cox proportional hazards model was demonstrated to be more powerful than the logistic regression analysis. The difference in power between the approaches was highly dependent on the rate of censoring. Initial evaluation of single-nucleotide polymorphism association signals using computationally efficient software with dichotomized outcomes provides an effective screening tool for some design scenarios, and thus has important implications for the development of analytical protocols in pharmacogenomic studies.
NASA Astrophysics Data System (ADS)
Rupel, Dylan
2015-03-01
The first goal of this note is to extend the well-known Feigin homomorphisms taking quantum groups to quantum polynomial algebras. More precisely, we define generalized Feigin homomorphisms from a quantum shuffle algebra to quantum polynomial algebras which extend the classical Feigin homomorphisms along the embedding of the quantum group into said quantum shuffle algebra. In a recent work of Berenstein and the author, analogous extensions of Feigin homomorphisms from the dual Hall-Ringel algebra of a valued quiver to quantum polynomial algebras were defined. To relate these constructions, we establish a homomorphism, dubbed the quantum shuffle character, from the dual Hall-Ringel algebra to the quantum shuffle algebra which relates the generalized Feigin homomorphisms. These constructions can be compactly described by a commuting tetrahedron of maps beginning with the quantum group and terminating in a quantum polynomial algebra. The second goal in this project is to better understand the dual canonical basis conjecture for skew-symmetrizable quantum cluster algebras. In the symmetrizable types it is known that dual canonical basis elements need not have positive multiplicative structure constants, while this is still suspected to hold for skew-symmetrizable quantum cluster algebras. We propose an alternate conjecture for the symmetrizable types: the cluster monomials should correspond to irreducible characters of a KLR algebra. Indeed, the main conjecture of this note would establish this ''KLR conjecture'' for acyclic skew-symmetrizable quantum cluster algebras: that is, we conjecture that the images of rigid representations under the quantum shuffle character give irreducible characters for KLR algebras. We sketch a proof in the symmetric case giving an alternative to the proof of Kimura-Qin that all non-initial cluster variables in an acyclic skew-symmetric quantum cluster algebra are contained in the dual canonical basis. With these results in mind we interpret the cluster mutations directly in terms of the representation theory of the KLR algebra.
Optimal time travel in the Gödel universe
NASA Astrophysics Data System (ADS)
Natário, José
2012-04-01
Using the theory of optimal rocket trajectories in general relativity, recently developed in Henriques and Natário (2011), we present a candidate for the minimum total integrated acceleration closed timelike curve in the Gödel universe, and give evidence for its minimality. The total integrated acceleration of this curve is lower than Malament's conjectured value (Malament 1984), as was already implicit in the work of Manchak (Gen. Relativ. Gravit. 51-60, 2011); however, Malament's conjecture does seem to hold for periodic closed timelike curves.
Foundations for a theory of gravitation theories
NASA Technical Reports Server (NTRS)
Thorne, K. S.; Lee, D. L.; Lightman, A. P.
1972-01-01
A foundation is laid for future analyses of gravitation theories. This foundation is applicable to any theory formulated in terms of geometric objects defined on a 4-dimensional spacetime manifold. The foundation consists of (1) a glossary of fundamental concepts; (2) a theorem that delineates the overlap between Lagrangian-based theories and metric theories; (3) a conjecture (due to Schiff) that the Weak Equivalence Principle implies the Einstein Equivalence Principle; and (4) a plausibility argument supporting this conjecture for the special case of relativistic, Lagrangian-based theories.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koiller, Jair; Boatto, Stefanella
2009-05-06
A pair of infinitesimally close opposite vortices moving on a curved surface moves along a geodesic, according to a conjecture by Kimura. We outline a proof. Numerical simulations are presented for a pair of opposite vortices at a close but nonzero distance on a surface of revolution, the catenoid. We conjecture that the vortex pair system on a triaxial ellipsoid is a KAM perturbation of Jacobi's geodesic problem. We outline some preliminary calculations required for this study. Finding the surfaces for which the vortex pair system is integrable is in order.
On axionic field ranges, loopholes and the weak gravity conjecture
Brown, Jon; Cottrell, William; Shiu, Gary; ...
2016-04-05
Here, we clarify some aspects of the impact that the Weak Gravity Conjecture has on models of (generalized) natural inflation. In particular we address certain technical and conceptual concerns recently raised regarding the stringent constraints and conclusions found in our previous work. We also point out the difficulties faced by attempts to evade these constraints. Furthermore, these new considerations improve the understanding of the quantum gravity constraints we found and further support the conclusion that it remains challenging for axions to drive natural inflation.
Modeling shock waves in an ideal gas: combining the Burnett approximation and Holian's conjecture.
He, Yi-Guang; Tang, Xiu-Zhang; Pu, Yi-Kang
2008-07-01
We model a shock wave in an ideal gas by combining the Burnett approximation and Holian's conjecture. We use the temperature in the direction of shock propagation rather than the average temperature in the Burnett transport coefficients. The shock wave profiles and shock thickness are compared with other theories. The results are found to agree better with the nonequilibrium molecular dynamics (NEMD) and direct simulation Monte Carlo (DSMC) data than the Burnett equations and the modified Navier-Stokes theory.
Boelter, Fred; Simmons, Catherine; Hewett, Paul
2011-04-01
Fluid sealing devices (gaskets and packing) containing asbestos are manufactured and blended with binders such that the asbestos fibers are locked in a matrix that limits the potential for fiber release. Occasionally, fluid sealing devices fail and need to be replaced or are removed during preventive maintenance activities. This is the first study known to pool over a decade's worth of exposure assessments involving fluid sealing devices used in a variety of applications. Twenty-one assessments of work activities and air monitoring were performed under conditions with no mechanical ventilation and work scenarios described as "worst-case" conditions. Frequently, the work was conducted using aggressive techniques, along with dry removal practices. Personal and area samples were collected and analyzed in accordance with the National Institute for Occupational Safety and Health Methods 7400 and 7402. A total of 782 samples were analyzed by phase contrast microscopy, and 499 samples were analyzed by transmission electron microscopy. The statistical data analysis focused on the overall data sets which were personal full-shift time-weighted average (TWA) exposures, personal 30-min exposures, and area full-shift TWA values. Each data set contains three estimates of exposure: (1) total fibers; (2) asbestos fibers only but substituting a value of 0.0035 f/cc for censored data; and (3) asbestos fibers only but substituting the limit of quantification value for censored data. Censored data in the various data sets ranged from 7% to just over 95%. Because all the data sets were censored, the geometric mean and geometric standard deviation were estimated using the maximum likelihood estimation method. Nonparametric, Kaplan-Meier, and lognormal statistics were applied and found to be consistent and reinforcing. All three sets of statistics suggest that the mean and median exposures were less than 25% of 0.1 f/cc 8-hr TWA sample or 1.0 f/cc 30-min samples, and that there is at least 95% confidence that the true 95th percentile exposures are less than 0.1 f/cc as an 8-hr TWA.
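The maximum likelihood step mentioned above (fitting a lognormal when a large fraction of samples fall below the limit of quantification) can be sketched as follows. A single detection limit is assumed for simplicity, whereas a real survey typically has sample-specific limits; the Kaplan-Meier and nonparametric comparisons reported in the study are not reproduced, and the names and toy data are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def censored_lognormal_mle(x, detected, loq):
    """ML fit of a lognormal to left-censored data: detects contribute the
    density, nondetects contribute P(X < LOQ). Returns the geometric mean,
    geometric standard deviation and the estimated 95th percentile."""
    y = np.log(np.where(detected, x, loq))   # nondetects only enter via the LOQ
    def nll(p):
        mu, log_sigma = p
        sigma = np.exp(log_sigma)
        ll_det = norm.logpdf(y, mu, sigma)
        ll_cen = norm.logcdf((np.log(loq) - mu) / sigma)
        return -np.sum(np.where(detected, ll_det, ll_cen))
    mu, log_sigma = minimize(nll, x0=[np.mean(y), 0.0]).x
    sigma = np.exp(log_sigma)
    return np.exp(mu), np.exp(sigma), np.exp(mu + 1.645 * sigma)

# toy usage with most values below a hypothetical LOQ of 0.01 f/cc
rng = np.random.default_rng(0)
x = rng.lognormal(mean=-5.0, sigma=1.0, size=500)
detected = x >= 0.01
print(censored_lognormal_mle(x, detected, loq=0.01))
```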
Ali, M Sanni; Groenwold, Rolf H H; Belitser, Svetlana V; Souverein, Patrick C; Martín, Elisa; Gatto, Nicolle M; Huerta, Consuelo; Gardarsdottir, Helga; Roes, Kit C B; Hoes, Arno W; de Boer, Antonius; Klungel, Olaf H
2016-03-01
Observational studies including time-varying treatments are prone to confounding. We compared time-varying Cox regression analysis, propensity score (PS) methods, and marginal structural models (MSMs) in a study of antidepressant [selective serotonin reuptake inhibitors (SSRIs)] use and the risk of hip fracture. A cohort of patients with a first prescription for antidepressants (SSRI or tricyclic antidepressants) was extracted from the Dutch Mondriaan and Spanish Base de datos para la Investigación Farmacoepidemiológica en Atención Primaria (BIFAP) general practice databases for the period 2001-2009. The net (total) effect of SSRI versus no SSRI on the risk of hip fracture was estimated using time-varying Cox regression, stratification and covariate adjustment using the PS, and MSM. In MSM, censoring was accounted for by inverse probability of censoring weights. The crude hazard ratio (HR) of SSRI use versus no SSRI use on hip fracture was 1.75 (95%CI: 1.12, 2.72) in Mondriaan and 2.09 (1.89, 2.32) in BIFAP. After confounding adjustment using time-varying Cox regression, stratification, and covariate adjustment using the PS, HRs increased in Mondriaan [2.59 (1.63, 4.12), 2.64 (1.63, 4.25), and 2.82 (1.63, 4.25), respectively] and decreased in BIFAP [1.56 (1.40, 1.73), 1.54 (1.39, 1.71), and 1.61 (1.45, 1.78), respectively]. MSMs with stabilized weights yielded HR 2.15 (1.30, 3.55) in Mondriaan and 1.63 (1.28, 2.07) in BIFAP when accounting for censoring and 2.13 (1.32, 3.45) in Mondriaan and 1.66 (1.30, 2.12) in BIFAP without accounting for censoring. In this empirical study, differences between the different methods to control for time-dependent confounding were small. The observed differences in treatment effect estimates between the databases are likely attributable to different confounding information in the datasets, illustrating that adequate information on (time-varying) confounding is crucial to prevent bias. Copyright © 2016 John Wiley & Sons, Ltd.
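As a simplified illustration of the weighting idea behind the marginal structural models above, the sketch below computes stabilized inverse-probability-of-treatment weights for a binary point treatment. The actual analysis uses time-varying treatment and multiplies in inverse-probability-of-censoring weights over follow-up, which is not shown; the covariates, toy data and package choice (scikit-learn for the propensity model) are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def stabilized_iptw(A, L):
    """Stabilized weights sw_i = P(A = a_i) / P(A = a_i | L_i) for a binary
    point treatment A and a covariate matrix L."""
    p_marg = A.mean()
    ps = LogisticRegression(max_iter=1000).fit(L, A).predict_proba(L)[:, 1]
    num = np.where(A == 1, p_marg, 1.0 - p_marg)
    den = np.where(A == 1, ps, 1.0 - ps)
    return num / den

# toy usage: confounded treatment assignment
rng = np.random.default_rng(0)
L = rng.normal(size=(1000, 3))
A = (rng.random(1000) < 1.0 / (1.0 + np.exp(-L[:, 0]))).astype(int)
weights = stabilized_iptw(A, L)   # then fit a weighted outcome (e.g. Cox) model
```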
Solutions to time variant problems of real-time expert systems
NASA Technical Reports Server (NTRS)
Yeh, Show-Way; Wu, Chuan-Lin; Hung, Chaw-Kwei
1988-01-01
Real-time expert systems for monitoring and control are driven by input data which changes with time. One of the subtle problems of this field is the propagation of time variant problems from rule to rule. This propagation problem is even complicated under a multiprogramming environment where the expert system may issue test commands to the system to get data and to access time consuming devices to retrieve data for concurrent reasoning. Two approaches are used to handle the flood of input data. Snapshots can be taken to freeze the system from time to time. The expert system treats the system as a stationary one and traces changes by comparing consecutive snapshots. In the other approach, when an input is available, the rules associated with it are evaluated. For both approaches, if the premise condition of a fired rule is changed to being false, the downstream rules should be deactivated. If the status change is due to disappearance of a transient problem, actions taken by the fired downstream rules which are no longer true may need to be undone. If a downstream rule is being evaluated, it should not be fired. Three mechanisms for solving this problem are discussed: tracing, backward checking, and censor setting. In the forward tracing mechanism, when the premise conditions of a fired rule become false, the premise conditions of downstream rules which have been fired or are being evaluated due to the firing of that rule are reevaluated. A tree with its root at the rule being deactivated is traversed. In the backward checking mechanism, when a rule is being fired, the expert system checks back on the premise conditions of the upstream rules that result in evaluation of the rule to see whether it should be fired. The root of the tree being traversed is the rule being fired. In the censor setting mechanism, when a rule is to be evaluated, a censor is constructed based on the premise conditions of the upstream rules and the censor is evaluated just before the rule is fired. Unlike the backward checking mechanism, this one does not search the upstream rules. This paper explores the details of implementation of the three mechanisms.
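A toy sketch of the first mechanism (forward tracing) may help fix ideas: when the premise of a fired rule turns false, the tree of downstream rules is walked and their conclusions are retracted or their pending evaluations suppressed. The rule names, statuses and data structure are entirely hypothetical; the backward checking and censor setting mechanisms are not shown.

```python
from collections import defaultdict

# Hypothetical miniature rule network: downstream[r] lists rules whose
# premises depend on the firing of rule r.
downstream = defaultdict(list, {"R1": ["R2", "R3"], "R3": ["R4"]})
status = {"R1": "fired", "R2": "fired", "R3": "evaluating", "R4": "fired"}

def forward_trace_deactivate(root):
    """Forward tracing: when a fired rule's premise turns false, walk the
    tree rooted at it, retract downstream conclusions and suppress rules
    that are still being evaluated."""
    stack = [root]
    while stack:
        rule = stack.pop()
        for child in downstream[rule]:
            if status[child] == "fired":
                status[child] = "retracted"   # undo its actions elsewhere
            elif status[child] == "evaluating":
                status[child] = "suppressed"  # do not fire when evaluation ends
            stack.append(child)

forward_trace_deactivate("R1")
print(status)   # R2 and R4 retracted; R3 suppressed
```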
Index formulas for higher order Loewner vector fields
NASA Astrophysics Data System (ADS)
Broad, Steven
Let ∂_z̄ be the Cauchy-Riemann operator and f be a C^∞ real-valued function in a neighborhood of 0 in R^2 in which ∂_z̄^n f ≠ 0 for all z ≠ 0. In such cases, ∂_z̄^n f is known as a Loewner vector field due to its connection with Loewner's conjecture that the index of such a vector field is bounded above by n. The n = 2 case of Loewner's conjecture implies Carathéodory's conjecture that any C^∞-immersion of S^2 into R^3 must have at least two umbilics. Recent work of F. Xavier produced a formula for computing the index of Loewner vector fields when n = 2 using data about the Hessian of f. In this paper, we extend this result and establish an index formula for ∂_z̄^n f for all n ⩾ 2. Structurally, our index formula provides a defect term, which contains geometric data extracted from Hessian-like objects associated with higher order derivatives of f.
The Paradoxical Role of the Research Administrator.
ERIC Educational Resources Information Center
White, Virginia P.
1991-01-01
This reprinted 1970 article examines the role of the university research administrator and finds that the role involves paradoxes between controller and entrepreneur, master and slave, censor and publicist, and traditionalist and innovator. (DB)
Proportional hazards model with varying coefficients for length-biased data.
Zhang, Feipeng; Chen, Xuerong; Zhou, Yong
2014-01-01
Length-biased data arise in many important applications including epidemiological cohort studies, cancer prevention trials and studies of labor economics. Such data are also often subject to right censoring due to loss to follow-up or the end of study. In this paper, we consider a proportional hazards model with varying coefficients for right-censored and length-biased data, which is used to study nonlinear interaction effects between covariates and an exposure variable. A local estimating equation method is proposed for the unknown coefficients and the intercept function in the model. The asymptotic properties of the proposed estimators are established using martingale theory and kernel smoothing techniques. Our simulation studies demonstrate that the proposed estimators have excellent finite-sample performance. The Channing House data are analyzed to demonstrate the applications of the proposed method.
Pointwise nonparametric maximum likelihood estimator of stochastically ordered survivor functions
Park, Yongseok; Taylor, Jeremy M. G.; Kalbfleisch, John D.
2012-01-01
In this paper, we consider estimation of survivor functions from groups of observations with right-censored data when the groups are subject to a stochastic ordering constraint. Many methods and algorithms have been proposed to estimate distribution functions under such restrictions, but none have completely satisfactory properties when the observations are censored. We propose a pointwise constrained nonparametric maximum likelihood estimator, which is defined at each time t by the estimates of the survivor functions subject to constraints applied at time t only. We also propose an efficient method to obtain the estimator. The estimator of each constrained survivor function is shown to be nonincreasing in t, and its consistency and asymptotic distribution are established. A simulation study suggests better small and large sample properties than for alternative estimators. An example using prostate cancer data illustrates the method. PMID:23843661
Atrazine concentrations in near-surface aquifers: A censored regression approach
Liu, S.; Yen, S.T.; Kolpin, D.W.
1996-01-01
In 1991, the U.S. Geological Survey (USGS) conducted a study to investigate the occurrence of atrazine (2-chloro-4-ethylamino-6-isopropylamino-s-triazine) and other agricultural chemicals in near-surface aquifers in the midcontinental USA. Because about 83% of the atrazine concentrations from the USGS study were censored, standard statistical estimation procedures could not be used. To determine factors that affect atrazine concentrations in groundwater while accommodating the high degree of data censoring, Tobit models were used (normal homoscedastic, normal heteroscedastic, lognormal homoscedastic, and lognormal heteroscedastic). Empirical results suggest that the lognormal heteroscedastic Tobit model is the model of choice for this type of study. This model determined the following factors to have the strongest effect on atrazine concentrations in groundwater: percent of pasture within 3.2 km, percent of forest within 3.2 km (2 mi), mean open interval of the well, primary water use of a well, aquifer class (unconsolidated or bedrock), aquifer type (unconfined or confined), existence of a stream within 30 m (100 ft), existence of a stream within 30 m to 0.4 km (0.25 mi), and existence of a stream within 0.4 to 3.2 km. Examining the elasticities of the continuous explanatory factors provides further insight into their effects on atrazine concentrations in groundwater. This study documents a viable statistical method that can accommodate the complicating presence of censored data, a feature that commonly occurs in environmental data.
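A minimal sketch of the kind of Tobit (censored regression) likelihood used above is given below, in the lognormal homoscedastic form; the study's preferred lognormal heteroscedastic variant additionally models the error variance as a function of covariates, which is not shown. The covariate, detection limit and toy data are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def tobit_negloglik(params, X, y, censored, log_dl):
    """Left-censored (Tobit-type) regression on the log-concentration scale:
    detects contribute the normal density, censored observations contribute
    P(latent log concentration < log detection limit)."""
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)
    mu = X @ beta
    ll_obs = norm.logpdf(y, mu, sigma)
    ll_cen = norm.logcdf((log_dl - mu) / sigma)
    return -np.sum(np.where(censored, ll_cen, ll_obs))

# toy usage with an intercept and one covariate (e.g. percent pasture)
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(300), rng.normal(size=300)])
y_star = X @ np.array([-1.0, 0.5]) + rng.normal(scale=1.0, size=300)  # latent
log_dl = np.log(0.05)                       # hypothetical detection limit
censored = y_star < log_dl
y = np.where(censored, log_dl, y_star)      # observed, floored at the limit
fit = minimize(tobit_negloglik, x0=np.zeros(3), args=(X, y, censored, log_dl))
```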
Sguassero, Yanina; Roberts, Karen N; Harvey, Guillermina B; Comandé, Daniel; Ciapponi, Agustín; Cuesta, Cristina B; Danesi, Emmaría; Aguiar, Camila; Andrade, Ana L; Castro, Ana Mde; Lana, Marta de; Escribà, Josep M; Fabbro, Diana L; Fernandes, Cloé D; Meira, Wendell Sf; Flores-Chávez, María; Hasslocher-Moreno, Alejandro M; Jackson, Yves; Lacunza, Carlos D; Machado-de-Assis, Girley F; Maldonado, Marisel; Monje-Rumi, María M; Molina, Israel; Martín, Catalina Muñoz-San; Murcia, Laura; Castro, Cleudson Nery de; Silveira, Celeste An; Negrette, Olga Sánchez; Segovia, Manuel; Solari, Aldo; Steindel, Mário; Streiger, Mirtha L; Bilbao, Ninfa Vera de; Zulantay, Inés; Sosa-Estani, Sergio
2018-06-04
To determine the course of serological tests in subjects with chronic T. cruzi infection treated with antitrypanosomal drugs. We conducted a systematic review and meta-analysis using individual participant data. Survival analysis and Cox proportional hazards regression model with a random effect to adjust for covariates were applied. The protocol was registered at www.crd.york.ac.uk/PROSPERO (CRD42012002162). We included 27 studies (1296 subjects) conducted in eight countries. The risk of bias was low for all domains in 17 studies (63.0%). We assessed 913 subjects (149 seroreversion events, 83.7% censored data) for ELISA, 670 subjects (134 events, 80.0% censored) for IIF, and 548 subjects (99 events, 82.0% censored) for IHA. A higher probability of seroreversion was observed in subjects aged 1-19 years compared to adults at a shorter time span. The chance of seroreversion also varied according to the country where the infection might have been acquired. For instance, the pooled adjusted hazard ratio between children/adolescents and adults for IIF test was 1.54 (95% CI 0.64-3.71) and 9.37 (3.44-25.50) in some countries of South America and Brazil, respectively. The disappearance of anti-T. cruzi antibodies was demonstrated along the follow-up. An interaction between age at treatment and country setting was shown. Copyright © 2018. Published by Elsevier Ltd.
Kang, Le; Chen, Weijie; Petrick, Nicholas A; Gallas, Brandon D
2015-02-20
The area under the receiver operating characteristic curve is often used as a summary index of the diagnostic ability in evaluating biomarkers when the clinical outcome (truth) is binary. When the clinical outcome is right-censored survival time, the C index, motivated as an extension of area under the receiver operating characteristic curve, has been proposed by Harrell as a measure of concordance between a predictive biomarker and the right-censored survival outcome. In this work, we investigate methods for statistical comparison of two diagnostic or predictive systems, of which they could either be two biomarkers or two fixed algorithms, in terms of their C indices. We adopt a U-statistics-based C estimator that is asymptotically normal and develop a nonparametric analytical approach to estimate the variance of the C estimator and the covariance of two C estimators. A z-score test is then constructed to compare the two C indices. We validate our one-shot nonparametric method via simulation studies in terms of the type I error rate and power. We also compare our one-shot method with resampling methods including the jackknife and the bootstrap. Simulation results show that the proposed one-shot method provides almost unbiased variance estimations and has satisfactory type I error control and power. Finally, we illustrate the use of the proposed method with an example from the Framingham Heart Study. Copyright © 2014 John Wiley & Sons, Ltd.
Wall, Michael; Zamba, Gideon K D; Artes, Paul H
2018-01-01
It has been shown that threshold estimates below approximately 20 dB have little effect on the ability to detect visual field progression in glaucoma. We aimed to compare stimulus size V to stimulus size III, in areas of visual damage, to confirm these findings by using (1) a different dataset, (2) different techniques of progression analysis, and (3) an analysis to evaluate the effect of censoring on mean deviation (MD). In the Iowa Variability in Perimetry Study, 120 glaucoma subjects were tested every 6 months for 4 years with size III SITA Standard and size V Full Threshold. Progression was determined with three complementary techniques: pointwise linear regression (PLR), permutation of PLR, and linear regression of the MD index. All analyses were repeated on "censored" datasets in which threshold estimates below a given criterion value were set to equal the criterion value. Our analyses confirmed previous observations that threshold estimates below 20 dB contribute much less to visual field progression than estimates above this range. These findings were broadly similar with stimulus sizes III and V. Censoring of threshold values < 20 dB has relatively little impact on the rates of visual field progression in patients with mild to moderate glaucoma. Size V, which has lower retest variability, performs at least as well as size III for longitudinal glaucoma progression analysis and appears to have a larger useful dynamic range owing to the upper sensitivity limit being higher.
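The censoring step described above is a one-line operation; pairing it with a least-squares slope of the mean deviation (MD) index over follow-up gives the simplest of the three progression analyses used. The sketch below assumes the MD series is already computed (in practice MD is a weighted average of age-corrected deviations); names and toy data are illustrative.

```python
import numpy as np

def censor_thresholds(thresholds_db, floor_db=20.0):
    """Set every threshold estimate below the criterion to the criterion value."""
    return np.maximum(thresholds_db, floor_db)

def md_slope(years, md_values):
    """Rate of change of the mean deviation index (dB/year) by least squares."""
    slope, _intercept = np.polyfit(years, md_values, 1)
    return slope

# toy usage: one test's thresholds, and an MD series declining at -0.5 dB/year
years = np.arange(0, 4.5, 0.5)
field = np.array([28.0, 24.0, 18.0, 12.0, 5.0])
print(censor_thresholds(field))                      # values floored at 20 dB
md = -0.5 * years + np.random.default_rng(0).normal(0, 0.3, len(years))
print(md_slope(years, md))
```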
Modeling time-to-event (survival) data using classification tree analysis.
Linden, Ariel; Yarnold, Paul R
2017-12-01
Time to the occurrence of an event is often studied in health research. Survival analysis differs from other designs in that follow-up times for individuals who do not experience the event by the end of the study (called censored) are accounted for in the analysis. Cox regression is the standard method for analysing censored data, but the assumptions required of these models are easily violated. In this paper, we introduce classification tree analysis (CTA) as a flexible alternative for modelling censored data. Classification tree analysis is a "decision-tree"-like classification model that provides parsimonious, transparent (ie, easy to visually display and interpret) decision rules that maximize predictive accuracy, derives exact P values via permutation tests, and evaluates model cross-generalizability. Using empirical data, we identify all statistically valid, reproducible, longitudinally consistent, and cross-generalizable CTA survival models and then compare their predictive accuracy to estimates derived via Cox regression and an unadjusted naïve model. Model performance is assessed using integrated Brier scores and a comparison between estimated survival curves. The Cox regression model best predicts average incidence of the outcome over time, whereas CTA survival models best predict either relatively high, or low, incidence of the outcome over time. Classification tree analysis survival models offer many advantages over Cox regression, such as explicit maximization of predictive accuracy, parsimony, statistical robustness, and transparency. Therefore, researchers interested in accurate prognoses and clear decision rules should consider developing models using the CTA-survival framework. © 2017 John Wiley & Sons, Ltd.
Entropy Inequality Violations from Ultraspinning Black Holes.
Hennigar, Robie A; Mann, Robert B; Kubizňák, David
2015-07-17
We construct a new class of rotating anti-de Sitter (AdS) black hole solutions with noncompact event horizons of finite area in any dimension and study their thermodynamics. In four dimensions these black holes are solutions to gauged supergravity. We find that their entropy exceeds the maximum implied from the conjectured reverse isoperimetric inequality, which states that for a given thermodynamic volume, the black hole entropy is maximized for Schwarzschild-AdS space. We use this result to suggest more stringent conditions under which this conjecture may hold.
Cosmic Rays: "A Thin Rain of Charged Particles."
ERIC Educational Resources Information Center
Friedlander, Michael
1990-01-01
Discussed are balloons and electroscopes, understanding cosmic rays, cosmic ray paths, isotopes and cosmic-ray travel, sources of cosmic rays, and accelerating cosmic rays. Some of the history of the discovery and study of cosmic rays is presented. (CW)
From Roswell to Richmond...To Your Town
ERIC Educational Resources Information Center
McShean, Gordon
1970-01-01
The library profession must stop playing around with the supposed causes of censorship and address itself to the principle of intellectual freedom. Librarians should confront the censors by exposing their tactics before they use them. (Author/JS)
Variability of Clouds Over a Solar Cycle
NASA Technical Reports Server (NTRS)
Yung, Yuk L.
2002-01-01
One of the most controversial aspects of climate studies is the debate over the natural and anthropogenic causes of climate change. Historical data strongly suggest that the Little Ice Age (from 1550 to 1850 AD, when the mean temperature was colder by about 1 C) was most likely caused by variability of the sun and not greenhouse molecules (e.g., CO2). However, the known variability in solar irradiance and modulation of cosmic rays provides too little energy, by many orders of magnitude, to lead to climate changes in the troposphere. The conjecture is that there is a 'trigger mechanism'. This idea may now be subjected to a quantitative test using recent global datasets. Using the best available modern cloud data from the International Satellite Cloud Climatology Project (ISCCP), Svensmark and Friis-Christensen found a correlation of a large variation (3-4%) in global cloud cover with the solar cycle. The work has been extended by Svensmark, and by Marsh and Svensmark. The implied forcing on climate is an order of magnitude greater than any previous claims. Are clouds the long-sought trigger mechanism? This discovery is potentially so important that it should be corroborated by an independent database, and, furthermore, it must be shown that alternative explanations (i.e., El Nino) can be ruled out. We used the ISCCP data in conjunction with the Total Ozone Mapping Spectrometer (TOMS) data to carry out an in-depth study of the cloud trigger mechanism.
WEIGHTED LIKELIHOOD ESTIMATION UNDER TWO-PHASE SAMPLING
Saegusa, Takumi; Wellner, Jon A.
2013-01-01
We develop asymptotic theory for weighted likelihood estimators (WLE) under two-phase stratified sampling without replacement. We also consider several variants of WLEs involving estimated weights and calibration. A set of empirical process tools are developed including a Glivenko–Cantelli theorem, a theorem for rates of convergence of M-estimators, and a Donsker theorem for the inverse probability weighted empirical processes under two-phase sampling and sampling without replacement at the second phase. Using these general results, we derive asymptotic distributions of the WLE of a finite-dimensional parameter in a general semiparametric model where an estimator of a nuisance parameter is estimable either at regular or nonregular rates. We illustrate these results and methods in the Cox model with right censoring and interval censoring. We compare the methods via their asymptotic variances under both sampling without replacement and the more usual (and easier to analyze) assumption of Bernoulli sampling at the second phase. PMID:24563559
Akazawa, K; Nakamura, T; Moriguchi, S; Shimada, M; Nose, Y
1991-07-01
Small sample properties of the maximum partial likelihood estimates for Cox's proportional hazards model depend on the sample size, the true values of regression coefficients, covariate structure, censoring pattern and possibly baseline hazard functions. Therefore, it would be difficult to construct a formula or table to calculate the exact power of a statistical test for the treatment effect in any specific clinical trial. The simulation program, written in SAS/IML, described in this paper uses Monte-Carlo methods to provide estimates of the exact power for Cox's proportional hazards model. For illustrative purposes, the program was applied to real data obtained from a clinical trial performed in Japan. Since the program does not assume any specific function for the baseline hazard, it is, in principle, applicable to any censored survival data as long as they follow Cox's proportional hazards model.
Stauffer, Reto; Mayr, Georg J; Messner, Jakob W; Umlauf, Nikolaus; Zeileis, Achim
2017-06-15
Flexible spatio-temporal models are widely used to create reliable and accurate estimates for precipitation climatologies. Most models are based on square root transformed monthly or annual means, where a normal distribution seems to be appropriate. This assumption becomes invalid on a daily time scale as the observations involve large fractions of zero observations and are limited to non-negative values. We develop a novel spatio-temporal model to estimate the full climatological distribution of precipitation on a daily time scale over complex terrain using a left-censored normal distribution. The results demonstrate that the new method is able to account for the non-normal distribution and the large fraction of zero observations. The new climatology provides the full climatological distribution on a very high spatial and temporal resolution, and is competitive with, or even outperforms existing methods, even for arbitrary locations.
Wolfinger, Nicholas H
2011-05-01
Many studies have demonstrated that the children of divorce are disproportionately likely to end their own marriages. In previous work, I showed that the transmission of divorce between generations weakened substantially for General Social Survey (GSS) respondents interviewed between 1973 and 1996 (Wolfinger 1999); Li and Wu (2006, 2008) contended that my finding is a methodological artifact of the GSS's lack of marriage duration data. This article presents a completed-cohort approach to studying divorce using the GSS. The results confirm a decline in the probability of divorce transmission that cannot be explained by the right-censoring bias alleged by Li and Wu. This finding contributes to an ongoing debate about trends in the negative consequences of parental divorce, as well as demonstrating a useful approach to right-censored phenomena when event history data are not available.
Maximum likelihood estimates, from censored data, for mixed-Weibull distributions
NASA Astrophysics Data System (ADS)
Jiang, Siyuan; Kececioglu, Dimitri
1992-06-01
A new algorithm for estimating the parameters of mixed-Weibull distributions from censored data is presented. The algorithm follows the principle of maximum likelihood estimation (MLE) through the expectation-maximization (EM) algorithm, and it is derived for both postmortem and nonpostmortem time-to-failure data. It is concluded that the concept of the EM algorithm is easy to understand and apply (only elementary statistics and calculus are required). The log-likelihood function cannot decrease after an EM sequence; this important feature was observed in all of the numerical calculations. The MLEs of the nonpostmortem data were obtained successfully for mixed-Weibull distributions with up to 14 parameters in a 5-subpopulation mixed-Weibull distribution. Numerical examples indicate that some of the log-likelihood functions of the mixed-Weibull distributions have multiple local maxima; therefore, the algorithm should start at several initial guesses of the parameter set.
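The E-step of such an EM scheme can be written compactly. The following is a standard formulation consistent with the abstract's description (failures contribute densities, suspensions contribute survival functions); the notation is ours rather than the paper's, and the shape/scale updates in the M-step, which require numerical maximization of weighted censored-Weibull likelihoods, are omitted.

```latex
% m-component mixed Weibull fitted to right-censored data
% (\delta_i = 1 for a failure at t_i, \delta_i = 0 for a suspension)
\begin{align*}
  f_j(t) &= \frac{\beta_j}{\eta_j}\Big(\frac{t}{\eta_j}\Big)^{\beta_j-1}
            e^{-(t/\eta_j)^{\beta_j}}, \qquad
  S_j(t) = e^{-(t/\eta_j)^{\beta_j}}, \\[4pt]
  w_{ij} &= \Pr\!\big(\text{unit } i \in \text{subpopulation } j \mid t_i, \delta_i\big)
          = \frac{p_j\, f_j(t_i)^{\delta_i}\, S_j(t_i)^{1-\delta_i}}
                 {\sum_{l=1}^{m} p_l\, f_l(t_i)^{\delta_i}\, S_l(t_i)^{1-\delta_i}}, \\[4pt]
  p_j^{\text{new}} &= \frac{1}{N}\sum_{i=1}^{N} w_{ij}.
\end{align*}
```

Because the mixture log-likelihood can have several local maxima, as the abstract notes, the recursion is restarted from several initial parameter sets.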
Analysis of cigarette purchase task instrument data with a left-censored mixed effects model.
Liao, Wenjie; Luo, Xianghua; Le, Chap T; Chu, Haitao; Epstein, Leonard H; Yu, Jihnhee; Ahluwalia, Jasjit S; Thomas, Janet L
2013-04-01
The drug purchase task is a frequently used instrument for measuring the relative reinforcing efficacy (RRE) of a substance, a central concept in psychopharmacological research. Although a purchase task instrument, such as the cigarette purchase task (CPT), provides a comprehensive and inexpensive way to assess various aspects of a drug's RRE, the application of conventional statistical methods to data generated from such an instrument may not be adequate by simply ignoring or replacing the extra zeros or missing values in the data with arbitrary small consumption values, for example, 0.001. We applied the left-censored mixed effects model to CPT data from a smoking cessation study of college students and demonstrated its superiority over the existing methods with simulation studies. Theoretical implications of the findings, limitations of the proposed method, and future directions of research are also discussed.
Program for Weibull Analysis of Fatigue Data
NASA Technical Reports Server (NTRS)
Krantz, Timothy L.
2005-01-01
A Fortran computer program has been written for performing statistical analyses of fatigue-test data that are assumed to be adequately represented by a two-parameter Weibull distribution. This program calculates the following: (1) Maximum-likelihood estimates of the Weibull distribution; (2) Data for contour plots of relative likelihood for two parameters; (3) Data for contour plots of joint confidence regions; (4) Data for the profile likelihood of the Weibull-distribution parameters; (5) Data for the profile likelihood of any percentile of the distribution; and (6) Likelihood-based confidence intervals for parameters and/or percentiles of the distribution. The program can account for tests that are suspended without failure (the statistical term for such suspension of tests is "censoring"). The analytical approach followed in this program for the software is valid for type-I censoring, which is the removal of unfailed units at pre-specified times. Confidence regions and intervals are calculated by use of the likelihood-ratio method.
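A minimal sketch of two of the core computations named above (maximum likelihood for a two-parameter Weibull with type-I censoring, and a likelihood-ratio confidence interval obtained by profiling) is given below. It is an illustration in Python rather than the program's Fortran implementation; the grid width, confidence level and toy data are arbitrary choices.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

def weibull_negloglik(params, t, failed):
    """Negative log-likelihood of a two-parameter Weibull with suspensions:
    failures contribute the density, suspended (type-I censored) units the
    survival function. params = (log shape, log scale)."""
    beta, eta = np.exp(params)
    z = (t / eta) ** beta
    log_f = np.log(beta / eta) + (beta - 1.0) * np.log(t / eta) - z
    return -np.sum(np.where(failed, log_f, -z))

def profile_ci_shape(t, failed, level=0.90):
    """Likelihood-ratio confidence interval for the shape parameter,
    profiling out the scale on a grid of log-shape values."""
    full = minimize(weibull_negloglik, [0.0, np.log(t.mean())], args=(t, failed))
    cutoff = full.fun + 0.5 * chi2.ppf(level, df=1)
    kept = []
    for log_beta in np.linspace(full.x[0] - 1.5, full.x[0] + 1.5, 301):
        prof = minimize(lambda s: weibull_negloglik([log_beta, s[0]], t, failed),
                        [full.x[1]])
        if prof.fun <= cutoff:
            kept.append(np.exp(log_beta))
    return min(kept), max(kept)

# toy fatigue data suspended (censored) at a fixed test duration
rng = np.random.default_rng(0)
life = 2.0 * rng.weibull(1.8, 30)
failed = life < 3.0
t = np.minimum(life, 3.0)
print(profile_ci_shape(t, failed))
```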
Complete hazard ranking to analyze right-censored data: An ALS survival study.
Huang, Zhengnan; Zhang, Hongjiu; Boss, Jonathan; Goutman, Stephen A; Mukherjee, Bhramar; Dinov, Ivo D; Guan, Yuanfang
2017-12-01
Survival analysis represents an important outcome measure in clinical research and clinical trials; further, survival ranking may offer additional advantages in clinical trials. In this study, we developed GuanRank, a non-parametric ranking-based technique to transform patients' survival data into a linear space of hazard ranks. The transformation enables the utilization of machine learning base-learners including Gaussian process regression, Lasso, and random forest on survival data. The method was submitted to the DREAM Amyotrophic Lateral Sclerosis (ALS) Stratification Challenge. Ranked in first place, the model gave more accurate ranking predictions on the PRO-ACT ALS dataset in comparison to the Cox proportional hazards model. By utilizing right-censored data in its training process, the method demonstrated state-of-the-art predictive power in ALS survival ranking. Its feature selection identified multiple important factors, some of which conflict with previous studies.
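The core idea, converting right-censored survival data into ranks so that ordinary regressors can be trained, can be illustrated with a naive pairwise-comparison ranking (a simplification, not the exact GuanRank construction) followed by a random forest; numpy and scikit-learn are assumed, and the data are synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
n = 300
X = rng.normal(size=(n, 5))                      # hypothetical clinical features
risk = X[:, 0] + 0.5 * X[:, 1]
t_event = rng.exponential(scale=np.exp(-risk))   # higher risk -> earlier event
t_cens = rng.exponential(scale=1.5, size=n)
d = (t_event <= t_cens).astype(int)
t = np.minimum(t_event, t_cens)

def naive_hazard_rank(t, d):
    """Score each subject by the fraction of comparable pairs it outlives.
    Subject j is known to fail before i if j had an event and t_j < t_i;
    i is known to fail before j only if i had an event and t_i < t_j.
    (A simplified stand-in for the paper's GuanRank transformation.)"""
    n = len(t)
    score = np.zeros(n)
    for i in range(n):
        known_earlier = (d == 1) & (t < t[i])                       # definitely failed before i
        known_later = (t > t[i]) if d[i] == 1 else np.zeros(n, bool)  # i definitely failed first
        comparable = known_earlier | known_later
        if comparable.any():
            score[i] = known_earlier[comparable].mean()             # high score = long survivor
    return score

y_rank = naive_hazard_rank(t, d)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y_rank)
print(model.feature_importances_.round(2))
```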
Bernhardt, Paul W.; Zhang, Daowen; Wang, Huixia Judy
2014-01-01
Joint modeling techniques have become a popular strategy for studying the association between a response and one or more longitudinal covariates. Motivated by the GenIMS study, where it is of interest to model the event of survival using censored longitudinal biomarkers, a joint model is proposed for describing the relationship between a binary outcome and multiple longitudinal covariates subject to detection limits. A fast, approximate EM algorithm is developed that reduces the dimension of integration in the E-step of the algorithm to one, regardless of the number of random effects in the joint model. Numerical studies demonstrate that the proposed approximate EM algorithm leads to satisfactory parameter and variance estimates in situations with and without censoring on the longitudinal covariates. The approximate EM algorithm is applied to analyze the GenIMS data set. PMID:25598564
A quantile regression model for failure-time data with time-dependent covariates
Gorfine, Malka; Goldberg, Yair; Ritov, Ya’acov
2017-01-01
Since survival data occur over time, important covariates that we wish to consider often also change over time. Such covariates are referred to as time-dependent covariates. Quantile regression offers flexible modeling of survival data by allowing the covariate effects to vary with quantiles. This article provides a novel quantile regression model accommodating time-dependent covariates for analyzing survival data subject to right censoring. Our simple estimation technique assumes the existence of instrumental variables. In addition, we present a doubly robust estimator in the sense of Robins and Rotnitzky (1992, Recovery of information and adjustment for dependent censoring using surrogate markers. In: Jewell, N. P., Dietz, K. and Farewell, V. T. (editors), AIDS Epidemiology. Boston: Birkhäuser, pp. 297–331). The asymptotic properties of the estimators are rigorously studied. Finite-sample properties are demonstrated by a simulation study. The utility of the proposed methodology is demonstrated using the Stanford heart transplant dataset. PMID:27485534
Risk-adjusted monitoring of survival times
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sego, Landon H.; Reynolds, Marion R.; Woodall, William H.
2009-02-26
We consider the monitoring of clinical outcomes, where each patient has a different risk of death prior to undergoing a health care procedure. We propose a risk-adjusted survival time CUSUM chart (RAST CUSUM) for monitoring clinical outcomes where the primary endpoint is a continuous, time-to-event variable that may be right censored. Risk adjustment is accomplished using accelerated failure time regression models. We compare the average run length performance of the RAST CUSUM chart to the risk-adjusted Bernoulli CUSUM chart, using data from cardiac surgeries to motivate the details of the comparison. The comparisons show that the RAST CUSUM chart is more efficient at detecting a sudden decrease in the odds of death than the risk-adjusted Bernoulli CUSUM chart, especially when the fraction of censored observations is not too high. We also discuss the implementation of a prospective monitoring scheme using the RAST CUSUM chart.
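A heavily simplified sketch of a survival-time CUSUM recursion is given below, assuming a lognormal accelerated failure time model, a hypothesized shift in the log-time scale, and synthetic data; the actual RAST CUSUM design differs in its details. Each possibly censored observation contributes a log-likelihood-ratio increment, and the chart signals when the cumulative sum crosses a control limit.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical monitoring stream: log survival times with a risk covariate and censoring.
rng = np.random.default_rng(5)
n = 200
x = rng.normal(size=n)                        # patient risk score from an AFT regression
mu0 = 2.0 + 0.5 * x                           # in-control mean log survival time
delta = -0.5                                  # hypothesized out-of-control shift (worse outcomes)
sigma = 0.8
logt = rng.normal(mu0 + np.where(np.arange(n) > 120, delta, 0.0), sigma)
logc = rng.normal(3.0, 0.5, size=n)           # censoring (end of follow-up)
d = (logt <= logc).astype(int)
y = np.minimum(logt, logc)

def llr(yi, di, mui):
    """Log-likelihood ratio of out-of-control (mu+delta) vs in-control (mu) for one
    observation: density ratio if the death is observed, survival ratio if censored."""
    if di == 1:
        return norm.logpdf(yi, mui + delta, sigma) - norm.logpdf(yi, mui, sigma)
    return norm.logsf(yi, mui + delta, sigma) - norm.logsf(yi, mui, sigma)

h = 4.0                                       # control limit (would be tuned to a target run length)
S = 0.0
for i in range(n):
    S = max(0.0, S + llr(y[i], d[i], mu0[i]))
    if S > h:
        print("signal at observation", i, "CUSUM =", round(S, 2))
        break
```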
Maxwell's conjecture on three point charges with equal magnitudes
NASA Astrophysics Data System (ADS)
Tsai, Ya-Lun
2015-08-01
Maxwell's conjecture on three point charges states that the number of non-degenerate equilibrium points of the electrostatic field generated by them in R^3 is at most four. We prove the conjecture in the cases when the three point charges have equal magnitudes and show that the number of isolated equilibrium points can only be zero, two, three, or four. Specifically, fixing the positions of two positive charges in R^3, we know exactly where to place the third positive charge to obtain two, three, or four equilibrium points. All equilibrium points are isolated, and there are no other possibilities for their number. On the other hand, if the two fixed charges are both negative, there are always two equilibrium points, except when the third, positive, charge lies on the line segment connecting the two negative charges; in those exceptional cases the field contains only a curve of equilibrium points. The computer-assisted computations in this paper involve symbolic and exact integer arithmetic, so all the results are proved rigorously.
Conjecture about the 2-Flavour QCD Phase Diagram
NASA Astrophysics Data System (ADS)
Nava Blanco, M. A.; Bietenholz, W.; Fernández Téllez, A.
2017-10-01
The QCD phase diagram, in particular its sector of high baryon density, is one of the most prominent outstanding mysteries within the Standard Model of particle physics. We sketch a project for arriving at a conjecture for the case of two massless quark flavours. The pattern of spontaneous chiral symmetry breaking is isomorphic to the spontaneous magnetisation in an O(4) non-linear σ-model, which can be employed as a low-energy effective theory to study the critical behaviour. We focus on the 3d O(4) model, where the configurations are divided into topological sectors, as in QCD. A topological winding with minimal Euclidean action is denoted as a skyrmion, and the topological charge corresponds to the QCD baryon number. This effective model can be simulated on a lattice with a powerful cluster algorithm, which should allow us to identify the features of the critical temperature as we proceed from low to high baryon density. In this sense, this projected numerical study has the potential to provide us with a conjecture about the phase diagram of QCD with two massless quark flavours.
NASA Astrophysics Data System (ADS)
Lukman, Iing; Ibrahim, Noor A.; Daud, Isa B.; Maarof, Fauziah; Hassan, Mohd N.
2002-03-01
Survival analysis algorithms are often applied in the data mining process. Cox regression is one of the survival analysis tools that has been used in many areas, and it can be used to analyze the failure times of crashed aircraft. Another survival analysis tool is competing risks analysis, where more than one cause of failure acts simultaneously. Lunn and McNeil analyzed competing risks in the survival model using Cox regression with censored data. The modified Lunn-McNeil technique is a simplification of the Lunn-McNeil technique. The Kalbfleisch-Prentice technique involves fitting a separate model for each type of failure, treating the other failure types as censored. To compare the two techniques (the modified Lunn-McNeil and the Kalbfleisch-Prentice), a simulation study was performed. Samples with various sizes and censoring percentages were generated and fitted using both techniques. The study was conducted by comparing the inference of the models using the root mean square error (RMSE), power tests, and Schoenfeld residual analysis. The power tests in this study were the likelihood ratio test, the Rao score test, and the Wald statistic. The Schoenfeld residual analysis was conducted to check the proportionality of the model through its covariates. The estimated parameters were computed for the cause-specific hazard situation. Results showed that the modified Lunn-McNeil technique was better than the Kalbfleisch-Prentice technique based on the RMSE measurement and the Schoenfeld residual analysis. However, the Kalbfleisch-Prentice technique was better than the modified Lunn-McNeil technique based on the power test measurements.
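The Kalbfleisch-Prentice approach described above can be sketched directly, assuming the Python `lifelines` package and synthetic data: fit one Cox model per failure type, treating failures of the other type as censored.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(6)
n = 500
x = rng.normal(size=n)
t1 = rng.exponential(scale=np.exp(-0.7 * x))       # latent time to failure from cause 1
t2 = rng.exponential(scale=np.exp(0.3 * x))        # latent time to failure from cause 2
c = rng.uniform(0.5, 3.0, size=n)                  # administrative censoring
t = np.minimum.reduce([t1, t2, c])
cause = np.select([t == t1, t == t2], [1, 2], default=0)   # 0 = censored
df = pd.DataFrame({"time": t, "cause": cause, "x": x})

# Cause-specific hazards: for each failure type, all other failure types count as censored.
for k in (1, 2):
    dk = df.assign(event=(df["cause"] == k).astype(int))[["time", "event", "x"]]
    cph = CoxPHFitter().fit(dk, duration_col="time", event_col="event")
    print("cause", k, "log-hazard ratio for x:", round(float(cph.params_["x"]), 2))
```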
Zamba, Gideon K. D.; Artes, Paul H.
2018-01-01
Purpose It has been shown that threshold estimates below approximately 20 dB have little effect on the ability to detect visual field progression in glaucoma. We aimed to compare stimulus size V to stimulus size III, in areas of visual damage, to confirm these findings by using (1) a different dataset, (2) different techniques of progression analysis, and (3) an analysis to evaluate the effect of censoring on mean deviation (MD). Methods In the Iowa Variability in Perimetry Study, 120 glaucoma subjects were tested every 6 months for 4 years with size III SITA Standard and size V Full Threshold. Progression was determined with three complementary techniques: pointwise linear regression (PLR), permutation of PLR, and linear regression of the MD index. All analyses were repeated on "censored" datasets in which threshold estimates below a given criterion value were set to equal the criterion value. Results Our analyses confirmed previous observations that threshold estimates below 20 dB contribute much less to visual field progression than estimates above this range. These findings were broadly similar with stimulus sizes III and V. Conclusions Censoring of threshold values < 20 dB has relatively little impact on the rates of visual field progression in patients with mild to moderate glaucoma. Size V, which has lower retest variability, performs at least as well as size III for longitudinal glaucoma progression analysis and appears to have a larger useful dynamic range owing to the upper sensitivity limit being higher. PMID:29356822
On the structure of self-affine convex bodies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Voynov, A S
2013-08-31
We study the structure of convex bodies in R^d that can be represented as a union of their affine images with no common interior points. Such bodies are called self-affine. Vallet's conjecture on the structure of self-affine bodies was proved for d = 2 by Richter in 2011. In the present paper we disprove the conjecture for all d ≥ 3 and derive a detailed description of self-affine bodies in R^3. Also we consider the relation between properties of self-affine bodies and functional equations with a contraction of an argument. Bibliography: 10 titles.
NASA Astrophysics Data System (ADS)
Mehedi Faruk, Mir; Muktadir Rahman, Md
2016-03-01
The well-known relation for the ideal classical gas, $\Delta \epsilon^2 = kT^2 C_V$, which does not remain valid for quantum systems, is revisited. A new connection is established between energy fluctuation and specific heat for quantum gases, valid in the classical limit as well as in the degenerate quantum regime. Most importantly, the proposed Biswas-Mitra-Bhattacharyya (BMB) conjecture (Biswas et al., J. Stat. Mech. P03013, 2015) relating the hump in the energy fluctuation to the discontinuity of the specific heat is proved and made precise in this manuscript.
A decomposition theory for phylogenetic networks and incompatible characters.
Gusfield, Dan; Bansal, Vikas; Bafna, Vineet; Song, Yun S
2007-12-01
Phylogenetic networks are models of evolution that go beyond trees, incorporating non-tree-like biological events such as recombination (or more generally reticulation), which occur either in a single species (meiotic recombination) or between species (reticulation due to lateral gene transfer and hybrid speciation). The central algorithmic problems are to reconstruct a plausible history of mutations and non-tree-like events, or to determine the minimum number of such events needed to derive a given set of binary sequences, allowing one mutation per site. Meiotic recombination, reticulation and recurrent mutation can cause conflict or incompatibility between pairs of sites (or characters) of the input. Previously, we used "conflict graphs" and "incompatibility graphs" to compute lower bounds on the minimum number of recombination nodes needed, and to efficiently solve constrained cases of the minimization problem. Those results exposed the structural and algorithmic importance of the non-trivial connected components of those two graphs. In this paper, we more fully develop the structural importance of non-trivial connected components of the incompatibility and conflict graphs, proving a general decomposition theorem (Gusfield and Bansal, 2005) for phylogenetic networks. The decomposition theorem depends only on the incompatibilities in the input sequences, and hence applies to many types of phylogenetic networks, and to any biological phenomenon that causes pairwise incompatibilities. More generally, the proof of the decomposition theorem exposes a maximal embedded tree structure that exists in the network when the sequences cannot be derived on a perfect phylogenetic tree. This extends the theory of perfect phylogeny in a natural and important way. The proof is constructive and leads to a polynomial-time algorithm to find the unique underlying maximal tree structure. We next examine and fully solve the major open question from Gusfield and Bansal (2005): Is it true that for every input there must be a fully decomposed phylogenetic network that minimizes the number of recombination nodes used, over all phylogenetic networks for the input? We previously conjectured that the answer is yes. In this paper, we show that the answer is no, both for the case that only single-crossover recombination is allowed, and also for the case that unbounded multiple-crossover recombination is allowed. The latter case also resolves a conjecture recently stated in (Huson and Klopper, 2007) in the context of reticulation networks. Although the conjecture from Gusfield and Bansal (2005) is disproved in general, we show that the answer to the conjecture is yes in several natural special cases, and establish necessary combinatorial structure that counterexamples to the conjecture must possess. We also show that counterexamples to the conjecture are rare (for the case of single-crossover recombination) in simulated data.
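For binary characters, the pairwise incompatibility used above is the classical four-gamete test; the short sketch below (Python with `networkx`, on a made-up sequence matrix) builds the incompatibility graph and lists its non-trivial connected components, the objects the decomposition theorem is organized around.

```python
import numpy as np
import networkx as nx
from itertools import combinations

# Rows = binary sequences, columns = sites/characters (hypothetical example).
M = np.array([[0, 0, 1, 0, 1],
              [0, 1, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [1, 0, 0, 1, 1],
              [0, 1, 0, 0, 1]])

def incompatible(col_i, col_j):
    """Four-gamete test: two binary sites conflict iff all four combinations
    00, 01, 10, 11 appear in some row."""
    gametes = {(a, b) for a, b in zip(col_i, col_j)}
    return len(gametes) == 4

G = nx.Graph()
G.add_nodes_from(range(M.shape[1]))
for i, j in combinations(range(M.shape[1]), 2):
    if incompatible(M[:, i], M[:, j]):
        G.add_edge(i, j)

# Non-trivial connected components (more than one site) are the units the
# decomposition theorem is built around.
nontrivial = [c for c in nx.connected_components(G) if len(c) > 1]
print(nontrivial)
```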
The banning of Frederick Wiseman's movie, "Titicut Follies"--censoring the evidence.
Neuhauser, Duncan
2003-01-01
Poor quality of care in the form of indifference and indignity can be seen as the filmmaker Frederick Wiseman shows in his classic documentary, "Titicut Follies," which was banned from public showing for 25 years.
Development and evaluation of a composite risk score to predict kidney transplant failure.
Moore, Jason; He, Xiang; Shabir, Shazia; Hanvesakul, Rajesh; Benavente, David; Cockwell, Paul; Little, Mark A; Ball, Simon; Inston, Nicholas; Johnston, Atholl; Borrows, Richard
2011-05-01
Although risk factors for kidney transplant failure are well described, prognostic risk scores to estimate risk in prevalent transplant recipients are limited. Development and validation of risk-prediction instruments. The development data set included 2,763 prevalent patients more than 12 months posttransplant enrolled into the LOTESS (Long Term Efficacy and Safety Surveillance) Study. The validation data set included 731 patients who underwent transplant at a single UK center. Estimated glomerular filtration rate (eGFR) and other risk factors were evaluated using Cox regression. Scores for death-censored and overall transplant failure were based on the summed hazard ratios for baseline predictor variables. Predictive performance was assessed using calibration (Hosmer-Lemeshow statistic), discrimination (C statistic), and clinical reclassification (net reclassification improvement) compared with eGFR alone. In the development data set, 196 patients died and another 225 experienced transplant failure. eGFR, recipient age, race, serum urea and albumin levels, declining eGFR, and prior acute rejection predicted death-censored transplant failure. eGFR, recipient age, sex, serum urea and albumin levels, and declining eGFR predicted overall transplant failure. In the validation data set, 44 patients died and another 101 experienced transplant failure. The weighted scores comprising these variables showed adequate discrimination and calibration for death-censored (C statistic, 0.83; 95% CI, 0.75-0.91; Hosmer-Lemeshow χ² P = 0.8) and overall (C statistic, 0.70; 95% CI, 0.64-0.77; Hosmer-Lemeshow χ² P = 0.5) transplant failure. However, the scores failed to reclassify risk compared with eGFR alone (net reclassification improvements of 7.6% [95% CI, -0.2 to 13.4; P = 0.09] and 4.3% [95% CI, -2.7 to 11.8; P = 0.3] for death-censored and overall transplant failure, respectively). Retrospective analysis of predominantly cyclosporine-treated patients; limited study size and categorization of variables may limit power to detect effect. Although the scores performed well regarding discrimination and calibration, clinically relevant risk reclassification over eGFR alone was not evident, emphasizing the stringent requirements for such scores. Further studies are required to develop and refine this process. Copyright © 2011 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.
Lei, Ting; Belykh, Evgenii; Dru, Alexander B; Yagmurlu, Kaan; Elhadi, Ali M; Nakaji, Peter; Preul, Mark C
2016-07-01
Chen Jingrun (1933-1996), perhaps the most prodigious mathematician of his time, focused on the field of analytical number theory. His work on Waring's problem, Legendre's conjecture, and Goldbach's conjecture led to progress in analytical number theory in the form of "Chen's Theorem," which he published in 1966 and 1973. His early life was ravaged by the Second Sino-Japanese War and the Chinese Cultural Revolution. On the verge of solving Goldbach's conjecture in 1984, Chen was struck by a bicyclist while also bicycling and suffered severe brain trauma. During his hospitalization, he was also found to have Parkinson's disease. Chen suffered another serious brain concussion after a fall only a few months after recovering from the bicycle crash. With significant deficits, he remained hospitalized for several years without making progress while receiving modern Western medical therapies. In 1988 traditional Chinese medicine experts were called in to assist with his treatment. After a year of acupuncture and oxygen therapy, Chen could control his basic bowel and bladder functions, he could walk slowly, and his swallowing and speech improved. When Chen was unable to produce complex work or finish his final work on Goldbach's conjecture, his mathematical pursuits were taken up vigorously by his dedicated students. He was able to publish Youth Math, a mathematics book that became an inspiration in Chinese education. Although he died in 1996 at the age of 63 after surviving brutal political repression, being deprived of neurological function at the very peak of his genius, and having to be supported by his wife, Chen ironically became a symbol of dedication, perseverance, and motivation to his students and associates, to Chinese youth, to a nation, and to mathematicians and scientists worldwide.
Invariance of separability probability over reduced states in 4 × 4 bipartite systems
NASA Astrophysics Data System (ADS)
Lovas, Attila; Andai, Attila
2017-07-01
The geometric separability probability of composite quantum systems has been extensively studied in recent decades. One of the simplest but strikingly difficult problems is to compute the separability probability of qubit-qubit and rebit-rebit quantum states with respect to the Hilbert-Schmidt measure. A lot of numerical simulations confirm the conjectured probabilities $P_{rebit-rebit}=\frac{29}{64}$ and $P_{qubit-qubit}=\frac{8}{33}$. We provide a rigorous proof for the separability probability in the real case and we give explicit integral formulas for the complex and quaternionic cases. Milz and Strunz studied the separability probability with respect to given subsystems. They conjectured that the separability probability of qubit-qubit (and qubit-qutrit) states of the form $\left(\begin{smallmatrix} D_1 & C \\ C^* & D_2 \end{smallmatrix}\right)$ depends on $D=D_1+D_2$ (on single qubit subsystems); moreover, it depends only on the Bloch radius r of D and is constant in r. Using the Peres-Horodecki criterion for separability we give a mathematical proof for the $\frac{29}{64}$ probability and we present an integral formula for the complex case which hopefully will help to prove the $\frac{8}{33}$ probability, too. We prove Milz and Strunz's conjecture for rebit-rebit and qubit-qubit states. The case when the state space is endowed with the volume form generated by the operator monotone function $f(x)=\sqrt{x}$ is also studied in detail. We show that even in this setting Milz and Strunz's conjecture holds true and we give an integral formula for the separability probability according to this measure.
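The conjectured 8/33 value can at least be checked numerically. The sketch below (a Monte Carlo estimate with numpy, not the paper's analytic argument) samples two-qubit density matrices from the Hilbert-Schmidt measure via complex Ginibre matrices and applies the Peres-Horodecki (PPT) criterion, which is necessary and sufficient for separability in the two-qubit case.

```python
import numpy as np

rng = np.random.default_rng(7)

def random_hs_state(d=4):
    """Density matrix drawn from the Hilbert-Schmidt measure:
    rho = G G^dagger / tr(G G^dagger) with G a complex Ginibre matrix."""
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = G @ G.conj().T
    return rho / np.trace(rho).real

def is_ppt(rho):
    """Peres-Horodecki criterion: the partial transpose on the second qubit must
    remain positive semidefinite; for two qubits PPT is equivalent to separability."""
    r = rho.reshape(2, 2, 2, 2)                    # index order (i, k, j, l): rows |i k>, columns |j l>
    r_pt = r.transpose(0, 3, 2, 1).reshape(4, 4)   # swap k <-> l: transpose the second subsystem
    return np.linalg.eigvalsh(r_pt).min() >= -1e-12

n = 100_000
hits = sum(is_ppt(random_hs_state()) for _ in range(n))
print(hits / n, "vs conjectured", 8 / 33)          # Monte Carlo estimate should land near 0.2424
```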
Censorship in Children's Literature: What Every Educator Should Know.
ERIC Educational Resources Information Center
Jalongo, Mary Renck; Creany, Anne Drolett
1991-01-01
Defines censorship and differentiates censorship from selection. Reviews the history of censorship and recent research trends. Describes typical censorable content and the consequences of censorship for libraries, books, and authors. Suggests strategies educators can use in dealing with censorship. (BC)
Protecting Holden Caulfield and His Friends from the Censors.
ERIC Educational Resources Information Center
Jenkinson, Edward B.
1985-01-01
Surveys the textbook censorship picture over the past decade with particular attention to the activities of Tim LaHaye and Norma and Mel Gabler. Suggests 10 steps teachers can take to try to protect controversial texts from censorship. (RBW)
NASA Astrophysics Data System (ADS)
Olinto, Angela V.
2014-03-01
Recent activities of the Cosmic Ray Science Interest Group (CosmicSIG) of the Physics of the Cosmos PAG will be reviewed. CosmicSIG was formed to provide an assessment to NASA HQ and the PCOS program office of the status of current and future missions in the area of cosmic-ray astrophysics. CosmicSIG also strives to act as a focal point and forum for the cosmic ray community.
Calculation of Cosmic Ray Induced Single Event Upsets: Program CRUP, Cosmic Ray Upset Program
1983-09-14
This report documents PROGRAM CRUP, the COSMIC RAY UPSET PROGRAM. The computer program calculates cosmic-ray-induced single-event upsets. (The remainder of the scanned abstract is not legible.)
Impact of Cosmic-Ray Transport on Galactic Winds
NASA Astrophysics Data System (ADS)
Farber, R.; Ruszkowski, M.; Yang, H.-Y. K.; Zweibel, E. G.
2018-04-01
The role of cosmic rays generated by supernovae and young stars has very recently begun to receive significant attention in studies of galaxy formation and evolution due to the realization that cosmic rays can efficiently accelerate galactic winds. Microscopic cosmic-ray transport processes are fundamental for determining the efficiency of cosmic-ray wind driving. Previous studies modeled cosmic-ray transport either via a constant diffusion coefficient or via streaming proportional to the Alfvén speed. However, in predominantly cold, neutral gas, cosmic rays can propagate faster than in the ionized medium, and the effective transport can be substantially larger; i.e., cosmic rays can decouple from the gas. We perform three-dimensional magnetohydrodynamical simulations of patches of galactic disks including the effects of cosmic rays. Our simulations include the decoupling of cosmic rays in the cold, neutral interstellar medium. We find that, compared to the ordinary diffusive cosmic-ray transport case, accounting for the decoupling leads to significantly different wind properties, such as the gas density and temperature, significantly broader spatial distribution of cosmic rays, and higher wind speed. These results have implications for X-ray, γ-ray, and radio emission, and for the magnetization and pollution of the circumgalactic medium by cosmic rays.
Hiroshima as Politics and History.
ERIC Educational Resources Information Center
Sherwin, Martin J.
1995-01-01
Argues that the objections raised to the Enola Gay exhibit are rooted in Cold War politics. Maintains that this historical myopia exemplifies the need for challenging historical inquiry. Characterizes opposition to the exhibit as largely political and discusses demands made to censor exhibit material. (MJP)
Evaluation of Options for Interpreting Environmental ...
Secondary data from the BioResponse Operational Testing and Evaluation project were used to study six options for interpreting culture-based/microbial count data sets that include left censored data, or measurements that are less than established quantification limits and/or detection limits.
ERIC Educational Resources Information Center
Fernandez, Melanie
Governments, groups, and individuals have always tried to control information. This paper examines censorship, particularly textbook censorship and its effect upon the curriculum, and opposes the recent trend to censor textbooks in public schools. Since the mission of public schooling involves indoctrination and socialization as much as education,…
Do Kids Need Government Censors?
ERIC Educational Resources Information Center
Rabkin, Rhoda
2002-01-01
Fashioning public policies restricting children's access to entertainment glamorizing violence, sex, drugs, and vulgarity is a complex task. The recently introduced Media Marketing Accountability Act would empower the federal government to regulate advertising of entertainment products to youth. Suggests that this power is undesirable compared to…
Protecting the Children: Huckleberry Finn, E.T. and the Politics of Censorship.
ERIC Educational Resources Information Center
Magistrale, Anthony
1984-01-01
Explicates core aspects of two censored narratives: the movie "E.T.: The Extraterrestrial" and the novel "Huckleberry Finn." Points out similarities between the two works and raises the issue of the estrangement of youth from adult society. (RH)
Prolongation structures of nonlinear evolution equations. II
NASA Technical Reports Server (NTRS)
Estabrook, F. B.; Wahlquist, H. D.
1976-01-01
The prolongation structure of a closed ideal of exterior differential forms is further discussed, and its use is illustrated by application to an ideal (in six dimensions) representing the cubically nonlinear Schrödinger equation. The prolongation structure in this case is explicitly given, and recurrence relations are derived which support the conjecture that the structure is open, i.e., does not terminate as a set of structure relations of a finite-dimensional Lie group. We introduce the use of multiple pseudopotentials to generate multiple Bäcklund transformations, and derive the double Bäcklund transformation. This symmetric transformation concisely expresses the (usually conjectured) theorem of permutability, which must consequently apply to all solutions irrespective of asymptotic constraints.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warnaar, S.O.
1996-07-01
We compute the one-dimensional configuration sums of the ABF model using the fermionic techniques introduced in part I of this paper. Combined with the results of Andrews, Baxter, and Forrester, we prove polynomial identities for finitizations of the Virasoro characters $\chi_{b,a}^{(r-1,r)}(q)$ as conjectured by Melzer. In the thermodynamic limit these identities reproduce Rogers-Ramanujan-type identities for the unitary minimal Virasoro characters conjectured by the Stony Brook group. We also present a list of additional Virasoro character identities which follow from our proof of Melzer's identities and an application of Bailey's lemma.
Computation of p -units in ray class fields of real quadratic number fields
NASA Astrophysics Data System (ADS)
Chapdelaine, Hugo
2009-12-01
Let K be a real quadratic field, let p be a prime number which is inert in K, and let K_p be the completion of K at p. As part of a Ph.D. thesis, we constructed a certain p-adic invariant $u \in K_p^{\times}$, and conjectured that u is, in fact, a p-unit in a suitable narrow ray class field of K. In this paper we give numerical evidence in support of that conjecture. Our method of computation is similar to the one developed by Dasgupta and relies on partial modular symbols attached to Eisenstein series.
On critical behaviour in generalized Kadomtsev-Petviashvili equations
NASA Astrophysics Data System (ADS)
Dubrovin, B.; Grava, T.; Klein, C.
2016-10-01
An asymptotic description of the formation of dispersive shock waves in solutions to the generalized Kadomtsev-Petviashvili (KP) equation is conjectured. The asymptotic description based on a multiscales expansion is given in terms of a special solution to an ordinary differential equation of the Painlevé I hierarchy. Several examples are discussed numerically to provide strong evidence for the validity of the conjecture. The numerical study of the long time behaviour of these examples indicates persistence of dispersive shock waves in solutions to the (subcritical) KP equations, while in the supercritical KP equations a blow-up occurs after the formation of the dispersive shock waves.
On the mathematical foundations of mutually unbiased bases
NASA Astrophysics Data System (ADS)
Thas, Koen
2018-02-01
In order to describe a setting to handle Zauner's conjecture on mutually unbiased bases (MUBs) (stating that in C^d, a set of MUBs of the theoretical maximal size d + 1 exists only if d is a prime power), we pose some fundamental questions which naturally arise. Some of these questions have important consequences for the construction theory of (new) sets of maximal MUBs. Partial answers will be provided in particular cases; more specifically, we will analyze MUBs with associated operator groups that have nilpotence class 2, and consider MUBs of height 1. We will also confirm Zauner's conjecture for MUBs with associated finite nilpotent operator groups.
Two-point correlation function for Dirichlet L-functions
NASA Astrophysics Data System (ADS)
Bogomolny, E.; Keating, J. P.
2013-03-01
The two-point correlation function for the zeros of Dirichlet L-functions at a height E on the critical line is calculated heuristically using a generalization of the Hardy-Littlewood conjecture for pairs of primes in arithmetic progression. The result matches the conjectured random-matrix form in the limit as E → ∞ and, importantly, includes finite-E corrections. These finite-E corrections differ from those in the case of the Riemann zeta-function, obtained in Bogomolny and Keating (1996 Phys. Rev. Lett. 77 1472), by certain finite products of primes which divide the modulus of the primitive character used to construct the L-function in question.
Quadratic Forms and Semiclassical Eigenfunction Hypothesis for Flat Tori
NASA Astrophysics Data System (ADS)
T. Sardari, Naser
2018-03-01
Let Q(X) be any integral primitive positive definite quadratic form in k variables, where k ≥ 4, with discriminant D. For any integer n, we give an upper bound on the number of integral solutions of Q(X) = n in terms of n, k, and D. As a corollary, we prove a conjecture of Lester and Rudnick on the small-scale equidistribution of almost all functions belonging to any orthonormal basis of a given eigenspace of the Laplacian on the flat torus T^d for d ≥ 5. This conjecture is motivated by the work of Berry [2,3] on the semiclassical eigenfunction hypothesis.
Integrals of motion from quantum toroidal algebras
NASA Astrophysics Data System (ADS)
Feigin, B.; Jimbo, M.; Mukhin, E.
2017-11-01
We identify the Taylor coefficients of the transfer matrices corresponding to quantum toroidal algebras with the elliptic local and non-local integrals of motion introduced by Kojima, Shiraishi, Watanabe, and one of the authors. That allows us to prove the Litvinov conjectures on the Intermediate Long Wave model. We also discuss the (gl_m, gl_n) duality of XXZ models in the quantum toroidal setting and the implications for the quantum KdV model. In particular, we conjecture that the spectrum of non-local integrals of motion of Bazhanov, Lukyanov, and Zamolodchikov is described by Gaudin Bethe ansatz equations associated to affine sl_2. Dedicated to the memory of Petr Petrovich Kulish.
Generalized clustering conditions of Jack polynomials at negative Jack parameter {alpha}
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernevig, B. Andrei; Department of Physics, Princeton University, Princeton, New Jersey 08544; Haldane, F. D. M.
We present several conjectures on the behavior and clustering properties of Jack polynomials at a negative parameter α = -(k+1)/(r-1), with partitions that violate the (k, r, N)-admissibility rule of Feigin et al. [Int. Math. Res. Notices 23, 1223 (2002)]. We find that the "highest weight" Jack polynomials of specific partitions represent the minimum-degree polynomials in N variables that vanish when s distinct clusters of k+1 particles are formed, where s and k are positive integers. Explicit counting formulas are conjectured. The generalized clustering conditions are useful in a forthcoming description of fractional quantum Hall quasiparticles.
Clustering in the Three and Four Color Cyclic Particle Systems in One Dimension
NASA Astrophysics Data System (ADS)
Foxall, Eric; Lyu, Hanbaek
2018-03-01
We study the κ-color cyclic particle system on the one-dimensional integer lattice Z, first introduced by Bramson and Griffeath (Ann. Probab., 26-45, 1989). In that paper they show that, almost surely, every site changes its color infinitely often if κ ∈ {3, 4} and only finitely many times if κ ≥ 5. In addition, they conjecture that for κ ∈ {3, 4} the system clusters, that is, for any pair of sites x, y, with probability tending to 1 as t → ∞, x and y have the same color at time t. Here we prove that conjecture.
Joint min-max distribution and Edwards-Anderson's order parameter of the circular 1/f-noise model
NASA Astrophysics Data System (ADS)
Cao, Xiangyu; Le Doussal, Pierre
2016-05-01
We calculate the joint min-max distribution and the Edwards-Anderson's order parameter for the circular model of 1/f-noise. Both quantities, as well as generalisations, are obtained exactly by combining the freezing-duality conjecture and Jack-polynomial techniques. Numerical checks come with significantly improved control of finite-size effects in the glassy phase, and the results convincingly validate the freezing-duality conjecture. Application to diffusive dynamics is discussed. We also provide a formula for the pre-factor ratio of the joint/marginal Carpentier-Le Doussal tail for minimum/maximum which applies to any logarithmic random energy model.
A duality principle for the multi-block entanglement entropy of free fermion systems.
Carrasco, J A; Finkel, F; González-López, A; Tempesta, P
2017-09-11
The analysis of the entanglement entropy of a subsystem of a one-dimensional quantum system is a powerful tool for unravelling its critical nature. For instance, the scaling behaviour of the entanglement entropy determines the central charge of the associated Virasoro algebra. For a free fermion system, the entanglement entropy depends essentially on two sets, namely the set A of sites of the subsystem considered and the set K of excited momentum modes. In this work we make use of a general duality principle establishing the invariance of the entanglement entropy under exchange of the sets A and K to tackle complex problems by studying their dual counterparts. The duality principle is also a key ingredient in the formulation of a novel conjecture for the asymptotic behavior of the entanglement entropy of a free fermion system in the general case in which both sets A and K consist of an arbitrary number of blocks. We have verified that this conjecture reproduces the numerical results with excellent precision for all the configurations analyzed. We have also applied the conjecture to deduce several asymptotic formulas for the mutual and r-partite information generalizing the known ones for the single block case.
Comment on ``Ratchet universality in the presence of thermal noise''
NASA Astrophysics Data System (ADS)
Quintero, Niurka R.; Alvarez-Nodarse, Renato; Cuesta, José A.
2013-12-01
A recent paper [P. J. Martínez and R. Chacón, Phys. Rev. E 87, 062114 (2013)] presents numerical simulations on a system exhibiting directed ratchet transport of a driven overdamped Brownian particle subjected to a spatially periodic, symmetric potential. The authors claim that their simulations prove the existence of a universal waveform of the external force that optimally enhances directed transport, hence confirming the validity of a previous conjecture put forth by one of them in the limit of vanishing noise intensity. With minor corrections due to noise, the conjecture holds even in the presence of noise, according to the authors. On the basis of their results the authors claim that all previous theories, which predict a different optimal force waveform, are incorrect. In this Comment we provide sufficient numerical evidence showing that there is no such universal force waveform and that the evidence obtained by the authors otherwise is due to their particular choice of parameters. Our simulations also suggest that previous theories correctly predict the shape of the optimal waveform within their validity regime, namely, when the forcing is weak. On the contrary, the aforementioned conjecture does not hold.
NASA Astrophysics Data System (ADS)
Tankeev, S. G.
2017-12-01
We prove that Grothendieck's standard conjecture B(X) of Lefschetz type on the algebraicity of the operators $\ast$ and $\Lambda$ of Hodge theory holds for a 4-dimensional smooth projective complex variety X fibred over a smooth projective curve C provided that every degenerate fibre is a union of smooth irreducible components of multiplicity 1 with normal crossings, the standard conjecture $B(X_{\overline\eta})$ holds for a generic geometric fibre $X_{\overline\eta}$, there is at least one degenerate fibre $X_\delta$, and the rational cohomology rings $H^\ast(V_i,\mathbb{Q})$ and $H^\ast(V_i\cap V_j,\mathbb{Q})$ of the irreducible components $V_i$ of every degenerate fibre $X_\delta=V_1+\dots+V_m$ are generated by classes of algebraic cycles. We obtain similar results for 3-dimensional fibred varieties with algebraic invariant cycles (defined by the smooth part $\pi'\colon X'\to C'$ of the structure morphism $\pi\colon X\to C$) or with a degenerate fibre all of whose irreducible components $E_i$ possess the property $H^2(E_i,\mathbb{Q})=\operatorname{NS}(E_i)\otimes_{\mathbb{Z}}\mathbb{Q}$.
Comment on "Ratchet universality in the presence of thermal noise".
Quintero, Niurka R; Alvarez-Nodarse, Renato; Cuesta, José A
2013-12-01
A recent paper [P. J. Martínez and R. Chacón, Phys. Rev. E 87, 062114 (2013)] presents numerical simulations on a system exhibiting directed ratchet transport of a driven overdamped Brownian particle subjected to a spatially periodic, symmetric potential. The authors claim that their simulations prove the existence of a universal waveform of the external force that optimally enhances directed transport, hence confirming the validity of a previous conjecture put forth by one of them in the limit of vanishing noise intensity. With minor corrections due to noise, the conjecture holds even in the presence of noise, according to the authors. On the basis of their results the authors claim that all previous theories, which predict a different optimal force waveform, are incorrect. In this Comment we provide sufficient numerical evidence showing that there is no such universal force waveform and that the evidence obtained by the authors otherwise is due to their particular choice of parameters. Our simulations also suggest that previous theories correctly predict the shape of the optimal waveform within their validity regime, namely, when the forcing is weak. On the contrary, the aforementioned conjecture does not hold.
A population-based study of hospital care costs during five years after TIA and stroke
Luengo-Fernandez, Ramon; Gray, Alastair M.; Rothwell, Peter M.
2016-01-01
Background and Purpose Few studies have evaluated long-term costs after stroke onset, with almost no cost data for TIA. We studied hospital costs during the 5 years after TIA or stroke in a population-based study. Methods Patients from a UK population-based cohort study (Oxford Vascular Study) were recruited from 2002 to 2007. Analysis was based on follow-up until 2010. Hospital resource usage was obtained from patients’ hospital records and valued using 2008/09 unit costs. As not all patients had full 5-year follow-up, we used non-parametric censoring techniques. Results Among 485 TIA and 729 stroke patients ascertained and included, mean censor-adjusted 5-year hospital costs after index stroke were $25,741 (95% CI: 23,659-27,914), with costs varying considerably by severity: $21,134 after minor stroke, $33,119 after moderate stroke, and $28,552 after severe stroke. For the 239 surviving stroke patients who had reached final follow-up, mean costs were $24,383 (20,156-28,595), with over half of costs ($12,972) being incurred in the first year after the event. After index TIA, the mean censor-adjusted 5-year costs were $18,091 (15,947-20,258). A multivariate analysis showed that event severity, recurrent stroke and coronary events after the index event were independent predictors of 5-year costs. Differences by stroke subtype were mostly explained by stroke severity and subsequent events. Conclusions Long-term hospital costs after TIA and stroke are considerable, but are mainly incurred over the first year after the index event. Event severity and suffering subsequent stroke and coronary events after the index event accounted for much of the increase in costs. PMID:23160884
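The abstract does not spell out which non-parametric censoring adjustment was used; one standard choice for censored cost data is an inverse-probability-of-censoring-weighted mean, sketched here on synthetic data with the `lifelines` Kaplan-Meier estimator for the censoring distribution (all numbers below are made up).

```python
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(8)
n = 1000
t_death = rng.exponential(scale=4.0, size=n)           # years from index event to death
t_cens = rng.uniform(1.0, 8.0, size=n)                 # administrative censoring (end of study)
horizon = 5.0
t = np.minimum.reduce([t_death, t_cens, np.full(n, horizon)])
complete = (t_death <= t_cens) | (t_cens >= horizon)   # 5-year cost history fully observed
cost = np.maximum(5000.0 * t + rng.normal(0.0, 2000.0, size=n), 0.0)   # hypothetical cumulative cost

# Kaplan-Meier estimate of the censoring survival function K(u) = P(censoring > u),
# treating deaths as censored observations for the censoring-time distribution.
kmf = KaplanMeierFitter().fit(t, event_observed=(t_cens < np.minimum(t_death, horizon)))
K = kmf.survival_function_at_times(np.clip(t, None, horizon - 1e-9)).to_numpy()

# Simple inverse-probability-of-censoring-weighted mean: complete cases weighted by 1/K.
w = complete / np.maximum(K, 1e-6)
print("censor-adjusted mean 5-year cost:", round(float(np.sum(w * cost) / n), 1))
```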
Virlogeux, Victor; Li, Ming; Tsang, Tim K; Feng, Luzhao; Fang, Vicky J; Jiang, Hui; Wu, Peng; Zheng, Jiandong; Lau, Eric H Y; Cao, Yu; Qin, Ying; Liao, Qiaohong; Yu, Hongjie; Cowling, Benjamin J
2015-10-15
A novel avian influenza virus, influenza A(H7N9), emerged in China in early 2013 and caused severe disease in humans, with infections occurring most frequently after recent exposure to live poultry. The distribution of A(H7N9) incubation periods is of interest to epidemiologists and public health officials, but estimation of the distribution is complicated by interval censoring of exposures. Imputation of the midpoint of intervals was used in some early studies, resulting in estimated mean incubation times of approximately 5 days. In this study, we estimated the incubation period distribution of human influenza A(H7N9) infections using exposure data available for 229 patients with laboratory-confirmed A(H7N9) infection from mainland China. A nonparametric model (Turnbull) and several parametric models accounting for the interval censoring in some exposures were fitted to the data. For the best-fitting parametric model (Weibull), the mean incubation period was 3.4 days (95% confidence interval: 3.0, 3.7) and the variance was 2.9 days; results were very similar for the nonparametric Turnbull estimate. Under the Weibull model, the 95th percentile of the incubation period distribution was 6.5 days (95% confidence interval: 5.9, 7.1). The midpoint approximation for interval-censored exposures led to overestimation of the mean incubation period. Public health observation of potentially exposed persons for 7 days after exposure would be appropriate. © The Author 2015. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
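The parametric part of such an analysis can be sketched as follows (numpy/scipy, synthetic exposure windows rather than the actual A(H7N9) line list): each case contributes the Weibull probability that its incubation period lies between the shortest and longest value compatible with its exposure interval.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

rng = np.random.default_rng(9)
n = 229
inc_true = weibull_min.rvs(2.2, scale=3.8, size=n, random_state=rng)   # true incubation (days)
expo_len = rng.uniform(0.5, 4.0, size=n)          # length of the exposure window
expo_time = rng.uniform(0.0, expo_len)            # actual (unobserved) exposure within the window
onset = expo_time + inc_true
# Observed bounds on the incubation period given the exposure window [0, expo_len]:
lo = np.maximum(onset - expo_len, 1e-6)
hi = onset

def neg_loglik(theta):
    """Interval-censored likelihood: P(lo_i < incubation <= hi_i) under a Weibull."""
    shape, scale = np.exp(theta)
    p = weibull_min.cdf(hi, shape, scale=scale) - weibull_min.cdf(lo, shape, scale=scale)
    return -np.sum(np.log(np.maximum(p, 1e-300)))

fit = minimize(neg_loglik, x0=[0.0, np.log(3.0)], method="Nelder-Mead")
shape_hat, scale_hat = np.exp(fit.x)
print("mean incubation:", round(weibull_min.mean(shape_hat, scale=scale_hat), 2),
      "95th percentile:", round(weibull_min.ppf(0.95, shape_hat, scale=scale_hat), 2))
```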
Statistical Techniques to Analyze Pesticide Data Program Food Residue Observations.
Szarka, Arpad Z; Hayworth, Carol G; Ramanarayanan, Tharacad S; Joseph, Robert S I
2018-06-26
The U.S. EPA conducts dietary-risk assessments to ensure that levels of pesticides on food in the U.S. food supply are safe. Often these assessments utilize conservative residue estimates, maximum residue levels (MRLs), and a high-end estimate derived from registrant-generated field-trial data sets. A more realistic estimate of consumers' pesticide exposure from food may be obtained by utilizing residues from food-monitoring programs, such as the Pesticide Data Program (PDP) of the U.S. Department of Agriculture. A substantial portion of food-residue concentrations in PDP monitoring programs are below the limits of detection (left-censored), which makes the comparison of regulatory-field-trial and PDP residue levels difficult. In this paper, we present a novel adaption of established statistical techniques, the Kaplan-Meier estimator (K-M), the robust regression on ordered statistic (ROS), and the maximum-likelihood estimator (MLE), to quantify the pesticide-residue concentrations in the presence of heavily censored data sets. The examined statistical approaches include the most commonly used parametric and nonparametric methods for handling left-censored data that have been used in the fields of medical and environmental sciences. This work presents a case study in which data of thiamethoxam residue on bell pepper generated from registrant field trials were compared with PDP-monitoring residue values. The results from the statistical techniques were evaluated and compared with commonly used simple substitution methods for the determination of summary statistics. It was found that the maximum-likelihood estimator (MLE) is the most appropriate statistical method to analyze this residue data set. Using the MLE technique, the data analyses showed that the median and mean PDP bell pepper residue levels were approximately 19 and 7 times lower, respectively, than the corresponding statistics of the field-trial residues.
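A compact sketch of the MLE approach the authors favour, assuming log-normally distributed residues and a single limit of detection (synthetic data; the Kaplan-Meier and ROS alternatives are not shown): non-detects contribute the CDF mass below the LOD rather than a substituted value.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(10)
true_logres = rng.normal(loc=np.log(0.02), scale=1.0, size=400)   # hypothetical residues (ppm, log scale)
lod = 0.01
detected = np.exp(true_logres) >= lod
logres = np.where(detected, true_logres, np.log(lod))             # non-detects reported at the LOD

def neg_loglik(theta):
    """Left-censored log-normal likelihood: density for detects, CDF at log(LOD) for non-detects."""
    mu, log_sig = theta
    sig = np.exp(log_sig)
    ll = np.where(detected, norm.logpdf(logres, mu, sig), norm.logcdf(np.log(lod), mu, sig))
    return -np.sum(ll)

fit = minimize(neg_loglik, x0=[np.log(lod), 0.0], method="Nelder-Mead")
mu_hat, sig_hat = fit.x[0], np.exp(fit.x[1])
# Summary statistics on the original scale (log-normal median and mean).
print("median:", round(np.exp(mu_hat), 4), "mean:", round(np.exp(mu_hat + 0.5 * sig_hat**2), 4))
```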
Wang, Yuanjia; Chen, Tianle; Zeng, Donglin
2016-01-01
Learning risk scores to predict dichotomous or continuous outcomes using machine learning approaches has been studied extensively. However, how to learn risk scores for time-to-event outcomes subject to right censoring has received little attention until recently. Existing approaches rely on inverse probability weighting or rank-based regression, which may be inefficient. In this paper, we develop a new support vector hazards machine (SVHM) approach to predict censored outcomes. Our method is based on predicting the counting process associated with the time-to-event outcomes among subjects at risk via a series of support vector machines. Introducing counting processes to represent time-to-event data leads to a connection between support vector machines in supervised learning and hazards regression in standard survival analysis. To account for different at risk populations at observed event times, a time-varying offset is used in estimating risk scores. The resulting optimization is a convex quadratic programming problem that can easily incorporate non-linearity using kernel trick. We demonstrate an interesting link from the profiled empirical risk function of SVHM to the Cox partial likelihood. We then formally show that SVHM is optimal in discriminating covariate-specific hazard function from population average hazard function, and establish the consistency and learning rate of the predicted risk using the estimated risk scores. Simulation studies show improved prediction accuracy of the event times using SVHM compared to existing machine learning methods and standard conventional approaches. Finally, we analyze two real world biomedical study data where we use clinical markers and neuroimaging biomarkers to predict age-at-onset of a disease, and demonstrate superiority of SVHM in distinguishing high risk versus low risk subjects.
Kubo, Yumi; Sterling, Lulu Ren; Parfrey, Patrick S; Gill, Karminder; Mahaffey, Kenneth W; Gioni, Ioanna; Trotman, Marie-Louise; Dehmel, Bastian; Chertow, Glenn M
2015-01-01
Intention-to-treat (ITT) analysis is widely used to establish efficacy in randomized clinical trials. However, in a long-term outcomes study where non-adherence to study drug is substantial, the on-treatment effect of the study drug may be underestimated using the ITT analysis. The analyses presented herein are from the EVOLVE trial, a double-blind, placebo-controlled, event-driven cardiovascular outcomes study conducted to assess whether a treatment regimen including cinacalcet compared with placebo in addition to other conventional therapies reduces the risk of mortality and major cardiovascular events in patients receiving hemodialysis with secondary hyperparathyroidism. Pre-specified sensitivity analyses were performed to assess the impact of non-adherence on the estimated effect of cinacalcet. These analyses included lag-censoring, inverse probability of censoring weights (IPCW), rank preserving structural failure time model (RPSFTM) and iterative parameter estimation (IPE). The relative hazard (cinacalcet versus placebo) of mortality and major cardiovascular events was 0.93 (95% confidence interval 0.85, 1.02) using the ITT analysis; 0.85 (0.76, 0.95) using lag-censoring analysis; 0.81 (0.70, 0.92) using IPCW; 0.85 (0.66, 1.04) using RPSFTM and 0.85 (0.75, 0.96) using IPE. These analyses, while not providing definitive evidence, suggest that the intervention may have an effect while subjects are receiving treatment. The ITT method remains the established method to evaluate efficacy of a new treatment; however, additional analyses should be considered to assess the on-treatment effect when substantial non-adherence to study drug is expected or observed. Copyright © 2015 John Wiley & Sons, Ltd.
Dou, Z; Chen, J; Jiang, Z; Song, W L; Xu, J; Wu, Z Y
2017-11-10
Objective: To understand the distribution of population viral load (PVL) data in HIV-infected men who have sex with men (MSM), fit a distribution function, and explore appropriate parameters for estimating PVL. Methods: The detection limit of viral load (VL) was ≤ 50 copies/ml. Box-Cox transformation and normal distribution tests were used to describe the general distribution characteristics of the original and transformed PVL data; a stable distribution function was then fitted and assessed with a goodness-of-fit test. Results: The original PVL data followed a skewed distribution with a coefficient of variation of 622.24%, and had a multimodal distribution after Box-Cox transformation with optimal parameter λ = -0.11. The distribution of PVL data above the detection limit was skewed and heavy-tailed when Box-Cox transformed with optimal λ = 0. The transformed data above the detection limit matched the stable distribution (SD) function (α = 1.70, β = -1.00, γ = 0.78, δ = 4.03). Conclusions: The original PVL data contained censored values below the detection limit, and the data above the detection limit had an abnormal distribution with a large degree of variation. When the proportion of censored data was large, it was inappropriate to replace the censored values with half the detection limit. The log-transformed data above the detection limit fitted the SD. The median (M) and inter-quartile range (IQR) of the log-transformed data can be used to describe the central tendency and dispersion of the data above the detection limit.
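A short sketch of the described workflow on synthetic viral loads (numpy/scipy assumed; the stable-distribution fit itself is omitted): set aside values at or below the detection limit, Box-Cox-transform the remainder to find the optimal λ, and summarize the detectable data with median and IQR instead of imputing half the detection limit.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
vl = np.exp(rng.normal(loc=6.0, scale=2.5, size=2000))   # hypothetical viral loads (copies/ml)
limit = 50.0
above = vl[vl > limit]                                   # censored values below the limit are set aside
print("fraction censored:", round(1 - above.size / vl.size, 3))

# Box-Cox transformation of the detectable values; with lmbda unspecified,
# scipy returns the transformed data and the optimal lambda.
transformed, lam = stats.boxcox(above)
print("optimal lambda:", round(lam, 2))

# Centre and spread of the log-transformed detectable data, as the abstract recommends.
logvl = np.log10(above)
q1, med, q3 = np.percentile(logvl, [25, 50, 75])
print("median (IQR) of log10 VL:", round(med, 2), (round(q1, 2), round(q3, 2)))
```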
NASA Astrophysics Data System (ADS)
Kong, Jing
This thesis includes four pieces of work. In Chapter 1, we present a method for examining mortality as it is seen to run in families, and lifestyle factors that are also seen to run in families, in a subpopulation of the Beaver Dam Eye Study that had died by 2011. We find significant distance correlations between death ages, lifestyle factors, and family relationships. Considering only sib pairs compared to unrelated persons, the distance correlation between siblings and mortality is, not surprisingly, stronger than that between more distantly related family members and mortality. Chapter 2 introduces a feature screening procedure based on distance correlation and covariance. We demonstrate a property of distance covariance, which is incorporated in a novel feature screening procedure that uses distance correlation as a stopping criterion. The approach is further applied to two real examples, namely the famous small round blue cell tumors data and the Cancer Genome Atlas ovarian cancer data. Chapter 3 addresses right-censored human longevity data and the estimation of lifetime expectancy. We propose a general framework of backward multiple imputation for estimating the conditional lifetime expectancy function and the variance of the estimator in the right-censoring setting, and prove the properties of the estimator. In addition, we apply the method to the Beaver Dam Eye Study data to study human longevity, where the expected human lifetime is modeled with smoothing spline ANOVA based on covariates including baseline age, gender, lifestyle factors, and disease variables. Chapter 4 compares two imputation methods for right-censored data, namely the famous Buckley-James estimator and the backward imputation method proposed in Chapter 3, and shows that the backward imputation method is less biased and more robust under heterogeneity.
Nishiura, Hiroshi; Inaba, Hisashi
2011-03-07
Empirical estimates of the incubation period of influenza A (H1N1-2009) have been limited. We estimated the incubation period among confirmed imported cases who traveled to Japan from Hawaii during the early phase of the 2009 pandemic (n=72). We addressed censoring and employed an infection-age structured argument to explicitly model the daily frequency of illness onset after departure. We assumed uniform and exponential distributions for the frequency of exposure in Hawaii, and the hazard rate of infection for the latter assumption was retrieved, in Hawaii, from local outbreak data. The maximum likelihood estimates of the median incubation period range from 1.43 to 1.64 days according to different modeling assumptions, consistent with a published estimate based on a New York school outbreak. The likelihood values of the different modeling assumptions do not differ greatly from each other, although models with the exponential assumption yield slightly shorter incubation periods than those with the uniform exposure assumption. Differences between our proposed approach and a published method for doubly interval-censored analysis highlight the importance of accounting for the dependence of the frequency of exposure on the survival function of incubating individuals among imported cases. A truncation of the density function of the incubation period due to an absence of illness onset during the exposure period also needs to be considered. When the data generating process is similar to that among imported cases, and when the incubation period is close to or shorter than the length of exposure, accounting for these aspects is critical for long exposure times. Copyright © 2010 Elsevier Ltd. All rights reserved.
Early statin use is an independent predictor of long-term graft survival
Moreso, Francesc; Calvo, Natividad; Pascual, Julio; Anaya, Fernando; Jiménez, Carlos; del Castillo, Domingo; Sánchez-Plumed, Jaime; Serón, Daniel
2010-01-01
Background. Statin use in renal transplantation has been associated with a lower risk of patient death but not with an improvement of graft functional survival. The aim of this study is to evaluate the effect of statin use in graft survival, death-censored graft survival and patient survival using the data recorded on the Spanish Late Allograft Dysfunction Study Group. Patients and methods. Patients receiving a renal allograft in Spain in 1990, 1994, 1998 and 2002 were considered. Since the mean follow-up in the 2002 cohort was 3 years, statin use was analysed considering its introduction during the first year or during the initial 2 years after transplantation. Univariate and multivariate Cox regression analyses with a propensity score for statin use were employed to analyse graft survival, death-censored graft survival and patient survival. Results. In the 4682 evaluated patients, the early statin use after transplantation significantly increased from 1990 to 2002 (12.7%, 27.9%, 47.7% and 53.0%, P < 0.001). Statin use during the first year was not associated with graft or patient survival. Statin use during the initial 2 years was associated with a lower risk of graft failure (relative risk [RR] = 0.741 and 95% confidence interval [CI] = 0.635–0.866, P < 0.001) and patient death (RR = 0.806 and 95% CI = 0.656–0.989, P = 0.039). Death-censored graft survival was not associated with statin use during the initial 2 years. Conclusion. The early introduction of statin treatment after transplantation is associated with a significant decrease in late graft failure due to a risk reduction in patient death. PMID:20508861
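A hedged sketch of the general analysis pattern described above (a Cox model for graft failure that adjusts for a propensity score for statin use) is given below, using simulated data rather than the Spanish registry; lifelines and scikit-learn are assumed to be available, and all covariates and effect sizes are placeholders.

    import numpy as np, pandas as pd
    from sklearn.linear_model import LogisticRegression
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(1)
    n = 1000
    age      = rng.normal(50, 10, n)
    diabetes = rng.binomial(1, 0.3, n)
    # treatment (statin use) depends on covariates -> confounding
    p_statin = 1 / (1 + np.exp(-(-2 + 0.03 * age + 0.5 * diabetes)))
    statin   = rng.binomial(1, p_statin)
    # simulated graft survival times with independent censoring
    hazard = 0.02 * np.exp(0.02 * age + 0.4 * diabetes - 0.3 * statin)
    time   = rng.exponential(1 / hazard)
    cens   = rng.exponential(15, n)
    df = pd.DataFrame({"time": np.minimum(time, cens),
                       "event": (time <= cens).astype(int),
                       "statin": statin, "age": age, "diabetes": diabetes})

    # propensity score for statin use from baseline covariates
    ps = LogisticRegression().fit(df[["age", "diabetes"]], df["statin"])
    df["pscore"] = ps.predict_proba(df[["age", "diabetes"]])[:, 1]

    # Cox model for graft failure adjusting for the propensity score
    cph = CoxPHFitter().fit(df[["time", "event", "statin", "pscore"]],
                            duration_col="time", event_col="event")
    cph.print_summary()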
Sun, Wanjie; Larsen, Michael D; Lachin, John M
2014-04-15
In longitudinal studies, a quantitative outcome (such as blood pressure) may be altered during follow-up by the administration of a non-randomized, non-trial intervention (such as anti-hypertensive medication), which can seriously bias the study results. Current methods mainly address this issue for cross-sectional studies; for longitudinal data, they are either restricted to a specific longitudinal data structure or are valid only under special circumstances. We propose two new methods for estimating covariate effects on the underlying (untreated) longitudinal outcomes: a single imputation method employing a modified expectation-maximization (EM)-type algorithm and a multiple imputation (MI) method utilizing a modified Monte Carlo EM-MI algorithm. Each method can be implemented as a one-step, two-step, or full-iteration algorithm. The methods combine the advantages of current statistical approaches while relaxing their restrictive assumptions and generalizing them to realistic scenarios. They replace intractable numerical integration of a multidimensionally censored multivariate normal posterior distribution with a simplified, sufficiently accurate approximation, which is particularly attractive when outcomes plateau after intervention for various reasons. The methods are studied via simulation and applied to data from the Diabetes Control and Complications Trial/Epidemiology of Diabetes Interventions and Complications study of treatment for type 1 diabetes. They proved robust to high dimensions, large amounts of censored data, and low within-subject correlation, both when subjects received the non-trial intervention only to treat the underlying condition (with high Y) and when treatment of the majority of subjects (with high Y) was combined with prevention for a small fraction of subjects (with normal Y). Copyright © 2013 John Wiley & Sons, Ltd.
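The key ingredient of such EM-type imputation can be illustrated in one dimension: if the untreated outcome is only known to exceed the observed (treated) value, the E-step replaces it by the mean of a truncated normal. This is only a sketch of that single ingredient under an assumed normal model, not the multivariate algorithm of the paper; the numbers are invented.

    import numpy as np
    from scipy.stats import norm

    def truncated_normal_mean(mu, sigma, lower):
        """E[Y | Y > lower] for Y ~ N(mu, sigma^2): the single-imputation value used
        in an EM-type E-step when the untreated outcome is only known to exceed 'lower'."""
        alpha = (lower - mu) / sigma
        return mu + sigma * norm.pdf(alpha) / norm.sf(alpha)

    # e.g. untreated systolic blood pressure modeled as N(150, 15^2); the observed
    # (treated) value 140 is taken to say only that the untreated value exceeds 140
    print(truncated_normal_mean(mu=150.0, sigma=15.0, lower=140.0))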
Cheung, Li C; Pan, Qing; Hyun, Noorie; Schiffman, Mark; Fetterman, Barbara; Castle, Philip E; Lorey, Thomas; Katki, Hormuzd A
2017-09-30
For cost-effectiveness and efficiency, many large-scale general-purpose cohort studies are being assembled within large health-care providers who use electronic health records. Two key features of such data are that incident disease is interval-censored between irregular visits and that there can be pre-existing (prevalent) disease. Because prevalent disease is not always immediately diagnosed, some disease diagnosed at later visits is actually undiagnosed prevalent disease. We consider prevalent disease as a point mass at time zero for clinical applications where there is no interest in the time of prevalent disease onset. We demonstrate that the naive Kaplan-Meier cumulative risk estimator underestimates risks at early time points and overestimates later risks. We propose a general family of mixture models for undiagnosed prevalent disease and interval-censored incident disease that we call prevalence-incidence models. Parameters of parametric prevalence-incidence models, such as the logistic regression and Weibull survival (logistic-Weibull) model, are estimated by direct likelihood maximization or by an EM algorithm. Non-parametric methods are proposed to calculate cumulative risks for the case without covariates. We compare naive Kaplan-Meier, logistic-Weibull, and non-parametric estimates of cumulative risk in the cervical cancer screening program at Kaiser Permanente Northern California. Kaplan-Meier provided poor estimates, while the logistic-Weibull model was a close fit to the non-parametric estimates. Our findings support our use of logistic-Weibull models to develop the risk estimates that underlie current US risk-based cervical cancer screening guidelines. Published 2017. This article has been contributed to by US Government employees and their work is in the public domain in the USA.
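As a sketch of what a prevalence-incidence model implies for cumulative risk, the mixture below combines a point mass of undiagnosed prevalent disease at time zero with a Weibull incident-disease distribution; the parametrization and numbers are illustrative assumptions, and in the logistic-Weibull model the prevalence probability would itself be a logistic function of covariates.

    import numpy as np

    def prevalence_incidence_risk(t, p_prev, shape, scale):
        """Cumulative risk at time t under a prevalence-incidence mixture:
        point mass p_prev of (undiagnosed) prevalent disease at t = 0 plus
        a Weibull(shape, scale) time to incident disease for the remainder."""
        weibull_cdf = 1.0 - np.exp(-(np.asarray(t, float) / scale) ** shape)
        return p_prev + (1.0 - p_prev) * weibull_cdf

    t = np.array([0.0, 1.0, 3.0, 5.0])
    print(prevalence_incidence_risk(t, p_prev=0.02, shape=1.3, scale=20.0))
    # risk(0) = 0.02: the point mass that a naive Kaplan-Meier spreads over later
    # times, underestimating early risk and overestimating late risk.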
Katki, Hormuzd A; Cheung, Li C; Fetterman, Barbara; Castle, Philip E; Sundaram, Rajeshwari
2015-10-01
New cervical cancer screening guidelines in the US and many European countries recommend that women get tested for human papillomavirus (HPV). To inform decisions about screening intervals, we calculate the increase in precancer/cancer risk per year of continued HPV infection. However, both time to onset of precancer/cancer and time to HPV clearance are interval-censored, and onset of precancer/cancer strongly informatively censors HPV clearance. We analyze this bivariate informatively interval-censored data by developing a novel joint model for time to clearance of HPV and time to precancer/cancer using shared random-effects, where the estimated mean duration of each woman's HPV infection is a covariate in the submodel for time to precancer/cancer. The model was fit to data on 9,553 HPV-positive/Pap-negative women undergoing cervical cancer screening at Kaiser Permanente Northern California, data that were pivotal to the development of US screening guidelines. We compare the implications for screening intervals of this joint model to those from population-average marginal models of precancer/cancer risk. In particular, after 2 years the marginal population-average precancer/cancer risk was 5%, suggesting a 2-year interval to control population-average risk at 5%. In contrast, the joint model reveals that almost all women exceeding 5% individual risk in 2 years also exceeded 5% in 1 year, suggesting that a 1-year interval is better to control individual risk at 5%. The example suggests that sophisticated risk models capable of predicting individual risk may have different implications than population-average risk models that are currently used for informing medical guideline development.
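The distinction between individual and population-average risk can be illustrated with a toy gamma-frailty calculation (not the authors' joint model): each woman's risk is 1 - exp(-Z H(t)) for an individual frailty Z, while the marginal risk averages over Z via the gamma Laplace transform. All numbers below are invented for illustration.

    import numpy as np

    def marginal_risk(H, theta):
        """Population-average risk when individual risk is 1 - exp(-Z*H)
        and Z ~ Gamma with mean 1 and variance theta (Laplace transform)."""
        return 1.0 - (1.0 + theta * H) ** (-1.0 / theta)

    def individual_risk(H, z):
        return 1.0 - np.exp(-z * H)

    H1, H2 = 0.020, 0.040        # toy cumulative hazards at 1 and 2 years
    theta = 2.0                  # large heterogeneity between women
    print("marginal risk at 2 years:", marginal_risk(H2, theta))
    # a woman with high frailty already exceeds the 2-year marginal risk after 1 year:
    print("individual risk at 1 year, z = 5:", individual_risk(H1, 5.0))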
The microphysics and macrophysics of cosmic rays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zweibel, Ellen G.
2013-05-15
This review paper commemorates a century of cosmic ray research, with emphasis on the plasma physics aspects. Cosmic rays comprise only ∼10⁻⁹ of interstellar particles by number, but collectively their energy density is about equal to that of the thermal particles. They are confined by the Galactic magnetic field and well scattered by small scale magnetic fluctuations, which couple them to the local rest frame of the thermal fluid. Scattering isotropizes the cosmic rays and allows them to exchange momentum and energy with the background medium. I will review a theory for how the fluctuations which scatter the cosmic rays can be generated by the cosmic rays themselves through a microinstability excited by their streaming. A quasilinear treatment of the cosmic ray–wave interaction then leads to a fluid model of cosmic rays with both advection and diffusion by the background medium and momentum and energy deposition by the cosmic rays. This fluid model admits cosmic ray modified shocks, large scale cosmic ray driven instabilities, cosmic ray heating of the thermal gas, and cosmic ray driven galactic winds. If the fluctuations were extrinsic turbulence driven by some other mechanism, the cosmic ray–background coupling would be entirely different. Which picture holds depends largely on the nature of turbulence in the background medium.
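For reference, the fluid description alluded to here is usually written as a diffusion-advection equation for the isotropic cosmic-ray phase-space density f(x, p, t); a standard Parker-type form is

    \frac{\partial f}{\partial t} + \mathbf{u}\cdot\nabla f
      = \nabla\cdot\big(\kappa\,\nabla f\big)
      + \frac{1}{3}\,(\nabla\cdot\mathbf{u})\,p\,\frac{\partial f}{\partial p} + Q,

with κ the wave-regulated spatial diffusion tensor and Q a source term. In the self-confinement picture discussed in the review, the advection velocity u is effectively replaced by u + v_A, the gas velocity plus the Alfvén velocity of the self-excited waves. The notation here is the textbook one and may differ from the review's.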
Calculation of cosmic ray induced single event upsets: Program CRUP (Cosmic Ray Upset Program)
NASA Astrophysics Data System (ADS)
Shapiro, P.
1983-09-01
This report documents program CRUP (Cosmic Ray Upset Program), a computer program that calculates cosmic-ray-induced single-event error rates in microelectronic circuits exposed to several representative cosmic-ray environments.
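The quantity such codes evaluate can be sketched, very roughly, as the fold of a measured single-event-upset cross-section curve (commonly fit with a Weibull form) with the differential LET spectrum of the chosen environment. The Python sketch below is only that crude approximation; it ignores the chord-length/sensitive-volume geometry a production code would use, is not CRUP's algorithm, and uses placeholder numbers throughout.

    import numpy as np

    def weibull_cross_section(L, sigma_sat, L_th, W, s):
        """Common Weibull fit to a measured SEU cross-section vs. LET curve (cm^2/device)."""
        x = np.clip((np.asarray(L, float) - L_th) / W, 0.0, None)
        return sigma_sat * (1.0 - np.exp(-x ** s))

    # placeholder differential LET flux, particles/(cm^2 s) per unit LET
    L = np.linspace(1.0, 100.0, 1000)
    diff_flux = 1e-3 * L ** -3.0

    sigma = weibull_cross_section(L, sigma_sat=1e-6, L_th=3.0, W=20.0, s=1.5)
    integrand = sigma * diff_flux
    upset_rate = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(L))
    print(upset_rate * 86400.0, "upsets/device-day (illustrative only)")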
NASA Technical Reports Server (NTRS)
Eichler, D.
1986-01-01
Data related to the development of cosmic rays are discussed. The relationship between cosmic ray production and the steady-state Boltzmann equation is analyzed. The importance of the power-law spectrum, the scattering rate, the theory of shock acceleration, anisotropic instabilities, and cosmic ray diffusion in the formation of cosmic rays is described. It is noted that spacecraft observations at the earth's bow shock are useful for studying cosmic rays and that the data support the collisionless shock-wave theory of cosmic ray origin.
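For context, the power-law spectrum and shock-acceleration theory mentioned here are tied together by the standard test-particle result of diffusive shock acceleration (a textbook result, not a formula from this report): for a shock with compression ratio r,

    f(p) \propto p^{-3r/(r-1)}, \qquad
    N(E) \propto E^{-(r+2)/(r-1)} \quad \text{(relativistic particles)},

so a strong shock with r = 4 gives N(E) ∝ E⁻², close to the inferred cosmic-ray source spectrum.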
METHODS OF DEALING WITH VALUES BELOW THE LIMIT OF DETECTION USING SAS
Due to limitations of chemical analysis procedures, small concentrations cannot be precisely measured. These concentrations are said to be below the limit of detection (LOD). In statistical analyses, these values are often censored and substituted with a constant value, such ...
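A hedged sketch of the two basic options alluded to above, constant substitution versus a left-censored maximum-likelihood fit, is given below in Python/SciPy rather than SAS; the data, the LOD, and the lognormal assumption are all invented for illustration.

    import numpy as np
    from scipy import stats, optimize

    rng = np.random.default_rng(2)
    true_conc = rng.lognormal(mean=0.0, sigma=1.0, size=200)
    LOD = 0.5
    detected = true_conc >= LOD
    obs = np.where(detected, true_conc, np.nan)       # non-detects reported only as "< LOD"

    # (a) naive substitution with LOD/2
    sub = np.where(detected, obs, LOD / 2.0)
    print("substitution mean of log-concentration:", np.log(sub).mean())

    # (b) left-censored lognormal MLE: detects contribute the density,
    #     non-detects contribute the probability of falling below the LOD
    def neg_loglik(theta):
        mu, log_sigma = theta
        sigma = np.exp(log_sigma)
        ll_det = stats.norm.logpdf(np.log(obs[detected]), mu, sigma).sum()
        ll_cen = (~detected).sum() * stats.norm.logcdf(np.log(LOD), mu, sigma)
        return -(ll_det + ll_cen)

    res = optimize.minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
    print("censored MLE of log-concentration mean:", res.x[0])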
ERIC Educational Resources Information Center
Karolides, Nicholas J., Ed.; Burress, Lee, Ed.
Intended to provide rationales for the use of a group of controversial books in American public schools, this manual explains the educational values of 33 of the most frequently challenged books. Detailed rationales include such information as historical perspective, literary elements, pertinence, and thematic content. Some of the titles include…
The Censored Curriculum: The Problem with Textbooks Today.
ERIC Educational Resources Information Center
Ornstein, Allan C.
1992-01-01
All major textbook companies conform to the preferences of the larger educational markets (California, Illinois, New York, Texas, and Florida) and exercise self-censorship to appease dissenting factions and avoid alienating pressure groups. Recent censorship controversies have involved the sanctity of the family, criticism of the free enterprise system,…
Supreme Court Deals Blow to Student Journalists.
ERIC Educational Resources Information Center
Gynn, Ann
1989-01-01
Covers the U.S. Supreme Court decision in Hazelwood School District v. Kuhlmeier, which gave principals the right to censor school publications. In "One Student's Pursuit of Journalism," Alexandra Salas relates one student journalist's experience, including internships, from high school through the end of college. (LS)
LOCAL EM ESTIMATION OF THE HAZARD FUNCTION FOR INTERVAL CENSORED DATA. (R824757)
The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...
Korsgaard, Inge Riis; Lund, Mogens Sandø; Sorensen, Daniel; Gianola, Daniel; Madsen, Per; Jensen, Just
2003-01-01
A fully Bayesian analysis using Gibbs sampling and data augmentation in a multivariate model of Gaussian, right censored, and grouped Gaussian traits is described. The grouped Gaussian traits are either ordered categorical traits (with more than two categories) or binary traits, where the grouping is determined via thresholds on the underlying Gaussian scale, the liability scale. Allowances are made for unequal models, unknown covariance matrices and missing data. After the theory is outlined, strategies for implementation are reviewed. These include joint sampling of location parameters; efficient sampling from the fully conditional posterior distribution of augmented data, a multivariate truncated normal distribution; and sampling from the conditional inverse Wishart distribution, the fully conditional posterior distribution of the residual covariance matrix. Finally, a simulated dataset was analysed to illustrate the methodology. This paper concentrates on a model where residuals associated with liabilities of the binary traits are assumed to be independent. A Bayesian analysis using Gibbs sampling is outlined for the model where this assumption is relaxed. PMID:12633531
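The data-augmentation idea can be sketched for a single right-censored Gaussian trait: censored records are drawn from their truncated-normal full conditional, after which the location and variance are drawn from standard conjugate forms. The sketch below uses simple reference priors and omits the multi-trait, threshold-model machinery of the paper; all data are simulated.

    import numpy as np
    from scipy.stats import truncnorm, invgamma

    rng = np.random.default_rng(3)
    n = 300
    y_true = rng.normal(100.0, 10.0, n)
    c = rng.uniform(90.0, 120.0, n)                 # right-censoring points
    observed = y_true <= c
    y = np.where(observed, y_true, c)               # censored records carry only the bound

    mu, sig2 = 100.0, 25.0                          # initial values
    keep_mu = []
    for it in range(2000):
        # 1) data augmentation: draw censored values from N(mu, sig2) truncated to (c, inf)
        sd = np.sqrt(sig2)
        a = (c[~observed] - mu) / sd                # standardized lower bounds
        y_aug = y.copy()
        y_aug[~observed] = truncnorm.rvs(a, np.inf, loc=mu, scale=sd, random_state=rng)
        # 2) mu | y_aug, sig2 ~ N(ybar, sig2/n)      (flat prior on mu)
        mu = rng.normal(y_aug.mean(), np.sqrt(sig2 / n))
        # 3) sig2 | y_aug, mu ~ Inv-Gamma(n/2, SS/2) (Jeffreys-type prior)
        sig2 = invgamma.rvs(n / 2.0, scale=0.5 * np.sum((y_aug - mu) ** 2), random_state=rng)
        if it >= 500:
            keep_mu.append(mu)
    print("posterior mean of mu:", np.mean(keep_mu))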
Analysis of longitudinal marginal structural models.
Bryan, Jenny; Yu, Zhuo; Van Der Laan, Mark J
2004-07-01
In this article we construct and study estimators of the causal effect of a time-dependent treatment on survival in longitudinal studies. We employ a particular marginal structural model (MSM), proposed by Robins (2000), and follow a general methodology for constructing estimating functions in censored data models. The inverse probability of treatment weighted (IPTW) estimator of Robins et al. (2000) is used as an initial estimator and forms the basis for an improved, one-step estimator that is consistent and asymptotically linear when the treatment mechanism is consistently estimated. We extend these methods to handle informative censoring. The proposed methodology is employed to estimate the causal effect of exercise on mortality in a longitudinal study of seniors in Sonoma County. A simulation study demonstrates the bias of naive estimators in the presence of time-dependent confounders and also shows the efficiency gain of the IPTW estimator, even in the absence of such confounding. The efficiency gain of the improved, one-step estimator is demonstrated through simulation.
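A minimal sketch of IPTW weighting is shown below for a single point treatment (not the longitudinal, informatively censored setting of the article, and not the improved one-step estimator): treatment probabilities are estimated by logistic regression and used to form stabilized weights. Variable names and effect sizes are invented.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(4)
    n = 5000
    frailty  = rng.normal(0, 1, n)                                    # confounder
    exercise = rng.binomial(1, 1 / (1 + np.exp(-(0.5 - 1.0 * frailty))))
    p_death  = 1 / (1 + np.exp(-(-1.0 + 1.2 * frailty - 0.5 * exercise)))
    death    = rng.binomial(1, p_death)

    X = frailty.reshape(-1, 1)
    p_treat = LogisticRegression().fit(X, exercise).predict_proba(X)[:, 1]
    p_marg  = exercise.mean()
    # stabilized inverse-probability-of-treatment weights
    w = np.where(exercise == 1, p_marg / p_treat, (1 - p_marg) / (1 - p_treat))

    def weighted_risk(group):
        m = exercise == group
        return np.average(death[m], weights=w[m])

    print("naive risk difference:", death[exercise == 1].mean() - death[exercise == 0].mean())
    print("IPTW risk difference: ", weighted_risk(1) - weighted_risk(0))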
Rahbar, Mohammad H; Choi, Sangbum; Hong, Chuan; Zhu, Liang; Jeon, Sangchoon; Gardiner, Joseph C
2018-01-01
We propose a nonparametric shrinkage estimator for the median survival times from several independent samples of right-censored data, which combines the samples and hypothesis information to improve efficiency. We compare the efficiency of the proposed shrinkage estimation procedure with that of the unrestricted and combined estimators through extensive simulation studies. Our results indicate that the performance of these estimators depends on the strength of homogeneity of the medians. When homogeneity holds, the combined estimator is the most efficient; however, it becomes inconsistent when homogeneity fails. The proposed shrinkage estimator, on the other hand, remains efficient: its efficiency decreases as the survival medians deviate from equality, but it is expected to remain at least as efficient as the unrestricted estimator. Our simulation studies also indicate that the proposed shrinkage estimator is robust to moderate levels of censoring. We demonstrate the application of these methods by estimating the median time for trauma patients to receive red blood cells in the Prospective Observational Multi-center Major Trauma Transfusion (PROMMTT) study.
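The exact form of the shrinkage estimator is given in the paper; the sketch below only illustrates the generic idea of pulling sample-specific Kaplan-Meier medians toward the pooled median by an amount tied to a homogeneity (log-rank) statistic. The particular weighting rule and the simulated data are illustrative assumptions, not the authors' estimator; lifelines is assumed to be available.

    import numpy as np
    from lifelines import KaplanMeierFitter
    from lifelines.statistics import multivariate_logrank_test

    rng = np.random.default_rng(5)
    samples = []
    for k, scale in enumerate([10.0, 11.0, 12.0]):         # three samples, similar medians
        t = rng.exponential(scale, 150)
        c = rng.exponential(30.0, 150)
        samples.append((np.minimum(t, c), (t <= c).astype(int), k))

    def km_median(T, E):
        return KaplanMeierFitter().fit(T, event_observed=E).median_survival_time_

    medians = np.array([km_median(T, E) for T, E, _ in samples])
    T_all = np.concatenate([s[0] for s in samples])
    E_all = np.concatenate([s[1] for s in samples])
    G_all = np.concatenate([np.full(len(s[0]), s[2]) for s in samples])
    pooled = km_median(T_all, E_all)

    # shrink toward the pooled median; shrink less as the homogeneity (log-rank)
    # statistic grows relative to its degrees of freedom (an ad hoc illustrative rule)
    stat = multivariate_logrank_test(T_all, G_all, E_all).test_statistic
    lam = min(1.0, (len(samples) - 1) / max(stat, 1e-8))
    shrunk = lam * pooled + (1 - lam) * medians
    print("KM medians:    ", medians)
    print("shrunk medians:", shrunk)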
Balakrishnan, Narayanaswamy; Pal, Suvra
2016-08-01
Recently, a flexible cure rate survival model has been developed by assuming the number of competing causes of the event of interest to follow the Conway-Maxwell-Poisson distribution. This model includes some of the well-known cure rate models discussed in the literature as special cases. Data obtained from cancer clinical trials are often right censored, and the expectation-maximization (EM) algorithm can be used in this case to efficiently estimate the model parameters based on right-censored data. In this paper, we consider the competing-cause scenario and, assuming the time-to-event to follow the Weibull distribution, derive the necessary steps of the EM algorithm for estimating the parameters of different cure rate survival models. The standard errors of the maximum likelihood estimates are obtained by inverting the observed information matrix. The method of inference developed here is examined by means of an extensive Monte Carlo simulation study. Finally, we illustrate the proposed methodology with real data on cancer recurrence. © The Author(s) 2013.
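The EM steps are easiest to see in the simplest special case contained in this family, the two-component (Berkson-Gage) mixture cure model with an exponential latency distribution, which keeps the M-step in closed form; the paper itself works with Conway-Maxwell-Poisson competing causes and Weibull latency. The sketch below uses simulated data.

    import numpy as np

    rng = np.random.default_rng(6)
    n = 2000
    cured = rng.binomial(1, 0.3, n)                    # true cure fraction 30%
    t_evt = rng.exponential(5.0, n)                    # latency for the uncured
    t_cen = rng.exponential(8.0, n)
    time  = np.where(cured == 1, t_cen, np.minimum(t_evt, t_cen))
    event = ((cured == 0) & (t_evt <= t_cen)).astype(int)

    pi, lam = 0.5, 0.1                                 # initial cure fraction and event rate
    for _ in range(200):
        # E-step: posterior probability of being uncured;
        # subjects with an observed event are uncured with probability 1
        S_u = np.exp(-lam * time)
        w = np.where(event == 1, 1.0, (1 - pi) * S_u / (pi + (1 - pi) * S_u))
        # M-step (closed form for exponential latency)
        pi  = 1.0 - w.mean()
        lam = event.sum() / np.sum(w * time)
    print("estimated cure fraction:", pi, " event rate among uncured:", lam)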