NASA Astrophysics Data System (ADS)
Abbasi, R. U.; Abu-Zayyad, T.; Amann, J. F.; Archbold, G.; Atkins, R.; Bellido, J. A.; Belov, K.; Belz, J. W.; Ben-Zvi, S. Y.; Bergman, D. R.; Boyer, J. H.; Burt, G. W.; Cao, Z.; Clay, R. W.; Connolly, B. M.; Dawson, B. R.; Deng, W.; Farrar, G. R.; Fedorova, Y.; Findlay, J.; Finley, C. B.; Hanlon, W. F.; Hoffman, C. M.; Holzscheiter, M. H.; Hughes, G. A.; Hüntemeyer, P.; Jui, C. C. H.; Kim, K.; Kirn, M. A.; Knapp, B. C.; Loh, E. C.; Maestas, M. M.; Manago, N.; Mannel, E. J.; Marek, L. J.; Martens, K.; Matthews, J. A. J.; Matthews, J. N.; O'Neill, A.; Painter, C. A.; Perera, L.; Reil, K.; Riehle, R.; Roberts, M. D.; Sasaki, M.; Schnetzer, S. R.; Seman, M.; Simpson, K. M.; Sinnis, G.; Smith, J. D.; Snow, R.; Sokolsky, P.; Song, C.; Springer, R. W.; Stokes, B. T.; Thomas, J. R.; Thomas, S. B.; Thomson, G. B.; Tupa, D.; Westerhoff, S.; Wiencke, L. R.; Zech, A.
2005-04-01
We present the results of a search for cosmic-ray point sources at energies in excess of 4.0×10^19 eV in the combined data sets recorded by the Akeno Giant Air Shower Array and High Resolution Fly's Eye stereo experiments. The analysis is based on a maximum likelihood ratio test using the probability density function for each event rather than requiring an a priori choice of a fixed angular bin size. No statistically significant clustering of events consistent with a point source is found.
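As a generic illustration of such an unbinned likelihood-ratio test (a minimal sketch assuming Gaussian per-event point-spread functions and a flat background; not the collaborations' actual implementation, and all names here are illustrative):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def point_source_test(psi, sigma, omega):
    """Unbinned likelihood-ratio test for a point source at a fixed position.

    psi   : angular distance of each event from the candidate source (rad)
    sigma : per-event angular resolution (rad), Gaussian PSF assumed
    omega : solid angle of the search region (sr), giving a flat background pdf
    """
    n = len(psi)
    signal = np.exp(-0.5 * (psi / sigma) ** 2) / (2 * np.pi * sigma**2)
    background = 1.0 / omega

    def neg_ll(ns):  # mixture likelihood with ns signal events out of n
        return -np.sum(np.log(ns / n * signal + (1.0 - ns / n) * background))

    res = minimize_scalar(neg_ll, bounds=(0.0, n), method="bounded")
    ts = 2.0 * (neg_ll(0.0) - res.fun)   # test statistic against background-only
    return res.x, ts                     # fitted signal events, significance proxy
```

Scanning this statistic over a grid of candidate positions and calibrating it on scrambled data sets yields the chance probability of any observed clustering.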
Cosmic shear measurement with maximum likelihood and maximum a posteriori inference
NASA Astrophysics Data System (ADS)
Hall, Alex; Taylor, Andy
2017-06-01
We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.
NASA Technical Reports Server (NTRS)
Ganga, Ken; Page, Lyman; Cheng, Edward; Meyer, Stephan
1994-01-01
In many cosmological models, the large angular scale anisotropy in the cosmic microwave background is parameterized by a spectral index, n, and a quadrupolar amplitude, Q. For a Harrison-Peebles-Zel'dovich spectrum, n = 1. Using data from the Far Infrared Survey (FIRS) and a new statistical measure, a contour plot of the likelihood for cosmological models with -1 < n < 3 and 0 ≤ Q ≤ 50 μK is obtained. Depending upon the details of the analysis, the maximum likelihood occurs at n between 0.8 and 1.4 and Q between 18 and 21 μK. Regardless of Q, the likelihood is always less than half its maximum for n < -0.4 and for n > 2.2, as it is for Q < 8 μK and Q > 44 μK.
NASA Technical Reports Server (NTRS)
Kogut, A.; Banday, A. J.; Bennett, C. L.; Hinshaw, G.; Lubin, P. M.; Smoot, G. F.
1995-01-01
We use the two-point correlation function of the extrema points (peaks and valleys) in the Cosmic Background Explorer (COBE) Differential Microwave Radiometers (DMR) 2 year sky maps as a test for non-Gaussian temperature distribution in the cosmic microwave background anisotropy. A maximum-likelihood analysis compares the DMR data to n = 1 toy models whose random-phase spherical harmonic components a_lm are drawn from either Gaussian, chi-square, or log-normal parent populations. The likelihood of the 53 GHz (A+B)/2 data is greatest for the exact Gaussian model. There is less than 10% chance that the non-Gaussian models tested describe the DMR data, limited primarily by type II errors in the statistical inference. The extrema correlation function is a stronger test for this class of non-Gaussian models than topological statistics such as the genus.
Stamatakis, Alexandros
2006-11-01
RAxML-VI-HPC (randomized axelerated maximum likelihood for high performance computing) is a sequential and parallel program for inference of large phylogenies with maximum likelihood (ML). Low-level technical optimizations, a modification of the search algorithm, and the use of the GTR+CAT approximation as a replacement for GTR+Gamma yield a program that is between 2.7 and 52 times faster than the previous version of RAxML. A large-scale performance comparison with GARLI, PHYML, IQPNNI, and MrBayes on real data containing 1,000 to 6,722 taxa shows that, on datasets up to 2,500 taxa, RAxML requires at least 5.6 times less main memory than the best competing program (GARLI) and yields better trees in similar times. On datasets of ≥4,000 taxa it also runs 2-3 times faster than GARLI. RAxML has been parallelized with MPI to conduct parallel multiple bootstraps and inferences on distinct starting trees. The program has been used to compute ML trees on two of the largest alignments to date, containing 25,057 taxa (1,463 bp) and 2,182 taxa (51,089 bp), respectively. icwww.epfl.ch/~stamatak
NASA Astrophysics Data System (ADS)
Ackermann, M.; Ajello, M.; Atwood, W. B.; Baldini, L.; Ballet, J.; Barbiellini, G.; Bastieri, D.; Bechtol, K.; Bellazzini, R.; Berenji, B.; Blandford, R. D.; Bloom, E. D.; Bonamente, E.; Borgland, A. W.; Brandt, T. J.; Bregeon, J.; Brigida, M.; Bruel, P.; Buehler, R.; Buson, S.; Caliandro, G. A.; Cameron, R. A.; Caraveo, P. A.; Cavazzuti, E.; Cecchi, C.; Charles, E.; Chekhtman, A.; Chiang, J.; Ciprini, S.; Claus, R.; Cohen-Tanugi, J.; Conrad, J.; Cutini, S.; de Angelis, A.; de Palma, F.; Dermer, C. D.; Digel, S. W.; Silva, E. do Couto e.; Drell, P. S.; Drlica-Wagner, A.; Falletti, L.; Favuzzi, C.; Fegan, S. J.; Ferrara, E. C.; Focke, W. B.; Fortin, P.; Fukazawa, Y.; Funk, S.; Fusco, P.; Gaggero, D.; Gargano, F.; Germani, S.; Giglietto, N.; Giordano, F.; Giroletti, M.; Glanzman, T.; Godfrey, G.; Grove, J. E.; Guiriec, S.; Gustafsson, M.; Hadasch, D.; Hanabata, Y.; Harding, A. K.; Hayashida, M.; Hays, E.; Horan, D.; Hou, X.; Hughes, R. E.; Jóhannesson, G.; Johnson, A. S.; Johnson, R. P.; Kamae, T.; Katagiri, H.; Kataoka, J.; Knödlseder, J.; Kuss, M.; Lande, J.; Latronico, L.; Lee, S.-H.; Lemoine-Goumard, M.; Longo, F.; Loparco, F.; Lott, B.; Lovellette, M. N.; Lubrano, P.; Mazziotta, M. N.; McEnery, J. E.; Michelson, P. F.; Mitthumsiri, W.; Mizuno, T.; Monte, C.; Monzani, M. E.; Morselli, A.; Moskalenko, I. V.; Murgia, S.; Naumann-Godo, M.; Norris, J. P.; Nuss, E.; Ohsugi, T.; Okumura, A.; Omodei, N.; Orlando, E.; Ormes, J. F.; Paneque, D.; Panetta, J. H.; Parent, D.; Pesce-Rollins, M.; Pierbattista, M.; Piron, F.; Pivato, G.; Porter, T. A.; Rainò, S.; Rando, R.; Razzano, M.; Razzaque, S.; Reimer, A.; Reimer, O.; Sadrozinski, H. F.-W.; Sgrò, C.; Siskind, E. J.; Spandre, G.; Spinelli, P.; Strong, A. W.; Suson, D. J.; Takahashi, H.; Tanaka, T.; Thayer, J. G.; Thayer, J. B.; Thompson, D. J.; Tibaldo, L.; Tinivella, M.; Torres, D. F.; Tosti, G.; Troja, E.; Usher, T. L.; Vandenbroucke, J.; Vasileiou, V.; Vianello, G.; Vitale, V.; Waite, A. P.; Wang, P.; Winer, B. L.; Wood, K. S.; Wood, M.; Yang, Z.; Ziegler, M.; Zimmer, S.
2012-05-01
The γ-ray sky >100 MeV is dominated by the diffuse emissions from interactions of cosmic rays with the interstellar gas and radiation fields of the Milky Way. Observations of these diffuse emissions provide a tool to study cosmic-ray origin and propagation, and the interstellar medium. We present measurements from the first 21 months of the Fermi Large Area Telescope (Fermi-LAT) mission and compare with models of the diffuse γ-ray emission generated using the GALPROP code. The models are fitted to cosmic-ray data and incorporate astrophysical input for the distribution of cosmic-ray sources, interstellar gas, and radiation fields. To assess uncertainties associated with the astrophysical input, a grid of models is created by varying within observational limits the distribution of cosmic-ray sources, the size of the cosmic-ray confinement volume (halo), and the distribution of interstellar gas. An all-sky maximum-likelihood fit is used to determine the X_CO factor, the ratio between integrated CO-line intensity and H_2 column density, the fluxes and spectra of the γ-ray point sources from the first Fermi-LAT catalog, and the intensity and spectrum of the isotropic background including residual cosmic rays that were misclassified as γ-rays, all of which have some dependency on the assumed diffuse emission model. The models are compared on the basis of their maximum-likelihood ratios as well as spectra, longitude, and latitude profiles. We also provide residual maps for the data following subtraction of the diffuse emission models. The models are consistent with the data at high and intermediate latitudes but underpredict the data in the inner Galaxy for energies above a few GeV. Possible explanations for this discrepancy are discussed, including the contribution by undetected point-source populations and spectral variations of cosmic rays throughout the Galaxy. In the outer Galaxy, we find that the data prefer models with a flatter distribution of cosmic-ray sources, a larger cosmic-ray halo, or greater gas density than is usually assumed. Our results in the outer Galaxy are consistent with other Fermi-LAT studies of this region that used different analysis methods than employed in this paper.
Observation of the shadowing of cosmic rays by the Moon using a deep underground detector
NASA Astrophysics Data System (ADS)
Ambrosio, M.; Antolini, R.; Aramo, C.; Auriemma, G.; Baldini, A.; Barbarino, G. C.; Barish, B. C.; Battistoni, G.; Bellotti, R.; Bemporad, C.; Bernardini, P.; Bilokon, H.; Bisi, V.; Bloise, C.; Bower, C.; Bussino, S.; Cafagna, F.; Calicchio, M.; Campana, D.; Carboni, M.; Castellano, M.; Cecchini, S.; Cei, F.; Chiarella, V.; Choudhary, B. C.; Coutu, S.; de Benedictis, L.; de Cataldo, G.; Dekhissi, H.; de Marzo, C.; de Mitri, I.; Derkaoui, J.; de Vincenzi, M.; di Credico, A.; Erriquez, O.; Favuzzi, C.; Forti, C.; Fusco, P.; Giacomelli, G.; Giannini, G.; Giglietto, N.; Giorgini, M.; Grassi, M.; Gray, L.; Grillo, A.; Guarino, F.; Guarnaccia, P.; Gustavino, C.; Habig, A.; Hanson, K.; Heinz, R.; Huang, Y.; Iarocci, E.; Katsavounidis, E.; Kearns, E.; Kim, H.; Kyriazopoulou, S.; Lamanna, E.; Lane, C.; Levin, D. S.; Lipari, P.; Longley, N. P.; Longo, M. J.; Maaroufi, F.; Mancarella, G.; Mandrioli, G.; Manzoor, S.; Neri, A. Margiotta; Marini, A.; Martello, D.; Marzari-Chiesa, A.; Mazziotta, M. N.; Mazzotta, C.; Michael, D. G.; Mikheyev, S.; Miller, L.; Monacelli, P.; Montaruli, T.; Monteno, M.; Mufson, S.; Musser, J.; Nicoló, D.; Orth, C.; Osteria, G.; Ouchrif, M.; Palamara, O.; Patera, V.; Patrizii, L.; Pazzi, R.; Peck, C. W.; Petrera, S.; Pistilli, P.; Popa, V.; Pugliese, V.; Rainò, A.; Reynoldson, J.; Ronga, F.; Rubizzo, U.; Satriano, C.; Satta, L.; Scapparone, E.; Scholberg, K.; Sciubba, A.; Serra-Lugaresi, P.; Severi, M.; Sioli, M.; Sitta, M.; Spinelli, P.; Spinetti, M.; Spurio, M.; Steinberg, R.; Stone, J. L.; Sulak, L. R.; Surdo, A.; Tarlè, G.; Togo, V.; Ugolotti, D.; Vakili, M.; Walter, C. W.; Webb, R.
1999-01-01
Using data collected by the MACRO experiment during the years 1989-1996, we show evidence for the shadow of the Moon in the underground cosmic ray flux with a significance of 3.6σ. This detection of the shadowing effect is the first by an underground detector. A maximum-likelihood analysis is used to determine that the angular resolution of the apparatus is 0.9° ± 0.3°. These results demonstrate MACRO's capabilities as a muon telescope by confirming its absolute pointing ability and quantifying its angular resolution.
Encircling the dark: constraining dark energy via cosmic density in spheres
NASA Astrophysics Data System (ADS)
Codis, S.; Pichon, C.; Bernardeau, F.; Uhlemann, C.; Prunet, S.
2016-08-01
The recently published analytic probability density function for the mildly non-linear cosmic density field within spherical cells is used to build a simple but accurate maximum likelihood estimate for the redshift evolution of the variance of the density, which, as expected, is shown to have smaller relative error than the sample variance. This estimator provides a competitive probe for the equation of state of dark energy, reaching a few per cent accuracy on w_p and w_a for a Euclid-like survey. The corresponding likelihood function can take into account the configuration of the cells via their relative separations. A code to compute one-cell density probability density functions for an arbitrary initial power spectrum, top-hat smoothing and various spherical-collapse dynamics is made available online, so as to provide straightforward means of testing the effect of alternative dark energy models and initial power spectra on the low-redshift matter distribution.
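The paper's analytic PDF is not reproduced here; as a schematic of the same estimation pattern, a minimal sketch that substitutes a unit-mean lognormal density PDF (a common approximation, assumed here) for the spherical-collapse result:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_density_width(rho):
    """ML estimate of the width s of a unit-mean lognormal one-cell density PDF.

    Assumes ln(rho) ~ N(-s^2/2, s^2), so that <rho> = 1; rho holds the
    densities measured in spheres.
    """
    lnr = np.log(np.asarray(rho))

    def neg_ll(s):  # lognormal negative log-likelihood, constants dropped
        return np.sum(0.5 * (lnr + 0.5 * s**2) ** 2 / s**2 + np.log(s))

    return minimize_scalar(neg_ll, bounds=(1e-3, 5.0), method="bounded").x
```

Repeating such a fit in redshift bins traces the growth of the variance, which is what carries the dark-energy information.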
A Bootstrap Generalization of Modified Parallel Analysis for IRT Dimensionality Assessment
ERIC Educational Resources Information Center
Finch, Holmes; Monahan, Patrick
2008-01-01
This article introduces a bootstrap generalization to the Modified Parallel Analysis (MPA) method of test dimensionality assessment using factor analysis. This methodology, based on the use of Marginal Maximum Likelihood nonlinear factor analysis, provides for the calculation of a test statistic based on a parametric bootstrap using the MPA…
Parallel implementation of D-Phylo algorithm for maximum likelihood clusters.
Malik, Shamita; Sharma, Dolly; Khatri, Sunil Kumar
2017-03-01
This study describes a newly developed parallel algorithm for phylogenetic analysis of DNA sequences. The newly designed D-Phylo is a more advanced algorithm for phylogenetic analysis using the maximum likelihood approach. By exploiting the search capacity of k-means, D-Phylo avoids that method's main limitation of getting stuck at locally conserved motifs. The authors have tested the behaviour of D-Phylo on an Amazon Linux Amazon Machine Image (Hardware Virtual Machine) i2.4xlarge instance (six central processing units, 122 GiB memory, 8 × 800 solid-state drive Elastic Block Store volumes, high network performance) with up to 15 processors for several real-life datasets. Distributing the clusters evenly across all processors makes it possible to achieve a near-linear speed-up when the number of processors is large.
Exploiting Non-sequence Data in Dynamic Model Learning
2013-10-01
For our experiments here and in Section 3.5, we implement the proposed algorithms in MATLAB and use the maximum directed spanning tree solver ... embarrassingly parallelizable, whereas PM's maximum directed spanning tree procedure is harder to parallelize. In this experiment, our MATLAB ... some estimation problems, this approach is able to give unique and consistent estimates while the maximum-likelihood method gets entangled in
A Gateway for Phylogenetic Analysis Powered by Grid Computing Featuring GARLI 2.0
Bazinet, Adam L.; Zwickl, Derrick J.; Cummings, Michael P.
2014-01-01
We introduce molecularevolution.org, a publicly available gateway for high-throughput, maximum-likelihood phylogenetic analysis powered by grid computing. The gateway features a garli 2.0 web service that enables a user to quickly and easily submit thousands of maximum likelihood tree searches or bootstrap searches that are executed in parallel on distributed computing resources. The garli web service allows one to easily specify partitioned substitution models using a graphical interface, and it performs sophisticated post-processing of phylogenetic results. Although the garli web service has been used by the research community for over three years, here we formally announce the availability of the service, describe its capabilities, highlight new features and recent improvements, and provide details about how the grid system efficiently delivers high-quality phylogenetic results. [garli, gateway, grid computing, maximum likelihood, molecular evolution portal, phylogenetics, web service.] PMID:24789072
Lee, Jae H.; Yao, Yushu; Shrestha, Uttam; Gullberg, Grant T.; Seo, Youngho
2014-01-01
The primary goal of this project is to implement the iterative statistical image reconstruction algorithm, in this case maximum-likelihood expectation maximization (MLEM) as used for dynamic cardiac single photon emission computed tomography, on Spark/GraphX. This involves porting the algorithm to run on large-scale parallel computing systems. Spark is an easy-to-program software platform that can handle large amounts of data in parallel. GraphX is a graph analytic system running on top of Spark to handle graph and sparse linear algebra operations in parallel. The main advantage of implementing the MLEM algorithm in Spark/GraphX is that it allows users to parallelize such computation without any expertise in parallel computing or prior knowledge in computer science. In this paper we demonstrate a successful implementation of MLEM in Spark/GraphX and present the performance gains, with the goal of eventually making it usable in a clinical setting. PMID:27081299
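For reference, the MLEM update itself is compact; a minimal dense-matrix sketch (the paper's contribution is the distributed Spark/GraphX execution, which this illustration does not attempt):

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """MLEM reconstruction for counts y ~ Poisson(A @ x), with A >= 0.

    A : (n_detector_bins, n_voxels) system matrix
    y : measured counts per detector bin
    """
    x = np.ones(A.shape[1])                  # flat initial image
    sensitivity = A.sum(axis=0)              # per-voxel detection sensitivity
    for _ in range(n_iter):
        forward = A @ x                      # expected counts for current image
        ratio = y / np.clip(forward, 1e-12, None)
        x *= (A.T @ ratio) / np.clip(sensitivity, 1e-12, None)
    return x
```

The update is a voxel-wise multiplicative correction, which is why it maps naturally onto bulk matrix-vector (or graph message-passing) primitives.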
Digital tomosynthesis mammography using a parallel maximum-likelihood reconstruction method
NASA Astrophysics Data System (ADS)
Wu, Tao; Zhang, Juemin; Moore, Richard; Rafferty, Elizabeth; Kopans, Daniel; Meleis, Waleed; Kaeli, David
2004-05-01
A parallel reconstruction method, based on an iterative maximum likelihood (ML) algorithm, is developed to provide fast reconstruction for digital tomosynthesis mammography. Tomosynthesis mammography acquires 11 low-dose projections of a breast by moving an x-ray tube over a 50° angular range. In parallel reconstruction, each projection is divided into multiple segments along the chest-to-nipple direction. Using the 11 projections, segments located at the same distance from the chest wall are combined to compute a partial reconstruction of the total breast volume. The shape of the partial reconstruction forms a thin slab, angled toward the x-ray source at a projection angle of 0°. The reconstruction of the total breast volume is obtained by merging the partial reconstructions. The overlap region between neighboring partial reconstructions and neighboring projection segments is utilized to compensate for the incomplete data at the boundary locations present in the partial reconstructions. A serial execution of the reconstruction is compared to a parallel implementation, using clinical data. The serial code was run on a PC with a single Pentium IV 2.2 GHz CPU. The parallel implementation was developed using MPI and run on a 64-node Linux cluster using 800 MHz Itanium CPUs. The serial reconstruction for a medium-sized breast (5 cm thickness, 11 cm chest-to-nipple distance) takes 115 minutes, while a parallel implementation takes only 3.5 minutes. The reconstruction time for a larger breast using a serial implementation takes 187 minutes, while a parallel implementation takes 6.5 minutes. No significant differences were observed between the reconstructions produced by the serial and parallel implementations.
Baxter, E. J.; Keisler, R.; Dodelson, S.; ...
2015-06-22
Clusters of galaxies are expected to gravitationally lens the cosmic microwave background (CMB) and thereby generate a distinct signal in the CMB on arcminute scales. Measurements of this effect can be used to constrain the masses of galaxy clusters with CMB data alone. Here we present a measurement of lensing of the CMB by galaxy clusters using data from the South Pole Telescope (SPT). We also develop a maximum likelihood approach to extract the CMB cluster lensing signal and validate the method on mock data. We quantify the effects on our analysis of several potential sources of systematic error and find that they generally act to reduce the best-fit cluster mass. It is estimated that this bias to lower cluster mass is roughly 0.85σ in units of the statistical error bar, although this estimate should be viewed as an upper limit. Furthermore, we apply our maximum likelihood technique to 513 clusters selected via their Sunyaev-Zel'dovich (SZ) signatures in SPT data, and rule out the null hypothesis of no lensing at 3.1σ. The lensing-derived mass estimate for the full cluster sample is consistent with that inferred from the SZ flux: M_200,lens = 0.83^{+0.38}_{-0.37} M_200,SZ (68% C.L., statistical error only).
The effect of cosmic-ray acceleration on supernova blast wave dynamics
NASA Astrophysics Data System (ADS)
Pais, M.; Pfrommer, C.; Ehlert, K.; Pakmor, R.
2018-05-01
Non-relativistic shocks accelerate ions to highly relativistic energies provided that the orientation of the magnetic field is closely aligned with the shock normal (quasi-parallel shock configuration). In contrast, quasi-perpendicular shocks do not efficiently accelerate ions. We model this obliquity-dependent acceleration process in a spherically expanding blast wave setup with the moving-mesh code AREPO for different magnetic field morphologies, ranging from homogeneous to turbulent configurations. A Sedov-Taylor explosion in a homogeneous magnetic field generates an oblate ellipsoidal shock surface due to the slower propagating blast wave in the direction of the magnetic field. This is because of the efficient cosmic ray (CR) production in the quasi-parallel polar cap regions, which softens the equation of state and increases the compressibility of the post-shock gas. We find that the solution remains self-similar because the ellipticity of the propagating blast wave stays constant in time. This enables us to derive an effective ratio of specific heats for a composite of thermal gas and CRs as a function of the maximum acceleration efficiency. We finally discuss the behavior of supernova remnants expanding into a turbulent magnetic field with varying coherence lengths. For a maximum CR acceleration efficiency of about 15 per cent at quasi-parallel shocks (as suggested by kinetic plasma simulations), we find an average efficiency of about 5 per cent, independent of the assumed magnetic coherence length.
Simulation and study of small numbers of random events
NASA Technical Reports Server (NTRS)
Shelton, R. D.
1986-01-01
Random events were simulated by computer and subjected to various statistical methods to extract important parameters. Various forms of curve fitting were explored, such as least squares, least distance from a line, and maximum likelihood. Problems considered were dead time, exponential decay, and spectrum extraction from cosmic ray data using binned data and data from individual events. Computer programs, mostly of an iterative nature, were developed to do these simulations and extractions and are partially listed as appendices. The mathematical basis for the computer programs is given.
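As a small worked example in the same spirit (a sketch, not the report's code): for exponential decay, the ML rate estimate has the closed form 1/mean of the observed times.

```python
import numpy as np

rng = np.random.default_rng(0)

# simulate 500 decay times with true rate lam = 2.0 (arbitrary units)
lam = 2.0
t = rng.exponential(1.0 / lam, size=500)

# maximizing sum(log(lam) - lam * t_i) over lam gives lam_hat = 1 / mean(t)
lam_hat = 1.0 / t.mean()
lam_err = lam_hat / np.sqrt(t.size)   # 1-sigma from the Fisher information
print(f"lam_hat = {lam_hat:.3f} +/- {lam_err:.3f}")
```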
NASA Technical Reports Server (NTRS)
Howell, L. W.
2001-01-01
A simple power law model consisting of a single spectral index α_1 is believed to be an adequate description of the galactic cosmic-ray (GCR) proton flux at energies below 10^13 eV, with a transition at the knee energy E_k to a steeper spectral index α_2 > α_1 above E_k. The maximum likelihood procedure is developed for estimating these three spectral parameters of the broken power law energy spectrum from simulated detector responses. These estimates and their surrounding statistical uncertainty are being used to derive the requirements in energy resolution, calorimeter size, and energy response of a proposed sampling calorimeter for the Advanced Cosmic-ray Composition Experiment for the Space Station (ACCESS). This study thereby permits instrument developers to make important trade studies in design parameters as a function of the science objectives, which is particularly important for space-based detectors where physical parameters, such as dimension and weight, impose rigorous practical limits to the design envelope.
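A minimal sketch of such a three-parameter fit: an illustrative likelihood for a continuous broken power law, without the detector-response machinery the report builds in (the starting values, including a knee near 3 PeV, are assumptions for the example):

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_like(params, E, Emin):
    """Negative log-likelihood of a continuous broken power law on [Emin, inf)."""
    a1, a2, lnEk = params          # indices below/above the knee, log knee energy
    Ek = np.exp(lnEk)
    if a1 <= 1.0 or a2 <= 1.0 or Ek <= Emin:
        return np.inf
    # normalization of E^-a1 below Ek, matched continuously to E^-a2 above it
    norm = ((Emin**(1 - a1) - Ek**(1 - a1)) / (a1 - 1) + Ek**(1 - a1) / (a2 - 1))
    lnp = np.where(E < Ek, -a1 * np.log(E),
                   (a2 - a1) * np.log(Ek) - a2 * np.log(E))
    return -(lnp.sum() - E.size * np.log(norm))

# usage, with E an array of measured energies (GeV) and Emin the threshold:
# fit = minimize(neg_log_like, x0=(2.7, 3.1, np.log(3.0e6)),
#                args=(E, Emin), method="Nelder-Mead")
```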
A likelihood method for measuring the ultrahigh energy cosmic ray composition
NASA Astrophysics Data System (ADS)
High Resolution Fly's Eye Collaboration; Abu-Zayyad, T.; Amman, J. F.; Archbold, G. C.; Belov, K.; Blake, S. A.; Belz, J. W.; Benzvi, S.; Bergman, D. R.; Boyer, J. H.; Burt, G. W.; Cao, Z.; Connolly, B. M.; Deng, W.; Fedorova, Y.; Findlay, J.; Finley, C. B.; Hanlon, W. F.; Hoffman, C. M.; Holzscheiter, M. H.; Hughes, G. A.; Hüntemeyer, P.; Jui, C. C. H.; Kim, K.; Kirn, M. A.; Knapp, B. C.; Loh, E. C.; Maestas, M. M.; Manago, N.; Mannel, E. J.; Marek, L. J.; Martens, K.; Matthews, J. A. J.; Matthews, J. N.; O'Neill, A.; Painter, C. A.; Perera, L.; Reil, K.; Riehle, R.; Roberts, M.; Rodriguez, D.; Sasaki, M.; Schnetzer, S.; Seman, M.; Sinnis, G.; Smith, J. D.; Snow, R.; Sokolsky, P.; Springer, R. W.; Stokes, B. T.; Thomas, J. R.; Thomas, S. B.; Thomson, G. B.; Tupa, D.; Westerhoff, S.; Wiencke, L. R.; Zech, A.
2006-08-01
Air fluorescence detectors traditionally determine the dominant chemical composition of the ultrahigh energy cosmic ray flux by comparing the average slant depth of shower maximum, X_max, as a function of energy to the slant depths expected for various hypothesized primaries. In this paper, we present a method to make a direct measurement of the expected mean number of protons and iron by comparing the shapes of the expected X_max distributions to the distribution for data. The advantages of this method include the use of the full distribution and the ability to calculate a flux for various cosmic ray compositions. The same method can be expanded to marginalize uncertainties due to the choice of spectra, hadronic models, and atmospheric parameters. We demonstrate the technique with independent simulated data samples from a parent sample of protons and iron. We accurately predict the number of protons and iron in the parent sample and show that the uncertainties are meaningful.
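A minimal sketch of a two-component fit of this kind (a generic binned Poisson mixture under assumed simulation templates; not the collaboration's code):

```python
import numpy as np
from scipy.optimize import minimize

def fit_composition(data_counts, proton_pdf, iron_pdf):
    """Fit the expected numbers of protons and iron to a binned Xmax histogram.

    data_counts          : observed counts per Xmax bin
    proton_pdf, iron_pdf : simulated templates, each normalized to unit sum
    """
    def neg_ll(n):
        mu = np.clip(n[0] * proton_pdf + n[1] * iron_pdf, 1e-12, None)
        return np.sum(mu - data_counts * np.log(mu))  # Poisson NLL up to a constant

    n0 = np.full(2, data_counts.sum() / 2.0)
    res = minimize(neg_ll, n0, method="L-BFGS-B", bounds=[(0.0, None)] * 2)
    return res.x                                      # (N_protons, N_iron)
```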
NASA Technical Reports Server (NTRS)
Howell, L. W.
2001-01-01
A simple power law model consisting of a single spectral index α_1 is believed to be an adequate description of the galactic cosmic-ray (GCR) proton flux at energies below 10^13 eV. Two procedures for estimating α_1, the method of moments and maximum likelihood (ML), are developed and their statistical performance compared. It is concluded that the ML procedure attains the most desirable statistical properties and is hence the recommended statistical estimation procedure for estimating α_1. The ML procedure is then generalized for application to a set of real cosmic-ray data and thereby makes this approach applicable to existing cosmic-ray data sets. Several other important results, such as the relationship between collecting power and detector energy resolution, as well as inclusion of a non-Gaussian detector response function, are presented. These results have many practical benefits in the design phase of a cosmic-ray detector as they permit instrument developers to make important trade studies in design parameters as a function of one of the science objectives. This is particularly important for space-based detectors where physical parameters, such as dimension and weight, impose rigorous practical limits to the design envelope.
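For an ideal detector the ML estimate of a single index has a closed form; a self-contained sketch (illustrative only, without the detector-response effects the report analyzes):

```python
import numpy as np

rng = np.random.default_rng(1)

alpha, Emin, N = 2.7, 1.0, 10_000
# inverse-CDF sampling of a pure power law dN/dE ~ E^-alpha above Emin
E = Emin * (1.0 - rng.uniform(size=N)) ** (-1.0 / (alpha - 1.0))

# setting the derivative of the log-likelihood to zero gives the closed form:
alpha_hat = 1.0 + N / np.log(E / Emin).sum()
alpha_err = (alpha_hat - 1.0) / np.sqrt(N)   # Cramer-Rao 1-sigma bound
print(f"alpha_hat = {alpha_hat:.3f} +/- {alpha_err:.3f}")
```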
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc
1998-01-01
The Viterbi algorithm is a very simple and efficient method of implementing maximum likelihood decoding. However, if we take advantage of the structural properties in a trellis section, other efficient trellis-based decoding algorithms can be devised. Recently, an efficient trellis-based recursive maximum likelihood decoding (RMLD) algorithm for linear block codes has been proposed. This algorithm is more efficient than the conventional Viterbi algorithm in both computation and hardware requirements. Most importantly, the implementation of this algorithm does not require the construction of the entire code trellis; only some special one-section trellises of relatively small state and branch complexity are needed for constructing path (or branch) metric tables recursively. At the end, there is only one table, which contains the most likely codeword and its metric for a given received sequence r = (r_1, r_2, ..., r_n). This algorithm basically uses the divide-and-conquer strategy. Furthermore, it allows parallel/pipeline processing of received sequences to speed up decoding.
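For contrast with RMLD, here is a minimal hard-decision Viterbi decoder for the standard rate-1/2, constraint-length-3 (7,5) convolutional code (an illustrative sketch; the memo itself concerns block-code trellises):

```python
from itertools import product

def viterbi_decode(received, n_bits):
    """Hard-decision Viterbi decoding of the (7,5) convolutional code.

    received : flat sequence of 2*n_bits received code bits (0/1)
    """
    n_states = 4                                   # state = (u[t-1], u[t-2])
    INF = float("inf")
    path_metric = [0.0] + [INF] * (n_states - 1)   # start in the all-zero state
    paths = [[] for _ in range(n_states)]

    for t in range(n_bits):
        r = received[2 * t: 2 * t + 2]
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s, u in product(range(n_states), (0, 1)):
            if path_metric[s] == INF:
                continue
            u1, u2 = s >> 1, s & 1                 # previous two input bits
            out = (u ^ u1 ^ u2, u ^ u2)            # generators 7 (111) and 5 (101)
            metric = path_metric[s] + (out[0] != r[0]) + (out[1] != r[1])
            nxt = (u << 1) | u1                    # shift the register forward
            if metric < new_metric[nxt]:
                new_metric[nxt] = metric
                new_paths[nxt] = paths[s] + [u]
        path_metric, paths = new_metric, new_paths

    best = min(range(n_states), key=lambda s: path_metric[s])
    return paths[best]

# example: viterbi_decode([1, 1, 1, 0, 0, 0, 0, 1], 4) recovers [1, 0, 1, 1]
```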
Lehmann, A; Scheffler, Ch; Hermanussen, M
2010-02-01
Recent progress in modelling individual growth has been achieved by combining principal component analysis and the maximum likelihood principle. This combination models growth even in incomplete sets of data and in data obtained at irregular intervals. We re-analysed late 18th century longitudinal growth of German boys from the boarding school Carlsschule in Stuttgart. The boys, aged 6-23 years, were measured at irregular 3-12 monthly intervals during the period 1771-1793. At the age of 18 years, mean height was 1652 mm, but height variation was large. The shortest boy reached 1474 mm, the tallest 1826 mm. Measured height closely paralleled modelled height, with a mean difference of 4 mm (SD 7 mm). Seasonal height variation was found: low growth rates occurred in spring and high growth rates in summer and autumn. The present study demonstrates that combining principal component analysis and the maximum likelihood principle also enables growth modelling in historical height data.
Statistical reconstruction for cosmic ray muon tomography.
Schultz, Larry J; Blanpied, Gary S; Borozdin, Konstantin N; Fraser, Andrew M; Hengartner, Nicolas W; Klimenko, Alexei V; Morris, Christopher L; Orum, Chris; Sossong, Michael J
2007-08-01
Highly penetrating cosmic ray muons constantly shower the earth at a rate of about 1 muon per cm² per minute. We have developed a technique which exploits the multiple Coulomb scattering of these particles to perform nondestructive inspection without the use of artificial radiation. In prior work [1]-[3], we have described heuristic methods for processing muon data to create reconstructed images. In this paper, we present a maximum likelihood/expectation maximization tomographic reconstruction algorithm designed for the technique. This algorithm borrows much from techniques used in medical imaging, particularly emission tomography, but the statistics of muon scattering dictate differences. We describe the statistical model for multiple scattering, derive the reconstruction algorithm, and present simulated examples. We also propose methods to improve the robustness of the algorithm to experimental errors and events departing from the statistical model.
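The statistical model in question rests on the near-Gaussian core of multiple Coulomb scattering; the standard (PDG Highland) width, quoted here for orientation rather than from the paper, is

```latex
% RMS projected scattering angle for a particle of momentum p and speed beta*c
% traversing thickness L of material with radiation length X0:
\theta_0 \simeq \frac{13.6\ \mathrm{MeV}}{\beta c\, p}
  \sqrt{\frac{L}{X_0}}
  \left[ 1 + 0.038 \ln\!\left(\frac{L}{X_0}\right) \right]
```

Because θ_0 depends on the radiation length X_0, the scattering density fitted per voxel is what discriminates materials.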
Theoretical studies of the solar atmosphere and interstellar pickup ions
NASA Technical Reports Server (NTRS)
1994-01-01
Solar atmosphere research activities are summarized. Specific topics addressed include: (1) coronal mass ejections and related phenomena; (2) parametric instabilities of Alfven waves; (3) pickup ions in the solar wind; and (4) cosmic rays in the outer heliosphere. Also included is a list of publications covering the following topics: catastrophic evolution of a force-free flux rope; maximum energy release in flux-rope models of eruptive flares; sheet approximations in models of eruptive flares; material ejection, motions of loops and ribbons of two-ribbon flares; dispersion relations for parametric instabilities of parallel-propagating Alfven waves; parametric instabilities of parallel-propagating Alfven waves; beat, modulation, and decay instabilities of a circularly-polarized Alfven wave; effects of time-dependent photoionization on interstellar pickup helium; observation of waves generated by the solar wind pickup of interstellar hydrogen ions; ion thermalization and wave excitation downstream of the quasi-perpendicular bowshock; ion cyclotron instability and the inverse correlation between proton anisotropy and proton beta; and effects of cosmic rays and interstellar gas on the dynamics of a wind.
The sun and heliosphere at solar maximum
NASA Technical Reports Server (NTRS)
Smith, E. J.; Marsden, R. G.; Balogh, A.; Gloeckler, G.; Geiss, J.; McComas, D. J.; McKibben, R. B.; MacDowall, R. J.; Lanzerotti, L. J.; Krupp, N.;
2003-01-01
Recent Ulysses observations from the Sun's equator to the poles reveal fundamental properties of the three-dimensional heliosphere at the maximum in solar activity. The heliospheric magnetic field originates from a magnetic dipole oriented nearly perpendicular to, instead of nearly parallel to, the Sun's rotation axis. Magnetic fields, solar wind, and energetic charged particles from low-latitude sources reach all latitudes, including the polar caps. The very fast high-latitude wind and polar coronal holes disappear and reappear together. Solar wind speed continues to be inversely correlated with coronal temperature. The cosmic ray flux is reduced symmetrically at all latitudes.
All-particle cosmic ray energy spectrum measured by the HAWC experiment from 10 to 500 TeV
NASA Astrophysics Data System (ADS)
Alfaro, R.; Alvarez, C.; Álvarez, J. D.; Arceo, R.; Arteaga-Velázquez, J. C.; Avila Rojas, D.; Ayala Solares, H. A.; Barber, A. S.; Becerril, A.; Belmont-Moreno, E.; BenZvi, S. Y.; Brisbois, C.; Caballero-Mora, K. S.; Capistrán, T.; Carramiñana, A.; Casanova, S.; Castillo, M.; Cotti, U.; Cotzomi, J.; Coutiño de León, S.; De León, C.; De la Fuente, E.; Diaz Hernandez, R.; Dichiara, S.; Dingus, B. L.; DuVernois, M. A.; Díaz-Vélez, J. C.; Ellsworth, R. W.; Enriquez-Rivera, O.; Fiorino, D. W.; Fleischhack, H.; Fraija, N.; García-González, J. A.; González Muñoz, A.; González, M. M.; Goodman, J. A.; Hampel-Arias, Z.; Harding, J. P.; Hernandez-Almada, A.; Hinton, J.; Hueyotl-Zahuantitla, F.; Hui, C. M.; Hüntemeyer, P.; Iriarte, A.; Jardin-Blicq, A.; Joshi, V.; Kaufmann, S.; Lara, A.; Lauer, R. J.; Lennarz, D.; León Vargas, H.; Linnemann, J. T.; Longinotti, A. L.; Luis Raya, G.; Luna-García, R.; López-Cámara, D.; López-Coto, R.; Malone, K.; Marinelli, S. S.; Martinez, O.; Martinez-Castellanos, I.; Martínez-Castro, J.; Martínez-Huerta, H.; Matthews, J. A.; Miranda-Romagnoli, P.; Moreno, E.; Mostafá, M.; Nellen, L.; Newbold, M.; Nisa, M. U.; Noriega-Papaqui, R.; Pelayo, R.; Pretz, J.; Pérez-Pérez, E. G.; Ren, Z.; Rho, C. D.; Rivière, C.; Rosa-González, D.; Rosenberg, M.; Ruiz-Velasco, E.; Salesa Greus, F.; Sandoval, A.; Schneider, M.; Schoorlemmer, H.; Sinnis, G.; Smith, A. J.; Springer, R. W.; Surajbali, P.; Taboada, I.; Tibolla, O.; Tollefson, K.; Torres, I.; Ukwatta, T. N.; Villaseñor, L.; Weisgarber, T.; Westerhoff, S.; Wood, J.; Yapici, T.; Zepeda, A.; Zhou, H.; HAWC Collaboration
2017-12-01
We report on the measurement of the all-particle cosmic ray energy spectrum with the High Altitude Water Cherenkov (HAWC) Observatory in the energy range 10 to 500 TeV. HAWC is a ground-based air-shower array deployed on the slopes of Volcan Sierra Negra in the state of Puebla, Mexico, and is sensitive to gamma rays and cosmic rays at TeV energies. The data used in this work were taken over 234 days between June 2016 and February 2017. The primary cosmic-ray energy is determined with a maximum likelihood approach using the particle density as a function of distance to the shower core. Introducing quality cuts to isolate events with shower cores landing on the array, the reconstructed energy distribution is unfolded iteratively. The measured all-particle spectrum is consistent with a broken power law with an index of -2.49 ± 0.01 prior to a break at (45.7 ± 0.1) TeV, followed by an index of -2.71 ± 0.01. The spectrum also represents a single measurement that spans the energy range between direct detection and ground-based experiments. As a verification of the detector response, the energy scale and angular resolution are validated by observation of the cosmic ray Moon shadow's dependence on energy.
Measuring the CMB Polarization at 94 GHz with the QUIET Pseudo-Cl Pipeline
NASA Astrophysics Data System (ADS)
Buder, Immanuel; QUIET Collaboration
2012-01-01
The Q/U Imaging ExperimenT (QUIET) aims to limit or detect cosmic microwave background (CMB) B-mode polarization from inflation. This talk is part of a 3-talk series on QUIET. The previous talk describes the QUIET science and instrument. QUIET has two parallel analysis pipelines which are part of an effort to validate the analysis and confirm the result. In this talk, I will describe the analysis methods of one of these: the pseudo-Cl pipeline. Calibration, noise modeling, filtering, and data-selection choices are made following a blind-analysis strategy. Central to this strategy is a suite of 30 null tests, each motivated by a possible instrumental problem or systematic effect. The systematic errors are also evaluated through full-season simulations in the blind stage of the analysis before the result is known. The CMB power spectra are calculated using a pseudo-Cl cross-correlation technique which suppresses contamination and makes the result insensitive to noise bias. QUIET will detect the first three peaks of the even-parity (E-mode) spectrum at high significance. I will show forecasts of the systematic errors for these results and for the upper limit on B-mode polarization. The very low systematic errors in these forecasts show that the technology is ready to be applied in a more sensitive next-generation experiment. The next and final talk in this series covers the other parallel analysis pipeline, based on maximum likelihood methods. This work was supported by NSF and the Department of Education.
NASA Astrophysics Data System (ADS)
Anchordoqui, Luis A.; Barger, Vernon; Weiler, Thomas J.
2018-03-01
We argue that if ultrahigh-energy (E ≳ 10^10 GeV) cosmic rays are heavy nuclei (as indicated by existing data), then the pointing of cosmic rays to their nearest extragalactic sources is expected for 10^10.6 ≲ E/GeV ≲ 10^11. This is because for a nucleus of charge Ze and baryon number A, the bending of the cosmic ray decreases as Z/E with rising energy, so that pointing to nearby sources becomes possible in this particular energy range. In addition, the maximum acceleration energy attainable at the sources grows linearly in Z, while the energy loss per distance traveled decreases with increasing A. Each of these two points tends to favor heavy nuclei at the highest energies. The traditional bi-dimensional analyses, which simultaneously reproduce Auger data on the spectrum and nuclear composition, may not be capable of incorporating the relative importance of all these phenomena. In this paper we propose a multi-dimensional reconstruction of the individual emission spectra (in E, direction, and cross-correlation with nearby putative sources) to study the hypothesis that primaries are heavy nuclei subject to GZK photo-disintegration, and to determine the nature of the extragalactic sources. More specifically, we propose to combine information on nuclear composition and arrival direction to associate a potential clustering of events with a 3-dimensional position in the sky. Both the source distance and maximum emission energy can be obtained through a multi-parameter likelihood analysis to accommodate the observed nuclear composition of each individual event in the cluster. We show that one can track the level of GZK interactions on a statistical basis by comparing the maximum energy at the source of each cluster. We also show that nucleus-emitting sources exhibit a cepa stratis (onion-layered) structure on Earth which could be peeled off by future space missions, such as POEMMA. Finally, we demonstrate that metal-rich starburst galaxies are highly plausible candidate sources, and we use them as an explicit example of our proposed multi-dimensional analysis.
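The Z/E scaling follows from the Larmor radius of a relativistic nucleus (a standard estimate, quoted here for context rather than from the paper):

```latex
% gyroradius of a nucleus of charge Ze and energy E in a magnetic field B
r_{\mathrm{L}} = \frac{E}{Z e B c}
  \approx 1.1\,\mathrm{kpc}\;
  \frac{E / (10^{18}\,\mathrm{eV})}{Z \,\bigl(B / \mu\mathrm{G}\bigr)}
```

At fixed E, higher-Z nuclei bend more, while at fixed Z the deflection shrinks as 1/E.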
Ishikawa, Sohta A; Inagaki, Yuji; Hashimoto, Tetsuo
2012-01-01
In phylogenetic analyses of nucleotide sequences, 'homogeneous' substitution models, which assume the stationarity of base composition across a tree, are widely used, although individual sequences may bear distinctive base frequencies. In the worst-case scenario, a homogeneous model-based analysis can yield an artifactual union of two distantly related sequences that achieved similar base frequencies in parallel. Such potential difficulty can be countered by two approaches, 'RY-coding' and 'non-homogeneous' models. The former approach converts the four bases into purine and pyrimidine classes to normalize base frequencies across a tree, while the heterogeneity in base frequency is explicitly incorporated in the latter approach. The two approaches have been applied to real-world sequence data; however, their basic properties have not been fully examined by pioneering simulation studies. Here, we assessed the performances of maximum-likelihood analyses incorporating RY-coding and a non-homogeneous model (RY-coding and non-homogeneous analyses) on simulated data with parallel convergence to similar base composition. Both RY-coding and non-homogeneous analyses showed superior performance compared with homogeneous model-based analyses. Curiously, the performance of the RY-coding analysis appeared to be significantly affected by the setting of the substitution process for sequence simulation relative to that of the non-homogeneous analysis. The performance of a non-homogeneous analysis was also validated by analyzing a real-world sequence data set with significant base heterogeneity.
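RY-coding itself is a one-line transformation; a minimal sketch (illustrative, not the authors' pipeline):

```python
# A,G -> R (purine) and C,T,U -> Y (pyrimidine); the two-state recoding
# equalizes base composition across sequences before tree inference
RY = str.maketrans("AGCTU", "RRYYY")

def ry_recode(seq: str) -> str:
    """Recode a nucleotide sequence into the RY alphabet."""
    return seq.upper().translate(RY)

print(ry_recode("GATTACA"))   # -> RRYYRYR
```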
NASA Astrophysics Data System (ADS)
Abbasi, R. U.; Abe, M.; Abu-Zayyad, T.; Allen, M.; Azuma, R.; Barcikowski, E.; Belz, J. W.; Bergman, D. R.; Blake, S. A.; Cady, R.; Cheon, B. G.; Chiba, J.; Chikawa, M.; di Matteo, A.; Fujii, T.; Fujita, K.; Fukushima, M.; Furlich, G.; Goto, T.; Hanlon, W.; Hayashi, M.; Hayashi, Y.; Hayashida, N.; Hibino, K.; Honda, K.; Ikeda, D.; Inoue, N.; Ishii, T.; Ishimori, R.; Ito, H.; Ivanov, D.; Jeong, H. M.; Jeong, S. M.; Jui, C. C. H.; Kadota, K.; Kakimoto, F.; Kalashev, O.; Kasahara, K.; Kawai, H.; Kawakami, S.; Kawana, S.; Kawata, K.; Kido, E.; Kim, H. B.; Kim, J. H.; Kim, J. H.; Kishigami, S.; Kitamura, S.; Kitamura, Y.; Kuzmin, V.; Kuznetsov, M.; Kwon, Y. J.; Lee, K. H.; Lubsandorzhiev, B.; Lundquist, J. P.; Machida, K.; Martens, K.; Matsuyama, T.; Matthews, J. N.; Mayta, R.; Minamino, M.; Mukai, K.; Myers, I.; Nagasawa, K.; Nagataki, S.; Nakamura, R.; Nakamura, T.; Nonaka, T.; Oda, H.; Ogio, S.; Ogura, J.; Ohnishi, M.; Ohoka, H.; Okuda, T.; Omura, Y.; Ono, M.; Onogi, R.; Oshima, A.; Ozawa, S.; Park, I. H.; Pshirkov, M. S.; Rodriguez, D. C.; Rubtsov, G.; Ryu, D.; Sagawa, H.; Sahara, R.; Saito, K.; Saito, Y.; Sakaki, N.; Sakurai, N.; Scott, L. M.; Seki, T.; Sekino, K.; Shah, P. D.; Shibata, F.; Shibata, T.; Shimodaira, H.; Shin, B. K.; Shin, H. S.; Smith, J. D.; Sokolsky, P.; Stokes, B. T.; Stratton, S. R.; Stroman, T. A.; Suzawa, T.; Takagi, Y.; Takahashi, Y.; Takamura, M.; Takeda, M.; Takeishi, R.; Taketa, A.; Takita, M.; Tameda, Y.; Tanaka, H.; Tanaka, K.; Tanaka, M.; Thomas, S. B.; Thomson, G. B.; Tinyakov, P.; Tkachev, I.; Tokuno, H.; Tomida, T.; Troitsky, S.; Tsunesada, Y.; Tsutsumi, K.; Uchihori, Y.; Udo, S.; Urban, F.; Wong, T.; Yamamoto, M.; Yamane, R.; Yamaoka, H.; Yamazaki, K.; Yang, J.; Yashiro, K.; Yoneda, Y.; Yoshida, S.; Yoshii, H.; Zhezher, Y.; Zundel, Z.; Telescope Array Collaboration
2018-05-01
The Telescope Array (TA) observatory utilizes fluorescence detectors and surface detectors (SDs) to observe air showers produced by ultra high energy cosmic rays in Earth's atmosphere. Cosmic-ray events observed in this way are termed hybrid data. The depth of air shower maximum is related to the mass of the primary particle that generates the shower. This paper reports on shower maxima data collected over 8.5 yr using the Black Rock Mesa and Long Ridge fluorescence detectors in conjunction with the array of SDs. We compare the means and standard deviations of the observed X_max distributions with Monte Carlo X_max distributions of unmixed protons, helium, nitrogen, and iron, all generated using the QGSJet II-04 hadronic model. We also perform an unbinned maximum likelihood test of the observed data, which is subjected to variable systematic shifting of the data X_max distributions to allow us to test the full distributions, and compare them to the Monte Carlo to see which elements are not compatible with the observed data. For all energy bins, QGSJet II-04 protons are found to be compatible with TA hybrid data at the 95% confidence level after some systematic X_max shifting of the data. Three other QGSJet II-04 elements are found to be compatible using the same test procedure in an energy range limited to the highest energies where data statistics are sparse.
Task Performance with List-Mode Data
NASA Astrophysics Data System (ADS)
Caucci, Luca
This dissertation investigates the application of list-mode data to detection, estimation, and image reconstruction problems, with an emphasis on emission tomography in medical imaging. We begin by introducing a theoretical framework for list-mode data and we use it to define two observers that operate on list-mode data. These observers are applied to the problem of detecting a signal (known in shape and location) buried in a random lumpy background. We then consider maximum-likelihood methods for the estimation of numerical parameters from list-mode data, and we characterize the performance of these estimators via the so-called Fisher information matrix. Reconstruction from PET list-mode data is then considered. In a process we called "double maximum-likelihood" reconstruction, we consider a simple PET imaging system and we use maximum-likelihood methods to first estimate a parameter vector for each pair of gamma-ray photons that is detected by the hardware. The collection of these parameter vectors forms a list, which is then fed to another maximum-likelihood algorithm for volumetric reconstruction over a grid of voxels. Efficient parallel implementation of the algorithms discussed above is then presented. In this work, we take advantage of two low-cost, mass-produced computing platforms that have recently appeared on the market, and we provide some details on implementing our algorithms on these devices. We conclude this dissertation work by elaborating on a possible application of list-mode data to X-ray digital mammography. We argue that today's CMOS detectors and computing platforms have become fast enough to make X-ray digital mammography list-mode data acquisition and processing feasible.
Statistical Properties of Maximum Likelihood Estimators of Power Law Spectra Information
NASA Technical Reports Server (NTRS)
Howell, L. W., Jr.
2003-01-01
A simple power law model consisting of a single spectral index, σ_1, is believed to be an adequate description of the galactic cosmic-ray (GCR) proton flux at energies below 10^13 eV, with a transition at the knee energy, E_k, to a steeper spectral index σ_2 > σ_1 above E_k. The maximum likelihood (ML) procedure was developed for estimating the single parameter σ_1 of a simple power law energy spectrum and generalized to estimate the three spectral parameters of the broken power law energy spectrum from simulated detector responses and real cosmic-ray data. The statistical properties of the ML estimator were investigated and shown to have the three desirable properties: (P1) consistency (asymptotically unbiased), (P2) efficiency (asymptotically attains the Cramer-Rao minimum variance bound), and (P3) asymptotically normally distributed, under a wide range of potential detector response functions. Attainment of these properties necessarily implies that the ML estimation procedure provides the best unbiased estimator possible. While simulation studies can easily determine if a given estimation procedure provides an unbiased estimate of the spectral information, and whether or not the estimator is approximately normally distributed, attainment of the Cramer-Rao bound (CRB) can only be ascertained by calculating the CRB for an assumed energy spectrum-detector response function combination, which can be quite formidable in practice. However, the effort in calculating the CRB is very worthwhile because it provides the necessary means to compare the efficiency of competing estimation techniques and, furthermore, provides a stopping rule in the search for the best unbiased estimator. Consequently, the CRBs for both the simple and broken power law energy spectra are derived herein and the conditions under which they are attained in practice are investigated.
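For the ideal-detector case, the simple power law CRB can be written down directly; a short derivation sketch (a standard result, not copied from the report):

```latex
% pdf of a pure power law above E_min:
f(E \mid \sigma_1) = (\sigma_1 - 1)\, E_{\min}^{\sigma_1 - 1} E^{-\sigma_1},
\qquad E \ge E_{\min}
% per-event Fisher information and the resulting Cramer-Rao bound for N events:
I(\sigma_1) = -\,\mathbb{E}\!\left[\partial_{\sigma_1}^{2} \ln f\right]
            = \frac{1}{(\sigma_1 - 1)^{2}}
\quad\Longrightarrow\quad
\operatorname{Var}(\hat{\sigma}_1) \;\ge\; \frac{(\sigma_1 - 1)^{2}}{N}
```

Detector response smears this ideal bound, which is why the report must derive the CRB for each spectrum-response combination separately.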
A gateway for phylogenetic analysis powered by grid computing featuring GARLI 2.0.
Bazinet, Adam L; Zwickl, Derrick J; Cummings, Michael P
2014-09-01
We introduce molecularevolution.org, a publicly available gateway for high-throughput, maximum-likelihood phylogenetic analysis powered by grid computing. The gateway features a GARLI 2.0 web service that enables a user to quickly and easily submit thousands of maximum likelihood tree searches or bootstrap searches that are executed in parallel on distributed computing resources. The GARLI web service allows one to easily specify partitioned substitution models using a graphical interface, and it performs sophisticated post-processing of phylogenetic results. Although the GARLI web service has been used by the research community for over three years, here we formally announce the availability of the service, describe its capabilities, highlight new features and recent improvements, and provide details about how the grid system efficiently delivers high-quality phylogenetic results.
On the quirks of maximum parsimony and likelihood on phylogenetic networks.
Bryant, Christopher; Fischer, Mareike; Linz, Simone; Semple, Charles
2017-03-21
Maximum parsimony is one of the most frequently discussed tree reconstruction methods in phylogenetic estimation. However, in recent years it has become increasingly apparent that phylogenetic trees are often not sufficient to describe evolution accurately. For instance, processes like hybridization or lateral gene transfer, which are commonplace in many groups of organisms and result in mosaic patterns of relationships, cannot be represented by a single phylogenetic tree. This is why phylogenetic networks, which can display such events, are attracting growing interest in phylogenetic research. It is therefore necessary to extend concepts like maximum parsimony from phylogenetic trees to networks. Several suggestions for possible extensions can be found in the recent literature, for instance the softwired and the hardwired parsimony concepts. In this paper, we analyze the so-called big parsimony problem under these two concepts, i.e. we investigate maximum parsimonious networks and analyze their properties. In particular, we show that finding a softwired maximum parsimony network is possible in polynomial time. We also show that the set of maximum parsimony networks for the hardwired definition always contains at least one phylogenetic tree. Lastly, we investigate some parallels of parsimony to different likelihood concepts on phylogenetic networks.
Lateral and Time Distributions of Extensive Air Showers for CHICOS
NASA Astrophysics Data System (ADS)
Jillings, C. J.; Wells, D.; Chan, K. C.; Hill, J.; Falkowski, B.; Sepikas, J.
2005-04-01
We report results of a series of detailed Monte-Carlo calculations to determine the density and arrival-time distribution of charged particles in extensive air showers. We have parameterized both distributions as a function of distance from the shower axis, energy of the primary cosmic-ray proton, and incident zenith angle. Muons and electrons are parameterized separately. These parameterizations can be easily used in maximum-likelihood reconstruction of air showers. Calculations were performed for primary energies between 10^18 and 10^21 eV and zenith angles out to approximately 50°. The calculations are appropriate for the California High School Cosmic Ray Observatory: a 400 km^2 array of scintillation detectors in Los Angeles County. The average elevation of the array is approximately 250 meters above sea level. Currently 64 of 90 sites are operational. The array will be completed this year. We thank the NSF, the CURE program at the Jet Propulsion Laboratory, the SURF program at Caltech, and the Chinese University of Hong Kong.
Blazar Jet Physics in the Age of Fermi
2010-11-23
Calculations of emission from colliding shells ejected from the central supermassive black hole are made, and the likelihood that blazars accelerate ultra-high-energy cosmic rays is assessed. Keywords: galaxies: jets; gamma rays: observations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krakau, S.; Schlickeiser, R., E-mail: steffen.krakau@rub.de, E-mail: rsch@tp4.rub.de
2016-02-20
The linear instability of an ultrarelativistic hadron beam in the unmagnetized intergalactic medium (IGM) is investigated with respect to the excitation of parallel electrostatic and electromagnetic fluctuations. This analysis is important for the propagation of extragalactic ultrarelativistic cosmic rays from their distant sources to Earth. As opposed to the previous paper, we calculate the minimum instability growth time for Lorentz-distributed cosmic rays which traverse the hot IGM. The growth times are orders of magnitude higher than the cosmic-ray propagation time in the IGM. Since the backreaction of the generated plasma fluctuations (plateauing) lasts longer than the propagation time, the cosmic-ray hadron beam can propagate to the Earth without losing a significant amount of energy to electrostatic turbulence.
Improved CDMA Performance Using Parallel Interference Cancellation
NASA Technical Reports Server (NTRS)
Simon, Marvin; Divsalar, Dariush
1995-01-01
This report considers a general parallel interference cancellation scheme that significantly reduces the degradation effect of user interference but with lower implementation complexity than the maximum-likelihood technique. The scheme exploits the fact that parallel processing simultaneously removes from each user the interference produced by the remaining users accessing the channel, in an amount proportional to their reliability. The parallel processing can be done in multiple stages. The proposed scheme uses tentative decision devices with different optimum thresholds at the multiple stages to produce the most reliably received data for generation and cancellation of user interference. The 1-stage interference cancellation is analyzed for three types of tentative decision devices, namely, hard, null zone, and soft decision, and two types of user power distribution, namely, equal and unequal powers. Simulation results are given for a multitude of different situations, in particular, those cases for which the analysis is too complex.
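A minimal sketch of the one-stage cancellation with hard tentative decisions for a synchronous CDMA channel (names are illustrative; the report's null-zone and soft-decision variants replace the sign function with other tentative decision devices):

```python
import numpy as np

def pic_one_stage(y, R, a):
    """One-stage parallel interference cancellation, hard decisions.

    y : (K,) matched-filter outputs for the K users
    R : (K, K) signature cross-correlation matrix (unit diagonal)
    a : (K,) received amplitudes
    """
    b0 = np.sign(y)                          # tentative hard decisions
    mai = (R - np.eye(len(y))) @ (a * b0)    # estimated multiple-access interference
    return np.sign(y - mai)                  # final decisions after cancellation
```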
NASA Astrophysics Data System (ADS)
Raghunathan, Srinivasan; Patil, Sanjaykumar; Baxter, Eric J.; Bianchini, Federico; Bleem, Lindsey E.; Crawford, Thomas M.; Holder, Gilbert P.; Manzotti, Alessandro; Reichardt, Christian L.
2017-08-01
We develop a Maximum Likelihood estimator (MLE) to measure the masses of galaxy clusters through the impact of gravitational lensing on the temperature and polarization anisotropies of the cosmic microwave background (CMB). We show that, at low noise levels in temperature, this optimal estimator outperforms the standard quadratic estimator by a factor of two. For polarization, we show that the Stokes Q/U maps can be used instead of the traditional E- and B-mode maps without losing information. We test and quantify the bias in the recovered lensing mass for a comprehensive list of potential systematic errors. Using realistic simulations, we examine the cluster mass uncertainties from CMB-cluster lensing as a function of an experiment's beam size and noise level. We predict the cluster mass uncertainties will be 3 - 6% for SPT-3G, AdvACT, and Simons Array experiments with 10,000 clusters and less than 1% for the CMB-S4 experiment with a sample containing 100,000 clusters. The mass constraints from CMB polarization are very sensitive to the experimental beam size and map noise level: for a factor of three reduction in either the beam size or noise level, the lensing signal-to-noise improves by roughly a factor of two.
NASA Astrophysics Data System (ADS)
Pan, Zhen; Anderes, Ethan; Knox, Lloyd
2018-05-01
One of the major targets for next-generation cosmic microwave background (CMB) experiments is the detection of the primordial B-mode signal. Planning is under way for Stage-IV experiments that are projected to have instrumental noise small enough to make lensing and foregrounds the dominant source of uncertainty for estimating the tensor-to-scalar ratio r from polarization maps. This makes delensing a crucial part of future CMB polarization science. In this paper we present a likelihood method for estimating the tensor-to-scalar ratio r from CMB polarization observations, which combines the benefits of a full-scale likelihood approach with the tractability of the quadratic delensing technique. This method is a pixel-space, all-order likelihood analysis of the quadratic delensed B modes, and it essentially builds upon the quadratic delenser by taking into account all-order lensing and pixel-space anomalies. Its tractability relies on a crucial factorization of the pixel-space covariance matrix of the polarization observations, which allows one to compute the full Gaussian approximate likelihood profile, as a function of r, at the same computational cost as a single likelihood evaluation.
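The expensive object in such an analysis is the Gaussian likelihood profile itself; a generic sketch of evaluating ln L(r) on a grid via a dense Cholesky factorization (the paper's covariance factorization is precisely what avoids this brute-force cost):

```python
import numpy as np

def gaussian_loglike_profile(d, cov_of_r, r_grid):
    """ln L(r) = -0.5 (d^T C(r)^{-1} d + ln det C(r)) + const.

    d        : (n_pix,) data vector, e.g. delensed B-mode pixels
    cov_of_r : callable returning the (n_pix, n_pix) covariance C(r)
    """
    out = []
    for r in r_grid:
        chol = np.linalg.cholesky(cov_of_r(r))
        x = np.linalg.solve(chol, d)               # chi^2 = ||L^{-1} d||^2
        logdet = 2.0 * np.log(np.diag(chol)).sum()
        out.append(-0.5 * (x @ x + logdet))
    return np.array(out)
```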
New estimates of the CMB angular power spectra from the WMAP 5 year low-resolution data
NASA Astrophysics Data System (ADS)
Gruppuso, A.; de Rosa, A.; Cabella, P.; Paci, F.; Finelli, F.; Natoli, P.; de Gasperis, G.; Mandolesi, N.
2009-11-01
A quadratic maximum likelihood (QML) estimator is applied to the Wilkinson Microwave Anisotropy Probe (WMAP) 5 year low-resolution maps to compute the cosmic microwave background angular power spectra (APS) at large scales for both temperature and polarization. Estimates and error bars for the six APS are provided up to l = 32 and compared, when possible, to those obtained by the WMAP team, without finding any inconsistency. The conditional likelihood slices are also computed for the C_l of all six power spectra from l = 2 to 10 through a pixel-based likelihood code. Both codes treat the covariance for (T, Q, U) in a single matrix without employing any approximation. The inputs of both codes (foreground-reduced maps, related covariances and masks) are provided by the WMAP team. The peaks of the likelihood slices are always consistent with the QML estimates within the error bars; however, an excellent agreement occurs when the QML estimates are used as a fiducial power spectrum instead of the best-fitting theoretical power spectrum. By the full computation of the conditional likelihood on the estimated spectra, the value of the temperature quadrupole C_{l=2}^{TT} is found to be less than 2σ away from the WMAP 5 year Λ cold dark matter best-fitting value. The BB spectrum is found to be well consistent with zero, and upper limits on the B modes are provided. The parity-odd signals TB and EB are found to be consistent with zero.
Parallelization strategies for continuum-generalized method of moments on the multi-thread systems
NASA Astrophysics Data System (ADS)
Bustamam, A.; Handhika, T.; Ernastuti, Kerami, D.
2017-07-01
The Continuum-Generalized Method of Moments (C-GMM) addresses the shortfall of GMM, which is not as efficient as the maximum likelihood estimator, by using a continuum set of moment conditions in a GMM framework. However, this computation can take a very long time because it requires optimizing a regularization parameter. These calculations are usually processed sequentially, even though modern computers offer hierarchical memory systems and hyperthreading technology that allow for parallel computing. This paper aims to speed up the calculation of C-GMM by designing a parallel algorithm for C-GMM on multi-thread systems. First, parallel regions are detected in the original C-GMM algorithm. Two parallel regions contribute significantly to the reduction of computational time: the outer loop and the inner loop. This parallel algorithm is implemented with a standard shared-memory application programming interface, Open Multi-Processing (OpenMP). The experiment shows that outer-loop parallelization is the best strategy for any number of observations.
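A hedged sketch of the outer-loop strategy, transposed from OpenMP to Python's multiprocessing purely for illustration; cgmm_objective is a hypothetical stand-in for the expensive inner computation at one regularization parameter:

```python
from multiprocessing import Pool

import numpy as np

def cgmm_objective(alpha):
    # Hypothetical placeholder: build the regularized weighting operator
    # and evaluate the C-GMM criterion at regularization parameter alpha.
    criterion = (alpha - 1e-3) ** 2  # stand-in for the real computation
    return alpha, criterion

if __name__ == "__main__":
    alphas = np.logspace(-6, 0, 64)       # the outer loop: alpha grid
    with Pool() as pool:                  # evaluate grid points in parallel
        results = pool.map(cgmm_objective, alphas)
    best_alpha = min(results, key=lambda t: t[1])[0]
```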
Maximum-Likelihood Estimation With a Contracting-Grid Search Algorithm
Hesterman, Jacob Y.; Caucci, Luca; Kupinski, Matthew A.; Barrett, Harrison H.; Furenlid, Lars R.
2010-01-01
A fast search algorithm capable of operating in multi-dimensional spaces is introduced. As a sample application, we demonstrate its utility in the 2D and 3D maximum-likelihood position-estimation problem that arises in the processing of PMT signals to derive interaction locations in compact gamma cameras. We demonstrate that the algorithm can be parallelized in pipelines, and thereby efficiently implemented in specialized hardware, such as field-programmable gate arrays (FPGAs). A 2D implementation of the algorithm is achieved on Cell/BE processors, resulting in processing speeds above one million events per second, a 20× increase over a conventional desktop machine. Graphics processing units (GPUs) are used for a 3D application of the algorithm, resulting in processing speeds of nearly 250,000 events per second, a 250× increase over a conventional desktop machine. These implementations indicate the viability of the algorithm for use in real-time imaging applications. PMID:20824155
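The search itself is easy to sketch: evaluate the likelihood on a coarse grid, recenter on the best node, contract the grid, and repeat; a 2D version under the stated scheme (grid size, contraction factor, and iteration count are illustrative choices):

```python
import numpy as np

def contracting_grid_search(loglike, center, span, n=8, n_contract=6, shrink=0.5):
    """Maximize loglike(x, y) with a contracting-grid search."""
    cx, cy = center
    sx, sy = span
    for _ in range(n_contract):
        xs = cx + np.linspace(-sx / 2, sx / 2, n)
        ys = cy + np.linspace(-sy / 2, sy / 2, n)
        # all n*n evaluations are independent, hence pipeline/GPU friendly
        vals = np.array([[loglike(x, y) for y in ys] for x in xs])
        i, j = np.unravel_index(np.argmax(vals), vals.shape)
        cx, cy = xs[i], ys[j]                  # recenter on the best node
        sx, sy = sx * shrink, sy * shrink      # contract the search region
    return cx, cy
```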
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Hao; Mey, Antonia S. J. S.; Noé, Frank
2014-12-07
We propose a discrete transition-based reweighting analysis method (dTRAM) for analyzing configuration-space-discretized simulation trajectories produced at different thermodynamic states (temperatures, Hamiltonians, etc.). dTRAM provides maximum-likelihood estimates of stationary quantities (probabilities, free energies, expectation values) at any thermodynamic state. In contrast to the weighted histogram analysis method (WHAM), dTRAM does not require data to be sampled from global equilibrium, and can thus produce superior estimates for enhanced sampling data such as parallel/simulated tempering, replica exchange, umbrella sampling, or metadynamics. In addition, dTRAM provides optimal estimates of Markov state models (MSMs) from the discretized state-space trajectories at all thermodynamic states. Under suitable conditions, these MSMs can be used to calculate kinetic quantities (e.g., rates, timescales). In the limit of a single thermodynamic state, dTRAM estimates a maximum likelihood reversible MSM, while in the limit of uncorrelated sampling data, dTRAM is identical to WHAM. dTRAM is thus a generalization of both estimators.
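In the uncorrelated-sampling limit dTRAM reduces to WHAM, whose self-consistency iteration is compact; a minimal sketch for histogrammed data (this is the WHAM limit, not the dTRAM estimator itself):

```python
import numpy as np

def wham(counts, bias, n_iter=1000):
    """WHAM self-consistency iteration.

    counts : (K, M) histogram counts, K thermodynamic states x M bins
    bias   : (K, M) reduced bias energies u_k(x_m)
    Returns bin probabilities p (M,) and state free energies f (K,).
    """
    N = counts.sum(axis=1)                    # samples per state
    f = np.zeros(counts.shape[0])
    for _ in range(n_iter):
        denom = (N[:, None] * np.exp(f[:, None] - bias)).sum(axis=0)
        p = counts.sum(axis=0) / denom        # unbiased bin probabilities
        f = -np.log((p[None, :] * np.exp(-bias)).sum(axis=1))
    return p / p.sum(), f - f[0]
```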
Comparison of the MPP with other supercomputers for LANDSAT data processing
NASA Technical Reports Server (NTRS)
Ozga, Martin
1987-01-01
The Massively Parallel Processor (MPP) is compared to the CRAY X-MP and the CYBER-205 for LANDSAT data processing. The maximum likelihood classification algorithm is the basis for comparison, since this algorithm is simple to implement and vectorizes very well. The algorithm was implemented on all three machines and tested by classifying the same full scene of LANDSAT multispectral scanner data. Timings are compared, as well as features of the machines and available software.
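Per-pixel Gaussian maximum-likelihood classification is an argmax over class log-likelihoods, which is why it vectorizes so well; a minimal sketch assuming known class means and covariances (training is omitted):

```python
import numpy as np

def ml_classify(pixels, means, covs):
    """Assign each multispectral pixel to the Gaussian class that
    maximizes its log-likelihood.

    pixels : (n, b) pixel vectors with b spectral bands
    means  : (c, b) class means; covs : (c, b, b) class covariances
    """
    n, c = pixels.shape[0], means.shape[0]
    scores = np.empty((n, c))
    for k in range(c):
        d = pixels - means[k]
        inv = np.linalg.inv(covs[k])
        _, logdet = np.linalg.slogdet(covs[k])
        scores[:, k] = -0.5 * (np.einsum('ij,jk,ik->i', d, inv, d) + logdet)
    return np.argmax(scores, axis=1)
```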
NASA Technical Reports Server (NTRS)
Switzer, Eric Ryan; Watts, Duncan J.
2016-01-01
The B-mode polarization of the cosmic microwave background provides a unique window into tensor perturbations from inflationary gravitational waves. Survey effects complicate the estimation and description of the power spectrum on the largest angular scales. The pixel-space likelihood yields parameter distributions without the power spectrum as an intermediate step, but it does not have the large suite of tests available to power spectral methods. Searches for primordial B-modes must rigorously reject and rule out contamination. Many forms of contamination vary or are uncorrelated across epochs, frequencies, surveys, or other data treatment subsets. The cross power and the power spectrum of the difference of subset maps provide approaches to reject and isolate excess variance. We develop an analogous joint pixel-space likelihood. Contamination not modeled in the likelihood produces parameter-dependent bias and complicates the interpretation of the difference map. We describe a null test that consistently weights the difference map. Excess variance should either be explicitly modeled in the covariance or be removed through reprocessing the data.
A New Numerical Scheme for Cosmic-Ray Transport
NASA Astrophysics Data System (ADS)
Jiang, Yan-Fei; Oh, S. Peng
2018-02-01
Numerical solutions of the cosmic-ray (CR) magnetohydrodynamic equations are dogged by a powerful numerical instability, which arises from the constraint that CRs can only stream down their gradient. The standard cure is to regularize by adding artificial diffusion. Besides introducing ad hoc smoothing, this has a significant negative impact on either computational cost or complexity and parallel scalings. We describe a new numerical algorithm for CR transport, with close parallels to two-moment methods for radiative transfer under the reduced speed of light approximation. It stably and robustly handles CR streaming without any artificial diffusion. It allows for both isotropic and field-aligned CR streaming and diffusion, with arbitrary streaming and diffusion coefficients. CR transport is handled explicitly, while source terms are handled implicitly. The overall time step scales linearly with resolution (even when computing CR diffusion) and has a perfect parallel scaling. It is given by the standard Courant condition with respect to a constant maximum velocity over the entire simulation domain. The computational cost is comparable to that of solving the ideal MHD equation. We demonstrate the accuracy and stability of this new scheme with a wide variety of tests, including anisotropic streaming and diffusion tests, CR-modified shocks, CR-driven blast waves, and CR transport in multiphase media. The new algorithm opens doors to much more ambitious and hitherto intractable calculations of CR physics in galaxies and galaxy clusters. It can also be applied to other physical processes with similar mathematical structure, such as saturated, anisotropic heat conduction.
NASA Astrophysics Data System (ADS)
Yu, Zhongzhi; Liu, Shaocong; Sun, Shiyi; Kuang, Cuifang; Liu, Xu
2018-06-01
Parallel detection, which can use the additional information of a pinhole-plane image taken at every excitation scan position, could be an efficient method to enhance the resolution of a confocal laser scanning microscope. In this paper, we discuss images obtained under different conditions and using different image restoration methods with parallel detection to quantitatively compare the imaging quality. The conditions include different noise levels and different detector array settings. The image restoration methods include linear deconvolution and pixel reassignment with Richardson-Lucy deconvolution and with maximum-likelihood estimation deconvolution. The results show that linear deconvolution shares properties such as high efficiency and the best performance under all different conditions, and is therefore expected to be of use for future biomedical routine research.
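Richardson-Lucy deconvolution is itself a maximum-likelihood iteration for Poisson noise; a minimal 2D sketch, assuming a known, shift-invariant PSF:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=30):
    """Richardson-Lucy deconvolution (Poisson maximum likelihood)."""
    est = np.full(image.shape, float(image.mean()))
    psf_mirror = psf[::-1, ::-1]               # adjoint of the blur operator
    for _ in range(n_iter):
        blur = fftconvolve(est, psf, mode='same')
        ratio = image / np.maximum(blur, 1e-12)
        est *= fftconvolve(ratio, psf_mirror, mode='same')
    return est
```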
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aartsen, M.G.; et al.
2015-11-06
We have conducted three searches for correlations between ultra-high energy cosmic rays detected by the Telescope Array and the Pierre Auger Observatory, and high-energy neutrino candidate events from IceCube. Two cross-correlation analyses with UHECRs are done: one with 39 cascades from the IceCube 'high-energy starting events' sample and the other with 16 high-energy 'track events'. The angular separation between the arrival directions of neutrinos and UHECRs is scanned over. The same events are also used in a separate search using a maximum likelihood approach, after the neutrino arrival directions are stacked. To estimate the significance we assume UHECR magnetic deflections to be inversely proportional to their energy, with values 3°, 6° and 9° at 100 EeV to allow for the uncertainties on the magnetic field strength and UHECR charge. A similar analysis is performed on stacked UHECR arrival directions and the IceCube sample of through-going muon track events which were optimized for neutrino point-source searches.
Deciphering the Local Interstellar Spectra of Primary Cosmic-Ray Species with HELMOD
NASA Astrophysics Data System (ADS)
Boschini, M. J.; Della Torre, S.; Gervasi, M.; Grandi, D.; Jóhannesson, G.; La Vacca, G.; Masi, N.; Moskalenko, I. V.; Pensotti, S.; Porter, T. A.; Quadrani, L.; Rancoita, P. G.; Rozza, D.; Tacconi, M.
2018-05-01
Local interstellar spectra (LIS) of primary cosmic ray (CR) nuclei, such as helium, oxygen, and mostly primary carbon are derived for the rigidity range from 10 MV to ∼200 TV using the most recent experimental results combined with the state-of-the-art models for CR propagation in the Galaxy and in the heliosphere. Two propagation packages, GALPROP and HELMOD, are combined into a single framework that is used to reproduce direct measurements of CR species at different modulation levels, and at both polarities of the solar magnetic field. The developed iterative maximum-likelihood method uses GALPROP-predicted LIS as input to HELMOD, which provides the modulated spectra for specific time periods of the selected experiments for model–data comparison. The interstellar and heliospheric propagation parameters derived in this study are consistent with our prior analyses using the same methodology for propagation of CR protons, helium, antiprotons, and electrons. The resulting LIS accommodate a variety of measurements made in the local interstellar space (Voyager 1) and deep inside the heliosphere at low (ACE/CRIS, HEAO-3) and high energies (PAMELA, AMS-02).
Diffusive shock acceleration at non-relativistic highly oblique shocks
NASA Astrophysics Data System (ADS)
Meli, Athina; Biermann, P. L.
2004-10-01
Our aim here is to evaluate the maximum energy and the acceleration rate that cosmic rays (CRs) acquire in non-relativistic diffusive shock acceleration, as it could apply during their lifetime in various astrophysical sites. We examine numerically (using Monte Carlo simulations) the effect of the diffusion coefficients on the energy gain and the acceleration rate, by testing the role of the obliquity of the magnetic field at the shock normal and the significance of both the perpendicular cross-field and parallel diffusion coefficients for the acceleration rate. We find (confirming previous analytical work; Jokipii 1987) that in highly oblique shocks, the smaller the perpendicular diffusion coefficient becomes compared to the parallel diffusion coefficient, the greater the energy gain of the CRs. An explanation of the cosmic-ray spectrum at high energies, between 10^15 and 10^18 eV, is proposed, as we estimate the upper limit of the energy that CRs could gain in plausible astrophysical regimes, interpreted in a scenario in which CRs are injected by three different kinds of sources: (i) supernovae (SN) which explode into the interstellar medium (ISM), (ii) Red Supergiants (RSG), and (iii) Wolf-Rayet (WR) stars, where the two latter explode into their pre-SN winds (Biermann 2001; Sina 2001).
Wind Observations of Anomalous Cosmic Rays from Solar Minimum to Maximum
NASA Technical Reports Server (NTRS)
Reames, D. V.; McDonald, F. B.
2003-01-01
We report the first observation near Earth of the time behavior of anomalous cosmic-ray N, O, and Ne ions through the period surrounding the maximum of the solar cycle. These observations were made by the Wind spacecraft during the 1995-2002 period spanning times from solar minimum through solar maximum. Comparison of anomalous and galactic cosmic rays provides a powerful tool for the study of the physics of solar modulation throughout the solar cycle.
Geological mapping in northwestern Saudi Arabia using LANDSAT multispectral techniques
NASA Technical Reports Server (NTRS)
Blodget, H. W.; Brown, G. F.; Moik, J. G.
1975-01-01
Various computer enhancement and data extraction systems using LANDSAT data were assessed and used to complement a continuing geologic mapping program. Interactive digital classification techniques using both the parallelepiped and maximum-likelihood statistical approaches achieve very limited success in areas of highly dissected terrain. Computer enhanced imagery developed by color compositing stretched MSS ratio data was constructed for a test site in northwestern Saudi Arabia. Initial results indicate that several igneous and sedimentary rock types can be discriminated.
Optimal estimates of free energies from multistate nonequilibrium work data.
Maragakis, Paul; Spichty, Martin; Karplus, Martin
2006-03-17
We derive the optimal estimates of the free energies of an arbitrary number of thermodynamic states from nonequilibrium work measurements; the work data are collected from forward and reverse switching processes and obey a fluctuation theorem. The maximum likelihood formulation properly reweights all pathways contributing to a free energy difference and is directly applicable to simulations and experiments. We demonstrate dramatic gains in efficiency by combining the analysis with parallel tempering simulations for alchemical mutations of model amino acids.
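For two states, this maximum likelihood estimator is the Bennett acceptance ratio; a sketch solving its self-consistency equation by root bracketing, assuming work values in units of kT with forward (A to B) and reverse (B to A) sign conventions:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import expit  # expit(z) = 1 / (1 + exp(-z))

def bar_delta_f(w_f, w_r):
    """Bennett acceptance ratio estimate of Delta f (in kT) from
    forward works w_f (A->B) and reverse works w_r (B->A)."""
    w_f, w_r = np.asarray(w_f, float), np.asarray(w_r, float)
    m = np.log(len(w_f) / len(w_r))

    def g(df):  # monotone in df; its root is the ML free-energy difference
        return expit(df - m - w_f).sum() - expit(m - w_r - df).sum()

    lo = min(w_f.min(), -w_r.max()) - 50.0
    hi = max(w_f.max(), -w_r.min()) + 50.0
    return brentq(g, lo, hi)
```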
Robust statistical reconstruction for charged particle tomography
Schultz, Larry Joe; Klimenko, Alexei Vasilievich; Fraser, Andrew Mcleod; Morris, Christopher; Orum, John Christopher; Borozdin, Konstantin N; Sossong, Michael James; Hengartner, Nicolas W
2013-10-08
Systems and methods for charged particle detection including statistical reconstruction of object volume scattering density profiles from charged particle tomographic data to determine the probability distribution of charged particle scattering using a statistical multiple scattering model, and to determine a substantially maximum likelihood estimate of object volume scattering density using an expectation maximization (ML/EM) algorithm to reconstruct the object volume scattering density. The presence and/or type of object occupying the volume of interest can be identified from the reconstructed volume scattering density profile. The charged particle tomographic data can be cosmic ray muon tomographic data from a muon tracker for scanning packages, containers, vehicles or cargo. The method can be implemented using a computer program which is executable on a computer.
A maximum likelihood method for high resolution proton radiography/proton CT
NASA Astrophysics Data System (ADS)
Collins-Fekete, Charles-Antoine; Brousmiche, Sébastien; Portillo, Stephen K. N.; Beaulieu, Luc; Seco, Joao
2016-12-01
Multiple Coulomb scattering (MCS) is the largest contributor to blurring in proton imaging. In this work, we developed a maximum likelihood least squares estimator that improves proton radiography's spatial resolution. The water equivalent thicknesses (WETs) through projections defined from the source to the detector pixels were estimated such that they maximize the likelihood of the energy loss of every proton crossing the volume. The length spent in each projection was calculated through the optimized cubic spline path estimate. The proton radiographies were produced using Geant4 simulations. Three phantoms were studied here: a slanted cube in a tank of water to measure 2D spatial resolution, a voxelized head phantom for clinical performance evaluation, as well as a parametric Catphan phantom (CTP528) for 3D spatial resolution. Two proton beam configurations were used: a parallel and a conical beam. Proton beams of 200 and 330 MeV were simulated to acquire the radiography. Spatial resolution is increased from 2.44 lp cm^-1 to 4.53 lp cm^-1 in the 200 MeV beam and from 3.49 lp cm^-1 to 5.76 lp cm^-1 in the 330 MeV beam. Beam configurations do not affect the reconstructed spatial resolution, as investigated between a radiography acquired with the parallel (3.49 lp cm^-1 to 5.76 lp cm^-1) or conical beam (from 3.49 lp cm^-1 to 5.56 lp cm^-1). The improved images were then used as input in a photon tomography algorithm. The proton CT reconstruction of the Catphan phantom shows high spatial resolution (from 2.79 to 5.55 lp cm^-1 for the parallel beam and from 3.03 to 5.15 lp cm^-1 for the conical beam) and the reconstruction of the head phantom, although qualitative, shows high contrast in the gradient region. The proposed formulation of the optimization demonstrates serious potential to increase the spatial resolution (up by 65%) in proton radiography and greatly accelerate proton computed tomography reconstruction.
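Under Gaussian energy-loss straggling, an estimator of this type reduces to a weighted least-squares problem for the WETs of the projection columns; a generic sketch in which the path-length matrix from the spline estimate is assumed given:

```python
import numpy as np

def wet_estimate(A, b, var):
    """Weighted least-squares / Gaussian ML estimate of per-column WET.

    A   : (n_protons, n_columns) path length of each proton in each
          projection column, e.g. from a cubic-spline path estimate
    b   : (n_protons,) measured water-equivalent energy loss per proton
    var : (n_protons,) straggling variance of each measurement
    """
    w = 1.0 / var
    return np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * b))
```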
SU-C-207A-01: A Novel Maximum Likelihood Method for High-Resolution Proton Radiography/proton CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collins-Fekete, C; Centre Hospitalier University de Quebec, Quebec, QC; Mass General Hospital
2016-06-15
Purpose: Multiple Coulomb scattering is the largest contributor to blurring in proton imaging. Here we tested a maximum likelihood least squares estimator (MLLSE) to improve the spatial resolution of proton radiography (pRad) and proton computed tomography (pCT). Methods: The object is discretized into voxels and the average relative stopping power through voxel columns defined from the source to the detector pixels is optimized such that it maximizes the likelihood of the proton energy loss. The length spent by individual protons in each column is calculated through an optimized cubic spline estimate. pRad images were first produced using Geant4 simulations. An anthropomorphic head phantom and the Catphan line-pair module for 3-D spatial resolution were studied and resulting images were analyzed. Both parallel and conical beams have been investigated for simulated pRad acquisition. Then, experimental data of a pediatric head phantom (CIRS) were acquired using a recently completed experimental pCT scanner. Specific filters were applied on proton angle and energy loss data to remove proton histories that underwent nuclear interactions. The MTF10% (lp/mm) was used to evaluate and compare spatial resolution. Results: Numerical simulations showed improvement in the pRad spatial resolution for the parallel (2.75 to 6.71 lp/cm) and conical beam (3.08 to 5.83 lp/cm) reconstructed with the MLLSE compared to averaging detector pixel signals. For full tomographic reconstruction, the improved pRad were used as input into a simultaneous algebraic reconstruction algorithm. The Catphan pCT reconstruction based on the MLLSE-enhanced projections showed spatial resolution improvement for the parallel (2.83 to 5.86 lp/cm) and conical beam (3.03 to 5.15 lp/cm). The anthropomorphic head pCT displayed important contrast gains in high-gradient regions. Experimental results also demonstrated significant improvement in spatial resolution of the pediatric head radiography. Conclusion: The proposed MLLSE shows promising potential to increase the spatial resolution (up to 244%) in proton imaging.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Le Roux, J. A.
Earlier work based on nonlinear guiding center (NLGC) theory suggested that perpendicular cosmic-ray transport is diffusive when cosmic rays encounter random three-dimensional magnetohydrodynamic turbulence dominated by uniform two-dimensional (2D) turbulence with a minor uniform slab turbulence component. In this approach large-scale perpendicular cosmic-ray transport is due to cosmic rays microscopically diffusing along the meandering magnetic field dominated by 2D turbulence because of gyroresonant interactions with slab turbulence. However, turbulence in the solar wind is intermittent and it has been suggested that intermittent turbulence might be responsible for the observation of 'dropout' events in solar energetic particle fluxes on small scales. In a previous paper le Roux et al. suggested, using NLGC theory as a basis, that if gyro-scale slab turbulence is intermittent, large-scale perpendicular cosmic-ray transport in weak uniform 2D turbulence will be superdiffusive or subdiffusive depending on the statistical characteristics of the intermittent slab turbulence. In this paper we expand and refine our previous work further by investigating how both parallel and perpendicular transport are affected by intermittent slab turbulence for weak as well as strong uniform 2D turbulence. The main new finding is that both parallel and perpendicular transport are the net effect of an interplay between diffusive and nondiffusive (superdiffusive or subdiffusive) transport effects as a consequence of this intermittency.
Limited angle tomographic breast imaging: A comparison of parallel beam and pinhole collimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wessell, D.E.; Kadrmas, D.J.; Frey, E.C.
1996-12-31
Results from clinical trials have suggested no improvement in lesion detection with parallel hole SPECT scintimammography (SM) with Tc-99m over parallel hole planar SM. In this initial investigation, we have elucidated some of the unique requirements of SPECT SM. With these requirements in mind, we have begun to develop practical data acquisition and reconstruction strategies that can reduce image artifacts and improve image quality. In this paper we investigate limited angle orbits for both parallel hole and pinhole SPECT SM. Singular Value Decomposition (SVD) is used to analyze the artifacts associated with the limited angle orbits. Maximum likelihood expectation maximization (MLEM) reconstructions are then used to examine the effects of attenuation compensation on the quality of the reconstructed image. All simulations are performed using the 3D-MCAT breast phantom. The results of these simulation studies demonstrate that limited angle SPECT SM is feasible, that attenuation correction is needed for accurate reconstructions, and that pinhole SPECT SM may have an advantage over parallel hole SPECT SM in terms of improved image quality and reduced image artifacts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, H.-Q.; Schlickeiser, R., E-mail: hqhe@mail.iggcas.ac.cn, E-mail: rsch@tp4.rub.de
The cosmic ray mean free path in a large-scale nonuniform guide magnetic field with superposed magnetostatic turbulence is calculated to clarify some conflicting results in the literature. A new, exact integro-differential equation for the cosmic-ray anisotropy is derived from the Fokker-Planck transport equation. A perturbation analysis of this integro-differential equation leads to an analytical expression for the cosmic ray anisotropy and the focused transport equation for the isotropic part of the cosmic ray distribution function. The derived parallel spatial diffusion coefficient and the associated cosmic ray mean free path include the effect of adiabatic focusing and reduce to the standard forms in the limit of a uniform guide magnetic field. For the illustrative case of isotropic pitch angle scattering, the derived mean free path agrees with the earlier expressions of Beeck and Wibberenz, Bieber and Burger, Kota, and Litvinenko, but disagrees with the result of Shalchi. The disagreement with the expression of Shalchi is particularly strong in the limit of strong adiabatic focusing.
COSMIC MICROWAVE BACKGROUND LIKELIHOOD APPROXIMATION FOR BANDED PROBABILITY DISTRIBUTIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gjerløw, E.; Mikkelsen, K.; Eriksen, H. K.
We investigate sets of random variables that can be arranged sequentially such that a given variable only depends conditionally on its immediate predecessor. For such sets, we show that the full joint probability distribution may be expressed exclusively in terms of uni- and bivariate marginals. Under the assumption that the cosmic microwave background (CMB) power spectrum likelihood only exhibits correlations within a banded multipole range, Δl_C, we apply this expression to two outstanding problems in CMB likelihood analysis. First, we derive a statistically well-defined hybrid likelihood estimator, merging two independent (e.g., low- and high-l) likelihoods into a single expression that properly accounts for correlations between the two. Applying this expression to the Wilkinson Microwave Anisotropy Probe (WMAP) likelihood, we verify that the effect of correlations on cosmological parameters in the transition region is negligible for WMAP; the largest relative shift seen for any parameter is 0.06σ. However, because this may not hold for other experimental setups (e.g., for different instrumental noise properties or analysis masks), but must rather be verified on a case-by-case basis, we recommend our new hybridization scheme for future experiments for statistical self-consistency reasons. Second, we use the same expression to improve the convergence rate of the Blackwell-Rao likelihood estimator, reducing the required number of Monte Carlo samples by several orders of magnitude, and thereby extend it to high-l applications.
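The identity at the core of both applications is short: if each variable in the sequence depends conditionally only on its immediate predecessor, the full joint distribution collapses to uni- and bivariate marginals,

```latex
p(x_1,\dots,x_n)
  = p(x_1)\prod_{i=2}^{n} p(x_i \mid x_{i-1})
  = \frac{\prod_{i=2}^{n} p(x_{i-1}, x_i)}{\prod_{i=2}^{n-1} p(x_i)} ,
```

so specifying the bivariate marginals of adjacent multipole bands fixes the whole banded likelihood.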
Maximum Likelihood and Restricted Likelihood Solutions in Multiple-Method Studies
Rukhin, Andrew L.
2011-01-01
A formulation of the problem of combining data from several sources is discussed in terms of random effects models. The unknown measurement precision is assumed not to be the same for all methods. We investigate maximum likelihood solutions in this model. By representing the likelihood equations as simultaneous polynomial equations, the exact form of the Groebner basis for their stationary points is derived when there are two methods. A parametrization of these solutions which allows their comparison is suggested. A numerical method for solving likelihood equations is outlined, and an alternative to the maximum likelihood method, the restricted maximum likelihood, is studied. In the situation when methods variances are considered to be known an upper bound on the between-method variance is obtained. The relationship between likelihood equations and moment-type equations is also discussed. PMID:26989583
Elemental abundances of cosmic rays with Z ≥ 33 as measured on HEAO-3
NASA Technical Reports Server (NTRS)
Newport, B. J.; Stone, E. C.; Waddington, C. J.; Binns, W. R.; Garrard, T. L.; Israel, M. H.; Klarmann, J.
1985-01-01
The Heavy Nuclei Experiment on the third High Energy Astronomy Observatory (HEAO-3) uses a combination of ion chambers and a Cerenkov counter. During analysis, each particle is assigned two parameters, Z sub c and Z sub i, proportional to the square roots of the Cerenkov and mean ionization signals, respectively. Because the ionization signal is double valued, a unique assignment of particle charge, Z, is not possible in general. Previous work was limited to particles of either high rigidity or low energy, for which a unique charge assignment was possible, although those subsets contain less than 50% of the total number of particles observed. The maximum likelihood technique was used to determine abundances for the complete data set from approximately 1.5 to approximately 80 GeV/amu.
2010-06-01
GMKPF represents a better and more flexible alternative to the Gaussian Maximum Likelihood (GML) and Exponential Maximum Likelihood (EML) estimators for clock offset estimation in non-Gaussian or non-exponential settings, yielding more accurate results relative to GML and EML when the network delays are modeled in terms of a single non-Gaussian/non-exponential distribution or as a mixture of distributions.
MXLKID: a maximum likelihood parameter identifier. [In LRLTRAN for CDC 7600]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gavel, D.T.
MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC 7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables.
F-8C adaptive control law refinement and software development
NASA Technical Reports Server (NTRS)
Hartmann, G. L.; Stein, G.
1981-01-01
An explicit adaptive control algorithm based on maximum likelihood estimation of parameters was designed. To avoid iterative calculations, the algorithm uses parallel channels of Kalman filters operating at fixed locations in parameter space. This algorithm was implemented in NASA/DFRC's Remotely Augmented Vehicle (RAV) facility. Real-time sensor outputs (rate gyro, accelerometer, surface position) are telemetered to a ground computer which sends new gain values to an on-board system. Ground test data and flight records were used to establish design values of noise statistics and to verify the ground-based adaptive software.
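A hedged sketch of the parallel-channel idea (multiple-model adaptive estimation): each channel runs a Kalman filter fixed at one parameter grid point, and the Gaussian log-likelihood of its innovations is accumulated; matrix names are illustrative:

```python
import numpy as np

def mmae_step(filters, log_w, z):
    """One measurement update for a bank of Kalman filters.

    filters : list of dicts with state x, covariance P and fixed model
              matrices F, H, Q, R (one dict per parameter grid point)
    log_w   : (K,) accumulated log-likelihood of each channel
    z       : current measurement vector
    """
    for k, f in enumerate(filters):
        x = f['F'] @ f['x']                       # time update
        P = f['F'] @ f['P'] @ f['F'].T + f['Q']
        v = z - f['H'] @ x                        # innovation
        S = f['H'] @ P @ f['H'].T + f['R']        # innovation covariance
        _, logdet = np.linalg.slogdet(S)
        log_w[k] += -0.5 * (v @ np.linalg.solve(S, v) + logdet)
        K = P @ f['H'].T @ np.linalg.inv(S)       # measurement update
        f['x'] = x + K @ v
        f['P'] = (np.eye(len(x)) - K @ f['H']) @ P
    return log_w  # argmax over channels gives the ML parameter point
```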
NASA Technical Reports Server (NTRS)
Walker, H. F.
1976-01-01
The likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, were considered. These equations suggest certain successive-approximation iterative procedures for obtaining maximum likelihood estimates. The procedures, which are generalized steepest ascent (deflected gradient) procedures, contain those of Hosmer as a special case.
Stamatakis, Alexandros; Ott, Michael
2008-12-27
The continuous accumulation of sequence data, for example, due to novel wet-laboratory techniques such as pyrosequencing, coupled with the increasing popularity of multi-gene phylogenies and emerging multi-core processor architectures that face problems of cache congestion, poses new challenges with respect to the efficient computation of the phylogenetic maximum-likelihood (ML) function. Here, we propose two approaches that can significantly speed up likelihood computations that typically represent over 95 per cent of the computational effort conducted by current ML or Bayesian inference programs. Initially, we present a method and an appropriate data structure to efficiently compute the likelihood score on 'gappy' multi-gene alignments. By 'gappy' we denote sampling-induced gaps owing to missing sequences in individual genes (partitions), i.e. not real alignment gaps. A first proof-of-concept implementation in RAxML indicates that this approach can accelerate inferences on large and gappy alignments by approximately one order of magnitude. Moreover, we present insights and initial performance results on multi-core architectures obtained during the transition from an OpenMP-based to a Pthreads-based fine-grained parallelization of the ML function.
NASA Astrophysics Data System (ADS)
Aab, A.; Abreu, P.; Aglietta, M.; Albuquerque, I. F. M.; Allekotte, I.; Almela, A.; Alvarez Castillo, J.; Alvarez-Muñiz, J.; Anastasi, G. A.; Anchordoqui, L.; Andrada, B.; Andringa, S.; Aramo, C.; Arsene, N.; Asorey, H.; Assis, P.; Avila, G.; Badescu, A. M.; Balaceanu, A.; Barbato, F.; Barreira Luz, R. J.; Beatty, J. J.; Becker, K. H.; Bellido, J. A.; Berat, C.; Bertaina, M. E.; Bertou, X.; Biermann, P. L.; Biteau, J.; Blaess, S. G.; Blanco, A.; Blazek, J.; Bleve, C.; Boháčová, M.; Bonifazi, C.; Borodai, N.; Botti, A. M.; Brack, J.; Brancus, I.; Bretz, T.; Bridgeman, A.; Briechle, F. L.; Buchholz, P.; Bueno, A.; Buitink, S.; Buscemi, M.; Caballero-Mora, K. S.; Caccianiga, L.; Cancio, A.; Canfora, F.; Caruso, R.; Castellina, A.; Catalani, F.; Cataldi, G.; Cazon, L.; Chavez, A. G.; Chinellato, J. A.; Chudoba, J.; Clay, R. W.; Cobos Cerutti, A. C.; Colalillo, R.; Coleman, A.; Collica, L.; Coluccia, M. R.; Conceição, R.; Consolati, G.; Contreras, F.; Cooper, M. J.; Coutu, S.; Covault, C. E.; Cronin, J.; D’Amico, S.; Daniel, B.; Dasso, S.; Daumiller, K.; Dawson, B. R.; de Almeida, R. M.; de Jong, S. J.; De Mauro, G.; de Mello Neto, J. R. T.; De Mitri, I.; de Oliveira, J.; de Souza, V.; Debatin, J.; Deligny, O.; Díaz Castro, M. L.; Diogo, F.; Dobrigkeit, C.; D’Olivo, J. C.; Dorosti, Q.; dos Anjos, R. C.; Dova, M. T.; Dundovic, A.; Ebr, J.; Engel, R.; Erdmann, M.; Erfani, M.; Escobar, C. O.; Espadanal, J.; Etchegoyen, A.; Falcke, H.; Farmer, J.; Farrar, G.; Fauth, A. C.; Fazzini, N.; Fenu, F.; Fick, B.; Figueira, J. M.; Filipčič, A.; Freire, M. M.; Fujii, T.; Fuster, A.; Gaïor, R.; García, B.; Gaté, F.; Gemmeke, H.; Gherghel-Lascu, A.; Ghia, P. L.; Giaccari, U.; Giammarchi, M.; Giller, M.; Głas, D.; Glaser, C.; Golup, G.; Gómez Berisso, M.; Gómez Vitale, P. F.; González, N.; Gorgi, A.; Grillo, A. F.; Grubb, T. D.; Guarino, F.; Guedes, G. P.; Halliday, R.; Hampel, M. R.; Hansen, P.; Harari, D.; Harrison, T. A.; Haungs, A.; Hebbeker, T.; Heck, D.; Heimann, P.; Herve, A. E.; Hill, G. C.; Hojvat, C.; Holt, E.; Homola, P.; Hörandel, J. R.; Horvath, P.; Hrabovský, M.; Huege, T.; Hulsman, J.; Insolia, A.; Isar, P. G.; Jandt, I.; Johnsen, J. A.; Josebachuili, M.; Jurysek, J.; Kääpä, A.; Kambeitz, O.; Kampert, K. H.; Keilhauer, B.; Kemmerich, N.; Kemp, E.; Kemp, J.; Kieckhafer, R. M.; Klages, H. O.; Kleifges, M.; Kleinfeller, J.; Krause, R.; Krohm, N.; Kuempel, D.; Kukec Mezek, G.; Kunka, N.; Kuotb Awad, A.; Lago, B. L.; LaHurd, D.; Lang, R. G.; Lauscher, M.; Legumina, R.; Leigui de Oliveira, M. A.; Letessier-Selvon, A.; Lhenry-Yvon, I.; Link, K.; Lo Presti, D.; Lopes, L.; López, R.; López Casado, A.; Lorek, R.; Luce, Q.; Lucero, A.; Malacari, M.; Mallamaci, M.; Mandat, D.; Mantsch, P.; Mariazzi, A. G.; Mariş, I. C.; Marsella, G.; Martello, D.; Martinez, H.; Martínez Bravo, O.; Masías Meza, J. J.; Mathes, H. J.; Mathys, S.; Matthews, J.; Matthiae, G.; Mayotte, E.; Mazur, P. O.; Medina, C.; Medina-Tanco, G.; Melo, D.; Menshikov, A.; Merenda, K.-D.; Michal, S.; Micheletti, M. I.; Middendorf, L.; Miramonti, L.; Mitrica, B.; Mockler, D.; Mollerach, S.; Montanet, F.; Morello, C.; Morlino, G.; Mostafá, M.; Müller, A. L.; Müller, G.; Muller, M. A.; Müller, S.; Mussa, R.; Naranjo, I.; Nellen, L.; Nguyen, P. H.; Niculescu-Oglinzanu, M.; Niechciol, M.; Niemietz, L.; Niggemann, T.; Nitz, D.; Nosek, D.; Novotny, V.; Nožka, L.; Núñez, L. 
A.; Oikonomou, F.; Olinto, A.; Palatka, M.; Pallotta, J.; Papenbreer, P.; Parente, G.; Parra, A.; Paul, T.; Pech, M.; Pedreira, F.; Pȩkala, J.; Pelayo, R.; Peña-Rodriguez, J.; Pereira, L. A. S.; Perlin, M.; Perrone, L.; Peters, C.; Petrera, S.; Phuntsok, J.; Pierog, T.; Pimenta, M.; Pirronello, V.; Platino, M.; Plum, M.; Poh, J.; Porowski, C.; Prado, R. R.; Privitera, P.; Prouza, M.; Quel, E. J.; Querchfeld, S.; Quinn, S.; Ramos-Pollan, R.; Rautenberg, J.; Ravignani, D.; Ridky, J.; Riehn, F.; Risse, M.; Ristori, P.; Rizi, V.; Rodrigues de Carvalho, W.; Rodriguez Fernandez, G.; Rodriguez Rojo, J.; Roncoroni, M. J.; Roth, M.; Roulet, E.; Rovero, A. C.; Ruehl, P.; Saffi, S. J.; Saftoiu, A.; Salamida, F.; Salazar, H.; Saleh, A.; Salina, G.; Sánchez, F.; Sanchez-Lucas, P.; Santos, E. M.; Santos, E.; Sarazin, F.; Sarmento, R.; Sarmiento-Cano, C.; Sato, R.; Schauer, M.; Scherini, V.; Schieler, H.; Schimp, M.; Schmidt, D.; Scholten, O.; Schovánek, P.; Schröder, F. G.; Schröder, S.; Schulz, A.; Schumacher, J.; Sciutto, S. J.; Segreto, A.; Shadkam, A.; Shellard, R. C.; Sigl, G.; Silli, G.; Šmída, R.; Snow, G. R.; Sommers, P.; Sonntag, S.; Soriano, J. F.; Squartini, R.; Stanca, D.; Stanič, S.; Stasielak, J.; Stassi, P.; Stolpovskiy, M.; Strafella, F.; Streich, A.; Suarez, F.; Suarez Durán, M.; Sudholz, T.; Suomijärvi, T.; Supanitsky, A. D.; Šupík, J.; Swain, J.; Szadkowski, Z.; Taboada, A.; Taborda, O. A.; Theodoro, V. M.; Timmermans, C.; Todero Peixoto, C. J.; Tomankova, L.; Tomé, B.; Torralba Elipe, G.; Travnicek, P.; Trini, M.; Ulrich, R.; Unger, M.; Urban, M.; Valdés Galicia, J. F.; Valiño, I.; Valore, L.; van Aar, G.; van Bodegom, P.; van den Berg, A. M.; van Vliet, A.; Varela, E.; Vargas Cárdenas, B.; Vázquez, R. A.; Veberič, D.; Ventura, C.; Vergara Quispe, I. D.; Verzi, V.; Vicha, J.; Villaseñor, L.; Vorobiov, S.; Wahlberg, H.; Wainberg, O.; Walz, D.; Watson, A. A.; Weber, M.; Weindl, A.; Wiedeński, M.; Wiencke, L.; Wilczyński, H.; Wirtz, M.; Wittkowski, D.; Wundheiler, B.; Yang, L.; Yushkov, A.; Zas, E.; Zavrtanik, D.; Zavrtanik, M.; Zepeda, A.; Zimmermann, B.; Ziolkowski, M.; Zong, Z.; Zuccarello, F.; The Pierre Auger Collaboration
2018-02-01
A new analysis of the data set from the Pierre Auger Observatory provides evidence for anisotropy in the arrival directions of ultra-high-energy cosmic rays on an intermediate angular scale, which is indicative of excess arrivals from strong, nearby sources. The data consist of 5514 events above 20 EeV with zenith angles up to 80° recorded before 2017 April 30. Sky models have been created for two distinct populations of extragalactic gamma-ray emitters: active galactic nuclei from the second catalog of hard Fermi-LAT sources (2FHL) and starburst galaxies from a sample that was examined with Fermi-LAT. Flux-limited samples, which include all types of galaxies from the Swift-BAT and 2MASS surveys, have been investigated for comparison. The sky model of cosmic-ray density constructed using each catalog has two free parameters: the fraction of events correlating with astrophysical objects, and an angular scale characterizing the clustering of cosmic rays around extragalactic sources. A maximum-likelihood ratio test is used to evaluate the best values of these parameters and to quantify the strength of each model by contrast with isotropy. It is found that the starburst model fits the data better than the hypothesis of isotropy with a statistical significance of 4.0σ, the highest value of the test statistic being for energies above 39 EeV. The three alternative models are favored against isotropy with 2.7σ-3.2σ significance. The origin of the indicated deviation from isotropy is examined and prospects for more sensitive future studies are discussed.
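The likelihood-ratio machinery is compact to sketch: each event's direction is modeled as a mixture of a source-correlated density and the isotropic exposure, and the test statistic compares the maximized mixture to isotropy (a schematic scan, not the Auger implementation):

```python
import numpy as np

def ts_scan(src_density, iso_density, f_grid, theta_grid):
    """TS = 2 ln(L_max / L_iso) for a two-parameter sky-model scan.

    src_density : callable theta -> (n_events,) model density at each
                  event direction for clustering scale theta (assumed
                  already convolved with the detector exposure)
    iso_density : (n_events,) isotropic, exposure-only density
    """
    ln_l_iso = np.log(iso_density).sum()
    best = -np.inf
    for theta in theta_grid:
        s = src_density(theta)
        for f in f_grid:  # f = fraction of events correlating with sources
            best = max(best, np.log(f * s + (1.0 - f) * iso_density).sum())
    return 2.0 * (best - ln_l_iso)
```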
Finite mixture model: A maximum likelihood estimation approach on time series data
NASA Astrophysics Data System (ADS)
Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad
2014-09-01
Recently, statisticians have emphasized fitting finite mixture models by maximum likelihood estimation because of its asymptotic properties: the estimator is consistent as the sample size increases to infinity, and it is asymptotically unbiased. Moreover, the parameter estimates obtained by maximum likelihood estimation have the smallest variance compared with other statistical methods as the sample size increases. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative relationship between rubber price and exchange rate for all selected countries.
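As one concrete route to such a fit, the sketch below runs the standard EM iteration for a two-component normal mixture on synthetic data. It is a minimal illustration of maximum likelihood fitting of a mixture, not the authors' implementation.

```python
# Minimal EM sketch for maximum likelihood fitting of a two-component
# normal mixture (synthetic data, crude quantile-based initialization).
import numpy as np

def em_two_component(x, n_iter=200):
    w, mu1, mu2 = 0.5, np.quantile(x, 0.25), np.quantile(x, 0.75)
    s1 = s2 = x.std()
    for _ in range(n_iter):
        # E-step: posterior responsibility of component 1 for each point
        p1 = w * np.exp(-0.5 * ((x - mu1) / s1) ** 2) / s1
        p2 = (1 - w) * np.exp(-0.5 * ((x - mu2) / s2) ** 2) / s2
        r = p1 / (p1 + p2)
        # M-step: weighted ML updates of the mixture parameters
        w = r.mean()
        mu1, mu2 = np.average(x, weights=r), np.average(x, weights=1 - r)
        s1 = np.sqrt(np.average((x - mu1) ** 2, weights=r))
        s2 = np.sqrt(np.average((x - mu2) ** 2, weights=1 - r))
    return w, (mu1, s1), (mu2, s2)

x = np.concatenate([np.random.normal(-1, 0.5, 400), np.random.normal(2, 1.0, 600)])
print(em_two_component(x))
```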
Determining the accuracy of maximum likelihood parameter estimates with colored residuals
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Klein, Vladislav
1994-01-01
An important part of building high fidelity mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of the accuracy of parameter estimates, the estimates themselves have limited value. In this work, an expression based on theoretical analysis was developed to properly compute parameter accuracy measures for maximum likelihood estimates with colored residuals. This result is important because experience from the analysis of measured data reveals that the residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Simulated data runs were used to show that the parameter accuracy measures computed with this technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for analysis of the output residuals in the frequency domain or heuristically determined multiplication factors. The result is general, although the application studied here is maximum likelihood estimation of aerodynamic model parameters from flight test data.
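The flavor of such a correction can be seen in a linear-in-parameters setting: the sketch below contrasts the naive white-residual parameter covariance with a sandwich estimate built from the empirical residual autocovariance. This is only an analogue of the idea under simplifying assumptions, not the paper's output-error formulation.

```python
# Hedged sketch: when residuals are colored, the usual covariance
# sigma^2 (X'X)^-1 understates parameter uncertainty; a sandwich estimate
# built from the residual autocovariance corrects for this.
import numpy as np

def param_covariances(X, residuals):
    n = len(residuals)
    # empirical autocovariance of the residuals at all lags (biased estimator)
    acov = np.array([residuals[:n - k] @ residuals[k:] / n for k in range(n)])
    R = acov[np.abs(np.subtract.outer(np.arange(n), np.arange(n)))]  # Toeplitz matrix
    XtX_inv = np.linalg.inv(X.T @ X)
    naive = acov[0] * XtX_inv                       # assumes white residuals
    sandwich = XtX_inv @ X.T @ R @ X @ XtX_inv      # accounts for coloring
    return naive, sandwich

t = np.arange(200.0)
X = np.column_stack([np.ones_like(t), t])
colored = np.convolve(np.random.randn(219), np.ones(20) / 20, mode="valid")  # low-pass noise
naive, sandwich = param_covariances(X, colored)
print(np.sqrt(np.diag(naive)), np.sqrt(np.diag(sandwich)))  # sandwich errors are larger
```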
Statistical Properties of Maximum Likelihood Estimators of Power Law Spectra Information
NASA Technical Reports Server (NTRS)
Howell, L. W.
2002-01-01
A simple power law model consisting of a single spectral index, α₁, is believed to be an adequate description of the galactic cosmic-ray (GCR) proton flux at energies below 10^13 eV, with a transition at the knee energy, E_k, to a steeper spectral index α₂ > α₁ above E_k. The maximum likelihood (ML) procedure was developed for estimating the single parameter α₁ of a simple power law energy spectrum and generalized to estimate the three spectral parameters of the broken power law energy spectrum from simulated detector responses and real cosmic-ray data. The statistical properties of the ML estimator were investigated and shown to have the three desirable properties: (P1) consistency (asymptotically unbiased), (P2) efficiency (asymptotically attains the Cramer-Rao minimum variance bound), and (P3) asymptotic normality, under a wide range of potential detector response functions. Attainment of these properties necessarily implies that the ML estimation procedure provides the best unbiased estimator possible. While simulation studies can easily determine if a given estimation procedure provides an unbiased estimate of the spectral information, and whether or not the estimator is approximately normally distributed, attainment of the Cramer-Rao bound (CRB) can only be ascertained by calculating the CRB for an assumed energy spectrum-detector response function combination, which can be quite formidable in practice. However, the effort in calculating the CRB is very worthwhile because it provides the necessary means to compare the efficiency of competing estimation techniques and, furthermore, provides a stopping rule in the search for the best unbiased estimator. Consequently, the CRBs for both the simple and broken power law energy spectra are derived herein and the conditions under which they are attained in practice are investigated. The ML technique is then extended to estimate spectral information from an arbitrary number of astrophysics data sets produced by vastly different science instruments. This theory and its successful implementation will facilitate the interpretation of spectral information from multiple astrophysics missions and thereby permit the derivation of superior spectral parameter estimates based on the combination of data sets.
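For the single-index case with no detector response, the ML estimator and its Cramer-Rao bound have well-known closed forms; the sketch below illustrates them on synthetic events. The paper's treatment of detector response functions and the broken power law is far more general.

```python
# ML estimate of a power-law index for an untruncated spectrum dN/dE ~ E^-alpha
# above E_min, together with its asymptotic Cramer-Rao standard deviation.
import numpy as np

def powerlaw_mle(E, E_min):
    n = len(E)
    alpha_hat = 1.0 + n / np.sum(np.log(E / E_min))   # closed-form ML estimate
    sigma_crb = (alpha_hat - 1.0) / np.sqrt(n)        # asymptotic CRB std. dev.
    return alpha_hat, sigma_crb

# draw from a power law with alpha = 2.7 via inverse-CDF sampling
rng = np.random.default_rng(1)
E = 1e12 * (1 - rng.random(10000)) ** (-1 / (2.7 - 1))
print(powerlaw_mle(E, 1e12))
```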
Spacecraft Charging and the Microwave Anisotropy Probe Spacecraft
NASA Technical Reports Server (NTRS)
Timothy, VanSant J.; Neergaard, Linda F.
1998-01-01
The Microwave Anisotropy Probe (MAP), a MIDEX mission built in partnership between Princeton University and the NASA Goddard Space Flight Center (GSFC), will study the cosmic microwave background. It will be inserted into a highly elliptical earth orbit for several weeks and then use a lunar gravity assist to orbit around the second Lagrangian point (L2), 1.5 million kilometers anti-sunward from the earth. The charging environment for the phasing loops and at L2 was evaluated. There is a limited set of data for L2; the GEOTAIL spacecraft measured relatively low spacecraft potentials (approx. 50 V maximum) near L2. The main area of concern for charging on the MAP spacecraft is the well-established threat posed by the "geosynchronous region" between 6-10 Re. The launch in the autumn of 2000 will coincide with the declining phase of the solar maximum, a period when the likelihood of a substorm is higher than usual. The likelihood of a substorm at that time has been roughly estimated to be on the order of 20% for a typical MAP mission profile. Because of the possibility of spacecraft charging, a requirement for conductive spacecraft surfaces was established early in the program. Subsequent NASCAP/GEO analyses for the MAP spacecraft demonstrated that a significant portion of the sunlit surface (solar cell cover glass and sunshade) could have nonconductive surfaces without significantly raising differential charging. The need for conductive materials on surfaces continually in eclipse has also been reinforced by NASCAP analyses.
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1975-01-01
A general iterative procedure is given for determining the consistent maximum likelihood estimates of normal distributions. In addition, a local maximum of the log-likelihood function, Newton's method, a method of scoring, and modifications of these procedures are discussed.
Galactic Cosmic-Ray Anistropy During the Forbush Decrease Starting 2013 April 13
NASA Astrophysics Data System (ADS)
Tortermpun, U.; Ruffolo, D.; Bieber, J. W.
2018-01-01
The flux of Galactic cosmic rays (GCRs) can undergo a Forbush decrease (FD) during the passage of a shock, sheath region, or magnetic flux rope associated with a coronal mass ejection (CME). Cosmic-ray observations during FDs can provide information complementary to in situ observations of the local plasma and magnetic field, because cosmic-ray distributions allow remote sensing of distant conditions. Here we develop techniques to determine the GCR anisotropy before and during an FD using data from the worldwide network of neutron monitors, for a case study of the FD starting on 2013 April 13. We find that at times with strong magnetic fluctuations and strong cosmic-ray scattering, there were spikes of high perpendicular anisotropy and weak parallel anisotropy. In contrast, within the CME flux rope there was a strong parallel anisotropy in the direction predicted from a theory of drift motions into one leg of the magnetic flux rope and out the other, confirming that the anisotropy can remotely sense a large-scale flow of GCRs through a magnetic flux structure.
Lu, Jianing; Li, Xiang; Fu, Songnian; Luo, Ming; Xiang, Meng; Zhou, Huibin; Tang, Ming; Liu, Deming
2017-03-06
We present a dual-polarization complex-weighted, decision-aided, maximum-likelihood algorithm with superscalar parallelization (SSP-DP-CW-DA-ML) for joint carrier phase and frequency-offset estimation (FOE) in coherent optical receivers. By pre-compensation of the phase offset between signals in the two polarizations, the performance can be substantially improved. Meanwhile, with the help of a modified SSP-based parallel implementation, the acquisition time of the FO and the required number of training symbols are reduced by transferring the complex weights of the filters between adjacent buffers, and differential coding/decoding is not required. Simulation results show that the laser linewidth tolerance of our proposed algorithm is comparable to traditional blind phase search (BPS), while a complete FOE range of ± symbol rate/2 can be achieved. Finally, the performance of our proposed algorithm is experimentally verified under the scenario of back-to-back (B2B) transmission using 10 Gbaud DP-16/32-QAM formats.
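In a single-polarization toy setting, the decision-aided ML idea reduces to aligning a block of received samples against the decided symbols; the sketch below shows only that core step, omitting the complex weighting, dual-polarization pre-compensation, and superscalar parallelization of the paper.

```python
# Hedged single-polarization sketch of decision-aided ML phase estimation:
# with decided symbols d_k, the block ML phase estimate maximizes
# Re{ e^{-j*phi} * sum r_k d_k* }, i.e. phi_hat = arg(sum r_k d_k*).
import numpy as np

rng = np.random.default_rng(2)
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, 1024)))
r = qpsk * np.exp(1j * 0.3) + 0.05 * (rng.standard_normal(1024)
                                      + 1j * rng.standard_normal(1024))

d = qpsk                                    # assume correct decisions for the sketch
phi_hat = np.angle(np.sum(r * np.conj(d)))  # DA-ML phase estimate over the block
print(phi_hat)                              # ~0.3 rad
```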
Extending the BEAGLE library to a multi-FPGA platform.
Jin, Zheming; Bakos, Jason D
2013-01-19
Maximum Likelihood (ML)-based phylogenetic inference using Felsenstein's pruning algorithm is a standard method for estimating the evolutionary relationships amongst a set of species based on DNA sequence data, and is used in popular applications such as RAxML, PHYLIP, GARLI, BEAST, and MrBayes. The Phylogenetic Likelihood Function (PLF) and its associated scaling and normalization steps comprise the computational kernel for these tools. These computations are data intensive but contain fine grain parallelism that can be exploited by coprocessor architectures such as FPGAs and GPUs. A general purpose API called BEAGLE has recently been developed that includes optimized implementations of Felsenstein's pruning algorithm for various data parallel architectures. In this paper, we extend the BEAGLE API to a multiple Field Programmable Gate Array (FPGA)-based platform called the Convey HC-1. The core calculation of our implementation, which includes both the phylogenetic likelihood function (PLF) and the tree likelihood calculation, has an arithmetic intensity of 130 floating-point operations per 64 bytes of I/O, or 2.03 ops/byte. Its performance can thus be calculated as a function of the host platform's peak memory bandwidth and the implementation's memory efficiency, as 2.03 × peak bandwidth × memory efficiency. Our FPGA-based platform has a peak bandwidth of 76.8 GB/s and our implementation achieves a memory efficiency of approximately 50%, which gives an average throughput of 78 Gflops. This represents a ~40X speedup when compared with BEAGLE's CPU implementation on a dual Xeon 5520 and 3X speedup versus BEAGLE's GPU implementation on a Tesla T10 GPU for very large data sizes. The power consumption is 92 W, yielding a power efficiency of 1.7 Gflops per Watt. The use of data parallel architectures to achieve high performance for likelihood-based phylogenetic inference requires high memory bandwidth and a design methodology that emphasizes high memory efficiency. To achieve this objective, we integrated 32 pipelined processing elements (PEs) across four FPGAs. For the design of each PE, we developed a specialized synthesis tool to generate a floating-point pipeline with resource and throughput constraints to match the target platform. We have found that using low-latency floating-point operators can significantly reduce FPGA area and still meet timing requirement on the target platform. We found that this design methodology can achieve performance that exceeds that of a GPU-based coprocessor.
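The quoted throughput follows directly from the roofline-style relation stated in the abstract, as this short check confirms:

```python
# performance = arithmetic intensity x peak bandwidth x memory efficiency
ops_per_byte = 130 / 64      # 2.03 flops per byte of I/O
peak_bw = 76.8               # GB/s, Convey HC-1 peak memory bandwidth
efficiency = 0.50            # achieved memory efficiency
print(ops_per_byte * peak_bw * efficiency, "Gflops")   # ~78
```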
A Comparison of a Bayesian and a Maximum Likelihood Tailored Testing Procedure.
ERIC Educational Resources Information Center
McKinley, Robert L.; Reckase, Mark D.
A study was conducted to compare tailored testing procedures based on a Bayesian ability estimation technique and on a maximum likelihood ability estimation technique. The Bayesian tailored testing procedure selected items so as to minimize the posterior variance of the ability estimate distribution, while the maximum likelihood tailored testing…
Maximum likelihood solution for inclination-only data in paleomagnetism
NASA Astrophysics Data System (ADS)
Arason, P.; Levi, S.
2010-08-01
We have developed a new robust maximum likelihood method for estimating the unbiased mean inclination from inclination-only data. In paleomagnetic analysis, the arithmetic mean of inclination-only data is known to introduce a shallowing bias. Several methods have been introduced to estimate the unbiased mean inclination of inclination-only data together with measures of the dispersion. Some inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all the methods require various assumptions and approximations that are often inappropriate. For some steep and dispersed data sets, these methods provide estimates that are significantly displaced from the peak of the likelihood function to systematically shallower inclination. The problem of locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest, because some elements of the likelihood function increase exponentially as precision parameters increase, leading to numerical instabilities. In this study, we succeeded in analytically cancelling exponential elements from the log-likelihood function, and we are now able to calculate its value anywhere in the parameter space and for any inclination-only data set. Furthermore, we can now calculate the partial derivatives of the log-likelihood function with desired accuracy, and locate the maximum likelihood without the assumptions required by previous methods. To assess the reliability and accuracy of our method, we generated large numbers of random Fisher-distributed data sets, for which we calculated mean inclinations and precision parameters. The comparisons show that our new robust Arason-Levi maximum likelihood method is the most reliable, and the mean inclination estimates are the least biased towards shallow values.
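The numerical difficulty described here, exponential terms overflowing before they cancel, has a generic remedy: work in log space with exponentially scaled special functions. The sketch below applies that idea to the circular (von Mises) analogue of a Fisher likelihood; it illustrates the stabilization trick only, not the Arason-Levi marginal-Fisher derivation.

```python
# Stable log-likelihood for tightly clustered directional data: scipy's
# exponentially scaled Bessel function ive avoids the overflow of I0(kappa),
# since log I0(kappa) = kappa + log(ive(0, kappa)) stays finite.
import numpy as np
from scipy.special import ive

def vonmises_loglik(theta, mu, kappa):
    n = len(theta)
    log_I0 = kappa + np.log(ive(0, kappa))         # stable even for kappa ~ 1e4
    return kappa * np.cos(theta - mu).sum() - n * (np.log(2 * np.pi) + log_I0)

theta = np.random.vonmises(0.1, 800, size=500)     # steep, tightly clustered data
print(vonmises_loglik(theta, 0.1, 800.0))          # finite; naive I0 would overflow
```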
Design of neural networks for classification of remotely sensed imagery
NASA Technical Reports Server (NTRS)
Chettri, Samir R.; Cromp, Robert F.; Birmingham, Mark
1992-01-01
Classification accuracies of a backpropagation neural network are discussed and compared with a maximum likelihood classifier (MLC) with multivariate normal class models. We have found that, because of its nonparametric nature, the neural network outperforms the MLC in this area. In addition, we discuss techniques for constructing optimal neural nets on parallel hardware like the MasPar MP-1 currently at GSFC. Other important discussions are centered around training and classification times of the two methods, and sensitivity to the training data. Finally, we discuss future work in the area of classification and neural nets.
The recursive maximum likelihood proportion estimator: User's guide and test results
NASA Technical Reports Server (NTRS)
Vanrooy, D. L.
1976-01-01
Implementation of the recursive maximum likelihood proportion estimator is described. A user's guide to programs as they currently exist on the IBM 360/67 at LARS, Purdue is included, and test results on LANDSAT data are described. On Hill County data, the algorithm yields results comparable to the standard maximum likelihood proportion estimator.
New applications of maximum likelihood and Bayesian statistics in macromolecular crystallography.
McCoy, Airlie J
2002-10-01
Maximum likelihood methods are well known to macromolecular crystallographers as the methods of choice for isomorphous phasing and structure refinement. Recently, the use of maximum likelihood and Bayesian statistics has extended to the areas of molecular replacement and density modification, placing these methods on a stronger statistical foundation and making them more accurate and effective.
Fitting cosmic microwave background data with cosmic strings and inflation.
Bevis, Neil; Hindmarsh, Mark; Kunz, Martin; Urrestilla, Jon
2008-01-18
We perform a multiparameter likelihood analysis to compare measurements of the cosmic microwave background (CMB) power spectra with predictions from models involving cosmic strings. Adding strings to the standard case of a primordial spectrum with power-law tilt n_s, we find a 2σ detection of strings: f_10 = 0.11 ± 0.05, where f_10 is the fractional contribution made by strings to the temperature power spectrum (at l = 10). CMB data give moderate preference to the model n_s = 1 with cosmic strings over the standard zero-strings model with variable tilt. When additional non-CMB data are incorporated, the two models become on a par. With variable n_s and these extra data, we find that f_10 < 0.11, which corresponds to Gμ < 0.7×10^(-6) (where μ is the string tension and G is the gravitational constant).
On the existence of maximum likelihood estimates for presence-only data
Hefley, Trevor J.; Hooten, Mevin B.
2015-01-01
It is important to identify conditions for which maximum likelihood estimates are unlikely to be identifiable from presence-only data. In data sets where the maximum likelihood estimates do not exist, penalized likelihood and Bayesian methods will produce coefficient estimates, but these are sensitive to the choice of estimation procedure and prior or penalty term. When sample size is small or it is thought that habitat preferences are strong, we propose a suite of estimation procedures researchers can consider using.
Long-Term Solar and Cosmic Radiation Data Bases
1991-01-01
... determine the magnitude of the variations in the cosmic ray intensity caused by solar activity. Neutron monitors, with their much lower energy threshold ... expression that neutron monitors are sensors on "spacecraft Earth." Here we will consider cosmic ray detectors to measure two components of cosmic ... A comparison with the solar cycle, as illustrated by the sunspot number in Fig. 1, shows that the maximum cosmic ray intensity occurs near sunspot ...
NASA Technical Reports Server (NTRS)
Walker, H. F.
1976-01-01
Likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, are considered. These equations suggest certain successive-approximations iterative procedures for obtaining maximum-likelihood estimates. These are generalized steepest-ascent (deflected-gradient) procedures. It is shown that, with probability 1 as N_0 approaches infinity (regardless of the relative sizes of N_0 and N_i, i = 1, ..., m), these procedures converge locally to the strongly consistent maximum-likelihood estimates whenever the step size lies between 0 and 2. Furthermore, the value of the step size which yields optimal local convergence rates is bounded from below by a number which always lies between 1 and 2.
Computation of nonparametric convex hazard estimators via profile methods.
Jankowski, Hanna K; Wellner, Jon A
2009-05-01
This paper proposes a profile likelihood algorithm to compute the nonparametric maximum likelihood estimator of a convex hazard function. The maximisation is performed in two steps: First the support reduction algorithm is used to maximise the likelihood over all hazard functions with a given point of minimum (or antimode). Then it is shown that the profile (or partially maximised) likelihood is quasi-concave as a function of the antimode, so that a bisection algorithm can be applied to find the maximum of the profile likelihood, and hence also the global maximum. The new algorithm is illustrated using both artificial and real data, including lifetime data for Canadian males and females.
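The outer step can be sketched generically: once the profile likelihood is quasi-concave in the antimode, a bracketing search finds its global maximum. The golden-section search below is a close cousin of the paper's bisection; the inner support-reduction maximization is replaced by a placeholder profile function.

```python
# Golden-section search for the maximum of a quasi-concave profile likelihood.
# profile() stands in for the inner support-reduction maximization.
import numpy as np

def golden_section_max(profile, lo, hi, tol=1e-6):
    g = (np.sqrt(5) - 1) / 2
    a, b = lo + (1 - g) * (hi - lo), lo + g * (hi - lo)
    fa, fb = profile(a), profile(b)
    while hi - lo > tol:
        if fa < fb:          # maximum lies to the right of a
            lo, a, fa = a, b, fb
            b = lo + g * (hi - lo)
            fb = profile(b)
        else:                # maximum lies to the left of b
            hi, b, fb = b, a, fa
            a = lo + (1 - g) * (hi - lo)
            fa = profile(a)
    return 0.5 * (lo + hi)

print(golden_section_max(lambda m: -(m - 2.3) ** 2, 0.0, 10.0))  # ~2.3
```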
NASA Astrophysics Data System (ADS)
Beck, Melanie; Scarlata, Claudia; Fortson, Lucy; Willett, Kyle; Galloway, Melanie
2016-01-01
It is well known that the mass-size distribution evolves as a function of cosmic time and that this evolution is different between passive and star-forming galaxy populations. However, the devil is in the details and the precise evolution is still a matter of debate, since this requires careful comparison between similar galaxy populations over cosmic time while simultaneously taking into account changes in image resolution, rest-frame wavelength, and surface brightness dimming, in addition to properly selecting representative morphological samples. Here we present the first step in an ambitious undertaking to calculate the bivariate mass-size distribution as a function of time and morphology. We begin with a large sample (~3 × 10^5) of SDSS galaxies at z ~ 0.1. Morphologies for this sample have been determined by Galaxy Zoo crowdsourced visual classifications, and we split the sample not only into disk- and bulge-dominated galaxies but also into finer morphology bins such as bulge strength. Bivariate distribution functions are the only way to properly account for biases and selection effects. In particular, we quantify the mass-size distribution with a version of the parametric maximum likelihood estimator which has been modified to account for measurement errors as well as upper limits on galaxy sizes.
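A minimal version of such a modified ML estimator, assuming Gaussian scatter, folds measurement errors into the model width in quadrature and lets upper limits contribute through the CDF, as in standard censored-data likelihoods. The sketch below is illustrative only; the estimator of the paper is richer.

```python
# Censored ML fit: detections contribute a pdf term (intrinsic scatter plus
# measurement error in quadrature), upper limits contribute a cdf term.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_loglik(params, y, yerr, is_limit):
    mu, sigma = params[0], abs(params[1])
    s = np.sqrt(sigma**2 + yerr**2)                  # intrinsic + measurement scatter
    det = norm.logpdf(y[~is_limit], mu, s[~is_limit])
    lim = norm.logcdf(y[is_limit], mu, s[is_limit])  # true value lies below the limit
    return -(det.sum() + lim.sum())

rng = np.random.default_rng(3)
y = rng.normal(1.0, 0.4, 300)
yerr = np.full(300, 0.1)
is_limit = rng.random(300) < 0.2                     # flag 20% as upper limits
res = minimize(neg_loglik, x0=[0.0, 1.0], args=(y, yerr, is_limit))
print(res.x)                                         # estimated (mu, sigma)
```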
The Atacama Cosmology Telescope: Likelihood for Small-Scale CMB Data
NASA Technical Reports Server (NTRS)
Dunkley, J.; Calabrese, E.; Sievers, J.; Addison, G. E.; Battaglia, N.; Battistelli, E. S.; Bond, J. R.; Das, S.; Devlin, M. J.; Dunner, R.;
2013-01-01
The Atacama Cosmology Telescope has measured the angular power spectra of microwave fluctuations to arcminute scales at frequencies of 148 and 218 GHz, from three seasons of data. At small scales the fluctuations in the primordial Cosmic Microwave Background (CMB) become increasingly obscured by extragalactic foregrounds and secondary CMB signals. We present results from a nine-parameter model describing these secondary effects, including the thermal and kinematic Sunyaev-Zel'dovich (tSZ and kSZ) power; the clustered and Poisson-like power from Cosmic Infrared Background (CIB) sources, and their frequency scaling; the tSZ-CIB correlation coefficient; the extragalactic radio source power; and thermal dust emission from Galactic cirrus in two different regions of the sky. In order to extract cosmological parameters, we describe a likelihood function for the ACT data, fitting this model to the multi-frequency spectra in the multipole range 500 < l < 10000. We extend the likelihood to include spectra from the South Pole Telescope at frequencies of 95, 150, and 220 GHz. Accounting for different radio source levels and Galactic cirrus emission, the same model provides an excellent fit to both datasets simultaneously, with χ²/dof = 675/697 for ACT, and 96/107 for SPT. We then use the multi-frequency likelihood to estimate the CMB power spectrum from ACT in bandpowers, marginalizing over the secondary parameters. This provides a simplified 'CMB-only' likelihood in the range 500 < l < 3500 for use in cosmological parameter estimation.
The local time dependence of the anisotropic solar cosmic ray flux.
Smart, D F; Shea, M A
2003-01-01
The distribution of the solar cosmic radiation flux over the earth is not uniform, but the result of complex phenomena involving the interplanetary magnetic field, the geomagnetic field, and the latitude and longitude of locations on the earth. The latitude effect relates to the geomagnetic shield; the longitude effect relates to local time. For anisotropic solar cosmic ray events the maximum particle flux is always along the interplanetary magnetic field direction, sometimes called the Archimedean spiral path from the sun to the earth. During an anisotropic solar cosmic ray event, the locations on the earth viewing "sunward" into the interplanetary magnetic field direction will observe the largest flux (when adjustments are made for the magnetic latitude effect). To relate this phenomenon to aircraft routes: for anisotropic solar cosmic ray events that occur during "normal quiescent" conditions, the maximum solar cosmic ray flux (and corresponding solar particle radiation dose) will be observed in the dawn quadrant, ideally at about 06 hours local time. Published by Elsevier Ltd on behalf of COSPAR.
A maximum likelihood map of chromosome 1.
Rao, D C; Keats, B J; Lalouel, J M; Morton, N E; Yee, S
1979-01-01
Thirteen loci are mapped on chromosome 1 from genetic evidence. The maximum likelihood map presented permits confirmation that Scianna (SC) and a fourteenth locus, phenylketonuria (PKU), are on chromosome 1, although the location of the latter on the PGM1-AMY segment is uncertain. Eight other controversial genetic assignments are rejected, providing a practical demonstration of the resolution which maximum likelihood theory brings to mapping. PMID:293128
ERIC Educational Resources Information Center
Mahmud, Jumailiyah; Sutikno, Muzayanah; Naga, Dali S.
2016-01-01
The aim of this study is to determine the variance difference between maximum likelihood and expected a posteriori estimation methods viewed from the number of test items of an aptitude test. The variance represents the accuracy achieved by both the maximum likelihood and Bayes estimation methods. The test consists of three subtests, each with 40 multiple-choice…
Maximum likelihood estimation of signal-to-noise ratio and combiner weight
NASA Technical Reports Server (NTRS)
Kalson, S.; Dolinar, S. J.
1986-01-01
An algorithm for estimating the signal-to-noise ratio and combiner weight parameters for a discrete time series is presented. The algorithm is based upon the joint maximum likelihood estimate of the signal and noise power. The discrete-time series are the sufficient statistics obtained after matched filtering of a biphase modulated signal in additive white Gaussian noise, before maximum likelihood decoding is performed.
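In the decision-directed regime, with symbol decisions taken as correct, the joint ML estimates have a simple closed form, sketched below; the algorithm of the paper addresses the general setting before maximum likelihood decoding.

```python
# Joint ML estimate of signal amplitude and noise power for a biphase
# (antipodal) signal in AWGN, assuming the decided symbols d_k are correct.
import numpy as np

rng = np.random.default_rng(4)
d = rng.choice([-1.0, 1.0], 4096)               # biphase symbols
y = 0.8 * d + 0.5 * rng.standard_normal(4096)   # matched-filter outputs

s_hat = np.mean(y * d)                          # ML signal amplitude estimate
n_hat = np.mean((y - s_hat * d) ** 2)           # ML noise power estimate
print(10 * np.log10(s_hat**2 / n_hat), "dB SNR; true:",
      10 * np.log10(0.8**2 / 0.25), "dB")
```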
Changren Weng; Thomas L. Kubisiak; C. Dana Nelson; James P. Geaghan; Michael Stine
1999-01-01
Single marker regression and single marker maximum likelihood estimation were used to detect quantitative trait loci (QTLs) controlling the early height growth of longleaf pine and slash pine, using a ((longleaf pine × slash pine) × slash pine) BC₁ population consisting of 83 progeny. Maximum likelihood estimation was found to be more powerful than regression and could...
Amplitude analysis of four-body decays using a massively-parallel fitting framework
NASA Astrophysics Data System (ADS)
Hasse, C.; Albrecht, J.; Alves, A. A., Jr.; d'Argent, P.; Evans, T. D.; Rademacker, J.; Sokoloff, M. D.
2017-10-01
The GooFit Framework is designed to perform maximum-likelihood fits for arbitrary functions on various parallel back ends, for example a GPU. We present an extension to GooFit which adds the functionality to perform time-dependent amplitude analyses of pseudoscalar mesons decaying into four pseudoscalar final states. Benchmarks of this functionality show a significant performance increase when utilizing a GPU compared to a CPU. Furthermore, this extension is employed to study the sensitivity to the D0–D̄0 mixing parameters x and y in a time-dependent amplitude analysis of the decay D0 → K+π-π+π-. Studying a sample of 50 000 events and setting the central values to the world average of x = (0.49 ± 0.15)% and y = (0.61 ± 0.08)%, the statistical sensitivities of x and y are determined to be σ(x) = 0.019% and σ(y) = 0.019%.
Pulsar Emission Geometry and Accelerating Field Strength
NASA Technical Reports Server (NTRS)
DeCesar, Megan E.; Harding, Alice K.; Miller, M. Coleman; Kalapotharakos, Constantinos; Parent, Damien
2012-01-01
The high-quality Fermi LAT observations of gamma-ray pulsars have opened a new window to understanding the generation mechanisms of high-energy emission from these systems. The high statistics allow for careful modeling of the light curve features as well as for phase-resolved spectral modeling. We modeled the LAT light curves of the Vela and CTA 1 pulsars with simulated high-energy light curves generated from geometrical representations of the outer gap and slot gap emission models, within the vacuum retarded dipole and force-free fields. A Markov Chain Monte Carlo maximum likelihood method was used to explore the phase space of the magnetic inclination angle, viewing angle, maximum emission radius, and gap width. We also used the measured spectral cutoff energies to estimate the dependence of the accelerating parallel electric field on radius, under the assumptions that the high-energy emission is dominated by curvature radiation and that the geometry (radius of emission and minimum radius of curvature of the magnetic field lines) is determined by the best-fitting light curves for each model. We find that light curves from the vacuum field more closely match the observed light curves and multiwavelength constraints, and that the calculated parallel electric field can place additional constraints on the emission geometry.
Maximum likelihood estimation of finite mixture model for economic data
NASA Astrophysics Data System (ADS)
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-06-01
A finite mixture model is a mixture model with finite dimension. These models provide a natural representation of heterogeneity in a finite number of latent classes, and are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has greatly drawn statisticians' attention, mainly because maximum likelihood estimation is a powerful statistical method which provides consistent findings as the sample size increases to infinity. Thus, maximum likelihood estimation is used in the present paper to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market price and rubber price for the sampled countries. The results show a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.
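A direct numerical alternative to the EM iteration (sketched earlier in this collection) is to hand the negative log-likelihood of the two-component normal mixture to a general-purpose optimizer; the reparameterization below keeps the weight in (0, 1) and the scales positive.

```python
# Direct numerical MLE of a two-component normal mixture via scipy.optimize.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def nll(params, x):
    w = 1 / (1 + np.exp(-params[0]))            # logistic keeps weight in (0, 1)
    mu1, mu2, ls1, ls2 = params[1:]             # log-scales keep sigmas positive
    pdf = w * norm.pdf(x, mu1, np.exp(ls1)) + (1 - w) * norm.pdf(x, mu2, np.exp(ls2))
    return -np.log(pdf).sum()

x = np.concatenate([np.random.normal(0, 1, 500), np.random.normal(4, 2, 500)])
res = minimize(nll, x0=[0.0, -1.0, 5.0, 0.0, 0.5], args=(x,),
               method="Nelder-Mead", options={"maxiter": 5000})
print(res.x)
```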
DOE Office of Scientific and Technical Information (OSTI.GOV)
IceCube Collaboration; Pierre Auger Collaboration; Telescope Array Collaboration
2016-01-01
This paper presents the results of different searches for correlations between very high-energy neutrino candidates detected by IceCube and the highest-energy cosmic rays measured by the Pierre Auger Observatory and the Telescope Array. We first consider samples of cascade neutrino events and of high-energy neutrino-induced muon tracks, which provided evidence for a neutrino flux of astrophysical origin, and study their cross-correlation with the ultrahigh-energy cosmic ray (UHECR) samples as a function of angular separation. We also study their possible directional correlations using a likelihood method stacking the neutrino arrival directions and adopting different assumptions on the size of the UHECR magnetic deflections. Finally, we perform another likelihood analysis stacking the UHECR directions and using a sample of through-going muon tracks optimized for neutrino point-source searches with sub-degree angular resolution. No indications of correlations at discovery level are obtained for any of the searches performed. The smallest of the p-values comes from the search for correlation between UHECRs and IceCube high-energy cascades, a result that should continue to be monitored.
Alfven wave transport effects in the time evolution of parallel cosmic-ray modified shocks
NASA Technical Reports Server (NTRS)
Jones, T. W.
1993-01-01
Some of the issues associated with a more complete treatment of Alfven transport in cosmic-ray shocks are explored qualitatively. The treatment is simplified in some important respects, but some new issues are examined, and for the first time a nonlinear, time-dependent study of plane cosmic-ray-mediated shocks is presented that includes both the entropy-producing effects of wave dissipation and the advection of the cosmic rays relative to the gas by Alfven waves. An examination of the direct consequences of including the pressure and energy of the Alfven waves in the formalism is also begun.
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1975-01-01
New results and insights concerning a previously published iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions were discussed. It was shown that the procedure converges locally to the consistent maximum likelihood estimate as long as a specified parameter is bounded between two limits. Bound values were given to yield optimal local convergence.
NASA Technical Reports Server (NTRS)
Hoffbeck, Joseph P.; Landgrebe, David A.
1994-01-01
Many analysis algorithms for high-dimensional remote sensing data require that the remotely sensed radiance spectra be transformed to approximate reflectance to allow comparison with a library of laboratory reflectance spectra. In maximum likelihood classification, however, the remotely sensed spectra are compared to training samples, thus a transformation to reflectance may or may not be helpful. The effect of several radiance-to-reflectance transformations on maximum likelihood classification accuracy is investigated in this paper. We show that the empirical line approach, LOWTRAN7, flat-field correction, single spectrum method, and internal average reflectance are all non-singular affine transformations, and that non-singular affine transformations have no effect on discriminant analysis feature extraction and maximum likelihood classification accuracy. (An affine transformation is a linear transformation with an optional offset.) Since the Atmosphere Removal Program (ATREM) and the log residue method are not affine transformations, experiments with Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data were conducted to determine the effect of these transformations on maximum likelihood classification accuracy. The average classification accuracy of the data transformed by ATREM and the log residue method was slightly less than the accuracy of the original radiance data. Since the radiance-to-reflectance transformations allow direct comparison of remotely sensed spectra with laboratory reflectance spectra, they can be quite useful in labeling the training samples required by maximum likelihood classification, but these transformations have only a slight effect or no effect at all on discriminant analysis and maximum likelihood classification accuracy.
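The affine-invariance claim is easy to verify numerically: the sketch below fits per-class Gaussian maximum likelihood models and checks that the labels are unchanged under a random non-singular affine transformation of the feature space.

```python
# Gaussian ML classification is invariant under non-singular affine
# transformations X -> X @ A.T + b: Mahalanobis distances are preserved and
# the log-determinant shifts by the same constant for every class.
import numpy as np

def gaussian_ml_classify(Xtr, ytr, Xte):
    labels = np.unique(ytr)
    scores = []
    for c in labels:
        Z = Xtr[ytr == c]
        mu, cov = Z.mean(0), np.cov(Z.T)
        diff = Xte - mu
        maha = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(cov), diff)
        scores.append(-0.5 * (maha + np.linalg.slogdet(cov)[1]))
    return labels[np.argmax(scores, axis=0)]

rng = np.random.default_rng(5)
Xtr = np.vstack([rng.normal(0, 1, (100, 4)), rng.normal(2, 1, (100, 4))])
ytr = np.repeat([0, 1], 100)
Xte = rng.normal(1, 1, (50, 4))
A, b = rng.normal(size=(4, 4)) + 4 * np.eye(4), rng.normal(size=4)  # non-singular
same = np.all(gaussian_ml_classify(Xtr, ytr, Xte) ==
              gaussian_ml_classify(Xtr @ A.T + b, ytr, Xte @ A.T + b))
print(same)   # True, up to floating-point effects
```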
Observation in the MINOS far detector of the shadowing of cosmic rays by the sun and moon
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jaffe, D.E.; Bishai, M.; Diwan, M.V.
2010-10-10
The shadowing of cosmic ray primaries by the moon and sun was observed by the MINOS far detector at a depth of 2070 mwe using 83.54 million cosmic ray muons accumulated over 1857.91 live-days. The shadow of the moon was detected at the 5.6σ level and the shadow of the sun at the 3.8σ level using a log-likelihood search in celestial coordinates. The moon shadow was used to quantify the absolute astrophysical pointing of the detector to be 0.17° ± 0.12°. Hints of interplanetary magnetic field effects were observed in both the sun and moon shadows.
Time variation of galactic cosmic rays
NASA Technical Reports Server (NTRS)
Evenson, Paul
1988-01-01
Time variations in the flux of galactic cosmic rays are the result of changing conditions in the solar wind. Maximum cosmic ray fluxes, which occur when solar activity is at a minimum, are well defined. Reductions from this maximum level are typically systematic and predictable but on occasion are rapid and unexpected. Models relating the flux level at lower energy to that at neutron monitor energy are typically accurate to 20 percent of the total excursion at that energy. Other models, relating flux to observables such as sunspot number, flare frequency, and current sheet tilt are phenomenological but nevertheless can be quite accurate.
Measurement of CIB power spectra with CAM-SPEC from Planck HFI maps
NASA Astrophysics Data System (ADS)
Mak, Suet Ying; Challinor, Anthony; Efstathiou, George; Lagache, Guilaine
2015-08-01
We present new measurements of the cosmic infrared background (CIB) anisotropies and the first CIB likelihood using Planck HFI data at 353, 545, and 857 GHz. The measurements are based on cross-frequency power spectra and a likelihood analysis using the CAM-SPEC package, rather than the map-based template removal of foregrounds used in the previous Planck CIB analysis. We construct the likelihood of the CIB temperature fluctuations, an extension of the CAM-SPEC likelihood used in CMB analysis to higher frequency, and use it to derive the best estimate of the CIB power spectrum over three decades in multipole moment, l, covering 50 ≤ l ≤ 2500. We adopt parametric models of the CIB and foreground contaminants (Galactic cirrus, infrared point sources, and cosmic microwave background anisotropies), and calibrate the data set uniformly across frequencies with known Planck beam and noise properties in the likelihood construction. We validate our likelihood through simulations and an extensive suite of consistency tests, and assess the impact of instrumental and data selection effects on the final CIB power spectrum constraints. Two approaches are developed for interpreting the CIB power spectrum. The first is based on a simple parametric model which describes the cross-frequency power using amplitudes, correlation coefficients, and a known multipole dependence. The second is based on physical models for galaxy clustering and the evolution of the infrared emission of galaxies. The new approaches fit all auto- and cross-power spectra very well, with a best fit of χ²_ν = 1.04 (parametric model). Using the best foreground solution, we find that the cleaned CIB power spectra are in good agreement with previous Planck and Herschel measurements.
SubspaceEM: A Fast Maximum-a-posteriori Algorithm for Cryo-EM Single Particle Reconstruction
Dvornek, Nicha C.; Sigworth, Fred J.; Tagare, Hemant D.
2015-01-01
Single particle reconstruction methods based on the maximum-likelihood principle and the expectation-maximization (E–M) algorithm are popular because of their ability to produce high resolution structures. However, these algorithms are computationally very expensive, requiring a network of computational servers. To overcome this computational bottleneck, we propose a new mathematical framework for accelerating maximum-likelihood reconstructions. The speedup is by orders of magnitude and the proposed algorithm produces similar quality reconstructions compared to the standard maximum-likelihood formulation. Our approach uses subspace approximations of the cryo-electron microscopy (cryo-EM) data and projection images, greatly reducing the number of image transformations and comparisons that are computed. Experiments using simulated and actual cryo-EM data show that speedup in overall execution time compared to traditional maximum-likelihood reconstruction reaches factors of over 300. PMID:25839831
Modulation and coding for satellite and space communications
NASA Technical Reports Server (NTRS)
Yuen, Joseph H.; Simon, Marvin K.; Pollara, Fabrizio; Divsalar, Dariush; Miller, Warner H.; Morakis, James C.; Ryan, Carl R.
1990-01-01
Several modulation and coding advances supported by NASA are summarized. To support long-constraint-length convolutional codes, a VLSI maximum-likelihood decoder utilizing parallel processing techniques is being developed to decode convolutional codes of constraint length 15 and a code rate as low as 1/6. A VLSI high-speed 8-bit Reed-Solomon decoder is being developed for advanced tracking and data relay satellite (ATDRS) applications. A 300-Mb/s modem with continuous phase modulation (CPM) and coding is also being developed for ATDRS. Trellis-coded modulation (TCM) techniques are discussed for satellite-based mobile communication applications.
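Maximum-likelihood decoding of a convolutional code is the Viterbi algorithm; the sketch below decodes a toy rate-1/2, constraint-length-3 code (generators 7 and 5 octal) with hard decisions. The flight decoders described above handle constraint length 15 and soft decisions, but the trellis search is the same idea.

```python
# Hard-decision Viterbi (maximum-likelihood) decoder for a toy rate-1/2,
# K = 3 convolutional code; a sketch only.
import numpy as np

G = [(1, 1, 1), (1, 0, 1)]   # generator taps (7, 5 octal)

def encode(bits):
    state, out = (0, 0), []
    for b in bits:
        reg = (b,) + state
        out += [sum(r * g for r, g in zip(reg, gen)) % 2 for gen in G]
        state = reg[:2]
    return out

def viterbi(received):
    metric = [0, np.inf, np.inf, np.inf]         # start in the all-zero state
    paths = [[], [], [], []]
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metric = [np.inf] * 4
        new_paths = [None] * 4
        for s in range(4):
            if metric[s] == np.inf:
                continue                         # unreachable state
            for b in (0, 1):
                reg = (b, s >> 1, s & 1)         # input bit plus stored state bits
                expect = [sum(x * g for x, g in zip(reg, gen)) % 2 for gen in G]
                ns = (b << 1) | (s >> 1)         # next state
                m = metric[s] + sum(e != rx for e, rx in zip(expect, r))
                if m < new_metric[ns]:           # keep the survivor path
                    new_metric[ns], new_paths[ns] = m, paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[int(np.argmin(metric))]

bits = [1, 0, 1, 1, 0, 0, 1, 0]
rx = encode(bits)
rx[3] ^= 1                                       # inject a single channel error
print(viterbi(rx) == bits)                       # True: ML decoding corrects it
```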
NASA Technical Reports Server (NTRS)
Scholz, D.; Fuhs, N.; Hixson, M.
1979-01-01
The overall objective of this study was to apply and evaluate several of the currently available classification schemes for crop identification. The approaches examined were: (1) a per point Gaussian maximum likelihood classifier, (2) a per point sum of normal densities classifier, (3) a per point linear classifier, (4) a per point Gaussian maximum likelihood decision tree classifier, and (5) a texture sensitive per field Gaussian maximum likelihood classifier. Three agricultural data sets were used in the study: areas from Fayette County, Illinois, and Pottawattamie and Shelby Counties in Iowa. The segments were located in two distinct regions of the Corn Belt to sample variability in soils, climate, and agricultural practices.
Del Monego, Maurici; Ribeiro, Paulo Justiniano; Ramos, Patrícia
2015-04-01
In this work, kriging with covariates is used to model and map the spatial distribution of salinity measurements gathered by an autonomous underwater vehicle in a sea outfall monitoring campaign, aiming to distinguish the effluent plume from the receiving waters and characterize its spatial variability in the vicinity of the discharge. Four different geostatistical linear models for salinity were assumed, where the distance to the diffuser, the west-east positioning, and the south-north positioning were used as covariates. Sample variograms were fitted by Matérn models using the weighted least squares and maximum likelihood estimation methods, as a way to detect possible discrepancies. Typically, the maximum likelihood method estimated very low ranges, which limited the kriging process. So, at least for these data sets, weighted least squares proved to be the most appropriate estimation method for variogram fitting. The kriged maps clearly show the spatial variation of salinity, and it is possible to identify the effluent plume in the area studied. The results obtained provide some guidelines for sewage monitoring when a geostatistical analysis of the data is intended. It is important to treat properly the existence of anomalous values and to adopt a sampling strategy that includes transects parallel and perpendicular to the effluent dispersion.
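A minimal version of the weighted-least-squares route bins an empirical semivariogram and fits an exponential model (the Matérn model with ν = 0.5), weighting each bin by its pair count; the data below are synthetic stand-ins for the salinity measurements.

```python
# Empirical semivariogram plus weighted-least-squares fit of an exponential
# (Matern nu = 0.5) model on synthetic spatial data.
import numpy as np
from scipy.optimize import curve_fit

def empirical_variogram(coords, z, n_bins=15):
    d = np.sqrt(((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1))
    g = 0.5 * (z[:, None] - z[None, :]) ** 2
    iu = np.triu_indices(len(z), k=1)
    d, g = d[iu], g[iu]
    edges = np.linspace(0, d.max(), n_bins + 1)
    idx = np.digitize(d, edges) - 1
    counts = np.array([(idx == i).sum() for i in range(n_bins)])
    keep = np.flatnonzero(counts > 0)                 # skip empty lag bins
    lags = np.array([d[idx == i].mean() for i in keep])
    gamma = np.array([g[idx == i].mean() for i in keep])
    return lags, gamma, counts[keep]

def exp_model(h, nugget, sill, rng_):
    return nugget + sill * (1 - np.exp(-h / rng_))

coords = np.random.rand(300, 2) * 100
z = np.sin(coords[:, 0] / 20) + 0.3 * np.random.randn(300)   # toy salinity field
lags, gamma, counts = empirical_variogram(coords, z)
p, _ = curve_fit(exp_model, lags, gamma, p0=[0.1, 1.0, 20.0],
                 sigma=1 / np.sqrt(counts))                  # weight by pair count
print(p)   # fitted nugget, partial sill, range
```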
The radial distribution of cosmic rays in the heliosphere at solar maximum
NASA Astrophysics Data System (ADS)
McDonald, F. B.; Fujii, Z.; Heikkila, B.; Lal, N.
2003-08-01
To obtain a more detailed profile of the radial distribution of galactic (GCRs) and anomalous (ACRs) cosmic rays, a unique time in the 11-year solar activity cycle has been selected: that of solar maximum. At this time of minimum cosmic ray intensity, a simple, straightforward normalization technique has been found that allows the cosmic ray data from IMP 8, Pioneer 10 (P-10) and Voyagers 1 and 2 (V1, V2) to be combined for the solar maxima of cycles 21, 22 and 23. This combined distribution reveals a functional form of the radial gradient that varies as G₀/r, with G₀ being constant and relatively small in the inner heliosphere. After a transition region between ∼10 and 20 AU, G₀ increases to a much larger value that remains constant between ∼25 and 82 AU. This implies that at solar maximum the changes that produce the 11-year modulation cycle are mainly occurring in the outer heliosphere between ∼15 AU and the termination shock. These observations are not inconsistent with the concept that Global Merged Interaction Regions (GMIRs) are the principal agent of modulation between solar minimum and solar maximum. There does not appear to be a significant change in the amount of heliosheath modulation occurring between the 1997 solar minimum and the cycle 23 solar maximum.
On the maximum energy of shock-accelerated cosmic rays at ultra-relativistic shocks
NASA Astrophysics Data System (ADS)
Reville, B.; Bell, A. R.
2014-04-01
The maximum energy to which cosmic rays can be accelerated at weakly magnetised ultra-relativistic shocks is investigated. We demonstrate that for such shocks, in which the scattering of energetic particles is mediated exclusively by ion skin-depth scale structures, as might be expected for a Weibel-mediated shock, there is an intrinsic limit on the maximum energy to which particles can be accelerated. This maximum energy is determined from the requirement that particles must be isotropized in the downstream plasma frame before the mean field transports them far downstream, and falls considerably short of what is required to produce ultra-high-energy cosmic rays. To circumvent this limit, a highly disorganized field is required on larger scales. The growth of cosmic ray-induced instabilities on wavelengths much longer than the ion-plasma skin depth, both upstream and downstream of the shock, is considered. While these instabilities may play an important role in magnetic field amplification at relativistic shocks, on scales comparable to the gyroradius of the most energetic particles, the calculated growth rates have insufficient time to modify the scattering. Since strong modification is a necessary condition for particles in the downstream region to re-cross the shock, in the absence of an alternative scattering mechanism, these results imply that acceleration to higher energies is ruled out. If weakly magnetized ultra-relativistic shocks are disfavoured as high-energy particle accelerators in general, the search for potential sources of ultra-high-energy cosmic rays can be narrowed.
Maximum-Likelihood Detection Of Noncoherent CPM
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors are proposed for use in maximum-likelihood-sequence detection of symbols in an alphabet of size M transmitted by uncoded, full-response continuous phase modulation over a radio channel with additive white Gaussian noise. The structures of the receivers are derived from a particular interpretation of the maximum-likelihood metrics. The receivers include front ends, the structures of which depend only on M, analogous to those in receivers of coherent CPM. The parts of the receivers following the front ends have structures whose complexity depends on N.
Cramer-Rao Bound, MUSIC, and Maximum Likelihood. Effects of Temporal Phase Difference
1990-11-01
Technical Report 1373, November 1990: Cramer-Rao Bound, MUSIC, and Maximum Likelihood: Effects of Temporal Phase Difference, C. V. Tran. The report compares the Cramer-Rao bound, MUSIC, and Maximum Likelihood (ML) asymptotic variances for two-source direction-of-arrival estimation, where the sources were modeled as ... Figures include MUSIC for two equipowered signals impinging on a 5-element ULA with |ρ| = 1.00 and |ρ| = 0.50 at SNR = 20 dB.
The cosmic-ray shock structure problem for relativistic shocks
NASA Technical Reports Server (NTRS)
Webb, G. M.
1985-01-01
The time asymptotic behaviour of a relativistic (parallel) shock wave significantly modified by the diffusive acceleration of cosmic-rays is investigated by means of relativistic hydrodynamical equations for both the cosmic-rays and thermal gas. The form of the shock structure equation and the dispersion relation for both long and short wavelength waves in the system are obtained. The dependence of the shock acceleration efficiency on the upstream fluid speed, the long wavelength Mach number, and the ratio N = P_c0/(P_c0 + P_g0) (where P_c0 and P_g0 are the upstream cosmic-ray and thermal gas pressures, respectively) is studied.
Stochastic control system parameter identifiability
NASA Technical Reports Server (NTRS)
Lee, C. H.; Herget, C. J.
1975-01-01
The parameter identification problem of general discrete time, nonlinear, multiple input/multiple output dynamic systems with Gaussian white distributed measurement errors is considered. The system parameterization was assumed to be known. Concepts of local parameter identifiability and local constrained maximum likelihood parameter identifiability were established. A set of sufficient conditions for the existence of a region of parameter identifiability was derived. A computation procedure employing interval arithmetic was provided for finding the regions of parameter identifiability. If the vector of the true parameters is locally constrained maximum likelihood (CML) identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the constrained maximum likelihood estimation sequence will converge to the vector of true parameters.
Relative likelihood for life as a function of cosmic time
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loeb, Abraham; Batista, Rafael A.; Sloan, David, E-mail: aloeb@cfa.harvard.edu, E-mail: rafael.alvesbatista@physics.ox.ac.uk, E-mail: david.sloan@physics.ox.ac.uk
2016-08-01
Is life most likely to emerge at the present cosmic time near a star like the Sun? We address this question by calculating the relative formation probability per unit time of habitable Earth-like planets within a fixed comoving volume of the Universe, dP(t)/dt, starting from the first stars and continuing to the distant cosmic future. We conservatively restrict our attention to the context of "life as we know it" and the standard cosmological model, ΛCDM. We find that unless habitability around low mass stars is suppressed, life is most likely to exist near ∼0.1 M_⊙ stars ten trillion years from now. Spectroscopic searches for biosignatures in the atmospheres of transiting Earth-mass planets around low mass stars will determine whether present-day life is indeed premature or typical from a cosmic perspective.
A general methodology for maximum likelihood inference from band-recovery data
Conroy, M.J.; Williams, B.K.
1984-01-01
A numerical procedure is described for obtaining maximum likelihood estimates and associated maximum likelihood inference from band-recovery data. The method is used to illustrate previously developed one-age-class band-recovery models, and is extended to new models, including the analysis with a covariate for survival rates and variable-time-period recovery models. Extensions to R-age-class band-recovery, mark-recapture models, and twice-yearly marking are discussed. A FORTRAN program provides computations for these models.
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1978-01-01
This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
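To make the procedure concrete, here is a minimal runnable sketch for a two-component univariate Gaussian mixture: the successive-approximations update based on the likelihood equations (the step-size-1 procedure) is wrapped in the relaxed iteration θ ← θ + ω(T(θ) − θ) with ω in (0, 2). The data, starting values, and ω = 1.5 are illustrative, not taken from the paper.
```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic sample from a two-component normal mixture (illustrative values).
x = np.concatenate([rng.normal(-2.0, 1.0, 600), rng.normal(1.5, 1.0, 400)])

def fixed_point_update(x, w, mu1, mu2, s1, s2):
    """One successive-approximations step based on the likelihood equations;
    iterating this map with step-size 1 is the procedure known in the
    literature (one EM-style pass)."""
    p1 = w * np.exp(-0.5 * ((x - mu1) / s1) ** 2) / s1
    p2 = (1.0 - w) * np.exp(-0.5 * ((x - mu2) / s2) ** 2) / s2
    r = p1 / (p1 + p2)                       # posterior component memberships
    w_new = r.mean()
    mu1_new = np.sum(r * x) / np.sum(r)
    mu2_new = np.sum((1 - r) * x) / np.sum(1 - r)
    s1_new = np.sqrt(np.sum(r * (x - mu1_new) ** 2) / np.sum(r))
    s2_new = np.sqrt(np.sum((1 - r) * (x - mu2_new) ** 2) / np.sum(1 - r))
    return np.array([w_new, mu1_new, mu2_new, s1_new, s2_new])

omega = 1.5                                   # step-size in (0, 2); 1 recovers the base procedure
theta = np.array([0.5, -1.0, 1.0, 1.0, 1.0])  # crude starting point
for _ in range(200):
    theta = theta + omega * (fixed_point_update(x, *theta) - theta)
print("w, mu1, mu2, s1, s2 =", np.round(theta, 3))
```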
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1976-01-01
The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.
ERIC Educational Resources Information Center
Wothke, Werner; Burket, George; Chen, Li-Sue; Gao, Furong; Shu, Lianghua; Chia, Mike
2011-01-01
It has been known for some time that item response theory (IRT) models may exhibit a likelihood function of a respondent's ability which may have multiple modes, flat modes, or both. These conditions, often associated with guessing of multiple-choice (MC) questions, can introduce uncertainty and bias to ability estimation by maximum likelihood…
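The multimodality is easy to exhibit numerically. The sketch below evaluates the ability log-likelihood of a single aberrant response pattern under hypothetical three-parameter logistic (3PL) items with nonzero guessing, then reports the interior local maxima on a grid; all item parameters are invented for the illustration.
```python
import numpy as np

# Hypothetical 3PL items: discrimination a, difficulty b, guessing c.
a = np.array([1.8, 1.6, 2.0, 1.7])
b = np.array([-1.0, 0.0, 1.0, 2.0])
c = np.array([0.25, 0.25, 0.25, 0.25])
u = np.array([0, 1, 0, 1])   # aberrant MC pattern: missed easy items, got hard ones

theta = np.linspace(-4, 4, 2001)
P = c[:, None] + (1 - c[:, None]) / (1 + np.exp(-a[:, None] * (theta - b[:, None])))
loglik = np.sum(u[:, None] * np.log(P) + (1 - u)[:, None] * np.log(1 - P), axis=0)

# Interior local maxima of the likelihood; with guessing there may be several,
# or a nearly flat mode, either of which destabilizes the ML ability estimate.
interior = (loglik[1:-1] > loglik[:-2]) & (loglik[1:-1] > loglik[2:])
print("local maxima at theta =", np.round(theta[1:-1][interior], 2))
```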
Solution of Heliospheric Propagation: Unveiling the Local Interstellar Spectra of Cosmic-ray Species
NASA Astrophysics Data System (ADS)
Boschini, M. J.; Della Torre, S.; Gervasi, M.; Grandi, D.; Jóhannesson, G.; Kachelriess, M.; La Vacca, G.; Masi, N.; Moskalenko, I. V.; Orlando, E.; Ostapchenko, S. S.; Pensotti, S.; Porter, T. A.; Quadrani, L.; Rancoita, P. G.; Rozza, D.; Tacconi, M.
2017-05-01
Local interstellar spectra (LIS) for protons, helium, and antiprotons are built using the most recent experimental results combined with state-of-the-art models for propagation in the Galaxy and heliosphere. Two propagation packages, GALPROP and HelMod, are combined to provide a single framework that is run to reproduce direct measurements of cosmic-ray (CR) species at different modulation levels and at both polarities of the solar magnetic field. To do so in a self-consistent way, an iterative procedure was developed, where the GALPROP LIS output is fed into HelMod, providing modulated spectra for specific time periods of selected experiments to compare with the data; the HelMod parameter optimization is performed at this stage and looped back to adjust the LIS using the new GALPROP run. The parameters were tuned with the maximum likelihood procedure using an extensive data set of proton spectra from 1997 to 2015. The proposed LIS accommodate both the low-energy interstellar CR spectra measured by Voyager 1 and the high-energy observations made by BESS, PAMELA, AMS-01, and AMS-02 from balloons and near-Earth payloads; they also account for Ulysses counting rate features measured out of the ecliptic plane. The solution found is in good agreement with proton, helium, and antiproton data from AMS-02, BESS, and PAMELA over the whole energy range.
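The shape of the iteration can be sketched in a few lines. Everything below is a toy stand-in: a one-parameter power law plays the role of the GALPROP LIS and a crude force-field-like factor plays the role of HelMod, so only the loop structure, not the physics or the real package interfaces, is being illustrated.
```python
import numpy as np

E = np.logspace(-1, 2, 60)                              # energy grid in GeV (toy)
galprop_run = lambda gamma: E ** (-gamma)               # stand-in for a GALPROP LIS
helmod_modulate = lambda lis, phi: lis * E / (E + phi)  # stand-in for HelMod
data = helmod_modulate(galprop_run(2.7), 0.5)           # synthetic "measurement"

def misfit(model):
    return np.sum((np.log(model) - np.log(data)) ** 2)

gamma, phi = 2.0, 0.1                                   # deliberately poor start
for _ in range(20):
    lis = galprop_run(gamma)                            # LIS from Galactic propagation
    phi = min(np.linspace(0.05, 1.0, 96),               # HelMod stage: fit modulation
              key=lambda p: misfit(helmod_modulate(lis, p)))
    gamma = min(np.linspace(2.0, 3.5, 151),             # loop back: adjust the LIS
                key=lambda g: misfit(helmod_modulate(galprop_run(g), phi)))
print(f"recovered gamma = {gamma:.2f}, phi = {phi:.2f} (truth: 2.70, 0.50)")
```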
VizieR Online Data Catalog: Local interstellar spectra of cosmic-ray species (Boschini+, 2017)
NASA Astrophysics Data System (ADS)
Boschini, M. J.; Torre, S. D.; Gervasi, M.; Grandi, D.; Johannesson, G.; Kachelriess, M.; La Vacca, G.; Masi, N.; Moskalenko, I. V.; Orlando, E.; Ostapchenko, S. S.; Pensotti, S.; Porter, T. A.; Quadrani, L.; Rancoita, P. G.; Rozza, D.; Tacconi, M.
2017-11-01
Local interstellar spectra (LIS) for protons, helium, and antiprotons are built using the most recent experimental results combined with state-of-the-art models for propagation in the Galaxy and heliosphere. Two propagation packages, GALPROP and HelMod, are combined to provide a single framework that is run to reproduce direct measurements of cosmic-ray (CR) species at different modulation levels and at both polarities of the solar magnetic field. To do so in a self-consistent way, an iterative procedure was developed, where the GALPROP LIS output is fed into HelMod, providing modulated spectra for specific time periods of selected experiments to compare with the data; the HelMod parameter optimization is performed at this stage and looped back to adjust the LIS using the new GALPROP run. The parameters were tuned with the maximum likelihood procedure using an extensive data set of proton spectra from 1997 to 2015. The proposed LIS accommodate both the low-energy interstellar CR spectra measured by Voyager 1 and the high-energy observations made by BESS, PAMELA, AMS-01, and AMS-02 from balloons and near-Earth payloads; they also account for Ulysses counting rate features measured out of the ecliptic plane. The solution found is in good agreement with proton, helium, and antiproton data from AMS-02, BESS, and PAMELA over the whole energy range. (3 data files).
Solution of Heliospheric Propagation: Unveiling the Local Interstellar Spectra of Cosmic-ray Species
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boschini, M. J.; Torre, S. Della; Gervasi, M.
2017-05-10
Local interstellar spectra (LIS) for protons, helium, and antiprotons are built using the most recent experimental results combined with state-of-the-art models for propagation in the Galaxy and heliosphere. Two propagation packages, GALPROP and HelMod, are combined to provide a single framework that is run to reproduce direct measurements of cosmic-ray (CR) species at different modulation levels and at both polarities of the solar magnetic field. To do so in a self-consistent way, an iterative procedure was developed, where the GALPROP LIS output is fed into HelMod, providing modulated spectra for specific time periods of selected experiments to compare with the data; the HelMod parameter optimization is performed at this stage and looped back to adjust the LIS using the new GALPROP run. The parameters were tuned with the maximum likelihood procedure using an extensive data set of proton spectra from 1997 to 2015. The proposed LIS accommodate both the low-energy interstellar CR spectra measured by Voyager 1 and the high-energy observations made by BESS, PAMELA, AMS-01, and AMS-02 from balloons and near-Earth payloads; they also account for Ulysses counting rate features measured out of the ecliptic plane. The solution found is in good agreement with proton, helium, and antiproton data from AMS-02, BESS, and PAMELA over the whole energy range.
Closed timelike curves produced by pairs of moving cosmic strings - Exact solutions
NASA Technical Reports Server (NTRS)
Gott, J. Richard, III
1991-01-01
Exact solutions of Einstein's field equations are presented for the general case of two moving straight cosmic strings that do not intersect. The solutions for parallel cosmic strings moving in opposite directions show closed timelike curves (CTCs) that circle the two strings as they pass, allowing observers to visit their own past. Similar results occur for nonparallel strings, and for masses in (2+1)-dimensional spacetime. For finite string loops the possibility that black-hole formation may prevent the formation of CTCs is discussed.
Krukowski, Karen; Feng, Xi; Paladini, Maria Serena; Chou, Austin; Sacramento, Kristen; Grue, Katherine; Riparip, Lara-Kirstie; Jones, Tamako; Campbell-Beachler, Mary; Nelson, Gregory; Rosi, Susanna
2018-05-18
Microglia are the main immune component in the brain that can regulate neuronal health and synapse function. Exposure to cosmic radiation can cause long-term cognitive impairments in rodent models, thereby presenting potential obstacles for astronauts engaged in deep space travel. The mechanisms by which cosmic radiation induces cognitive deficits are currently unknown. We find that temporary microglia depletion, one week after cosmic radiation, prevents the development of long-term memory deficits. Gene array profiling reveals that acute microglia depletion alters the late neuroinflammatory response to cosmic radiation. The repopulated microglia present a modified functional phenotype with reduced expression of scavenger receptors, lysosome membrane protein and complement receptor, all shown to be involved in microglia-synapse interactions. The lower phagocytic activity observed in the repopulated microglia is paralleled by improved synaptic protein expression. Our data provide mechanistic evidence for the role of microglia in the development of cognitive deficits after cosmic radiation exposure.
Extreme-Scale Bayesian Inference for Uncertainty Quantification of Complex Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biros, George
Uncertainty quantification (UQ)—that is, quantifying uncertainties in complex mathematical models and their large-scale computational implementations—is widely viewed as one of the outstanding challenges facing the field of CS&E over the coming decade. The EUREKA project set out to address the most difficult class of UQ problems: those for which both the underlying PDE model as well as the uncertain parameters are of extreme scale. In the project we worked on these extreme-scale challenges in the following four areas: 1. Scalable parallel algorithms for sampling and characterizing the posterior distribution that exploit the structure of the underlying PDEs and parameter-to-observable map. These include structure-exploiting versions of the randomized maximum likelihood method, which aims to overcome the intractability of employing conventional MCMC methods for solving extreme-scale Bayesian inversion problems by appealing to and adapting ideas from large-scale PDE-constrained optimization, which have been very successful at exploring high-dimensional spaces. 2. Scalable parallel algorithms for construction of prior and likelihood functions based on learning methods and non-parametric density estimation. Constructing problem-specific priors remains a critical challenge in Bayesian inference, and more so in high dimensions. Another challenge is construction of likelihood functions that capture unmodeled couplings between observations and parameters. We will create parallel algorithms for non-parametric density estimation using high dimensional N-body methods and combine them with supervised learning techniques for the construction of priors and likelihood functions. 3. Bayesian inadequacy models, which augment physics models with stochastic models that represent their imperfections. The success of the Bayesian inference framework depends on the ability to represent the uncertainty due to imperfections of the mathematical model of the phenomena of interest. This is a central challenge in UQ, especially for large-scale models. We propose to develop the mathematical tools to address these challenges in the context of extreme-scale problems. 4. Parallel scalable algorithms for Bayesian optimal experimental design (OED). Bayesian inversion yields quantified uncertainties in the model parameters, which can be propagated forward through the model to yield uncertainty in outputs of interest. This opens the way for designing new experiments to reduce the uncertainties in the model parameters and model predictions. Such experimental design problems have been intractable for large-scale problems using conventional methods; we will create OED algorithms that exploit the structure of the PDE model and the parameter-to-output map to overcome these challenges. Parallel algorithms for these four problems were created, analyzed, prototyped, implemented, tuned, and scaled up for leading-edge supercomputers, including UT-Austin's own 10 petaflops Stampede system, ANL's Mira system, and ORNL's Titan system. While our focus is on fundamental mathematical/computational methods and algorithms, we will assess our methods on model problems derived from several DOE mission applications, including multiscale mechanics and ice sheet dynamics.
ERIC Educational Resources Information Center
Jones, Douglas H.
The progress of modern mental test theory depends very much on the techniques of maximum likelihood estimation, and many popular applications make use of likelihoods induced by logistic item response models. While, in reality, item responses are nonreplicate within a single examinee and the logistic models are only ideal, practitioners make…
Bias Correction for the Maximum Likelihood Estimate of Ability. Research Report. ETS RR-05-15
ERIC Educational Resources Information Center
Zhang, Jinming
2005-01-01
Lord's bias function and the weighted likelihood estimation method are effective in reducing the bias of the maximum likelihood estimate of an examinee's ability under the assumption that the true item parameters are known. This paper presents simulation studies to determine the effectiveness of these two methods in reducing the bias when the item…
Flight data processing with the F-8 adaptive algorithm
NASA Technical Reports Server (NTRS)
Hartmann, G.; Stein, G.; Petersen, K.
1977-01-01
An explicit adaptive control algorithm based on maximum likelihood estimation of parameters has been designed for NASA's DFBW F-8 aircraft. To avoid iterative calculations, the algorithm uses parallel channels of Kalman filters operating at fixed locations in parameter space. This algorithm has been implemented in NASA/DFRC's Remotely Augmented Vehicle (RAV) facility. Real-time sensor outputs (rate gyro, accelerometer, and surface position) are telemetered to a ground computer which sends new gain values to an on-board system. Ground test data and flight records were used to establish design values of noise statistics and to verify the ground-based adaptive software. The software and its performance evaluation based on flight data are described.
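The parallel-channel construction is the classic multiple-model estimator: one Kalman filter per fixed point in parameter space, with each channel weighted by the Gaussian likelihood of its innovations. Below is a self-contained sketch for a scalar system; the dynamics, noise levels, and candidate grid are illustrative and have nothing to do with the actual F-8 design values.
```python
import numpy as np

rng = np.random.default_rng(1)
a_true, q, r = 0.9, 0.04, 0.25            # true dynamics and noise variances (toy)
a_grid = np.array([0.5, 0.7, 0.9, 1.0])   # fixed filter locations in parameter space

# Simulate x_{k+1} = a x_k + w_k,  y_k = x_k + v_k.
T, x, ys = 400, 0.0, []
for _ in range(T):
    x = a_true * x + rng.normal(0.0, np.sqrt(q))
    ys.append(x + rng.normal(0.0, np.sqrt(r)))

# Run one Kalman filter per candidate 'a' in parallel (vectorized here) and
# accumulate the innovation log-likelihood of each channel.
xhat = np.zeros_like(a_grid)
P = np.ones_like(a_grid)
loglik = np.zeros_like(a_grid)
for y in ys:
    xpred, Ppred = a_grid * xhat, a_grid**2 * P + q
    S = Ppred + r                          # innovation variance
    nu = y - xpred                         # innovation
    loglik += -0.5 * (np.log(2 * np.pi * S) + nu**2 / S)
    K = Ppred / S
    xhat, P = xpred + K * nu, (1 - K) * Ppred

w = np.exp(loglik - loglik.max())
print("channel weights:", np.round(w / w.sum(), 3))   # concentrates near a = 0.9
```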
Kang, Hae Ji; Bennett, Shannon N.; Dizney, Laurie; Sumibcay, Laarni; Arai, Satoru; Ruedas, Luis A.; Song, Jin-Won; Yanagihara, Richard
2009-01-01
A genetically distinct hantavirus, designated Oxbow virus (OXBV), was detected in tissues of an American shrew mole (Neurotrichus gibbsii), captured in Gresham, Oregon, in September 2003. Pairwise analysis of full-length S- and M- and partial L-segment nucleotide and amino acid sequences of OXBV indicated low sequence similarity with rodent-borne hantaviruses. Phylogenetic analyses using maximum-likelihood and Bayesian methods, and host-parasite evolutionary comparisons, showed that OXBV and Asama virus, a hantavirus recently identified from the Japanese shrew mole (Urotrichus talpoides), were related to soricine shrew-borne hantaviruses from North America and Eurasia, respectively, suggesting parallel evolution associated with cross-species transmission. PMID:19394994
Estimating parameter of Rayleigh distribution by using Maximum Likelihood method and Bayes method
NASA Astrophysics Data System (ADS)
Ardianti, Fitri; Sutarman
2018-01-01
In this paper, we use maximum likelihood estimation and the Bayes method under several risk functions to estimate the parameter of the Rayleigh distribution, in order to determine which method is best. The prior used in the Bayes method is Jeffreys' non-informative prior. Maximum likelihood estimation and the Bayes method under the precautionary loss function, the entropy loss function, and the L1 loss function are compared. We compare these methods by bias and MSE values using an R program. The results are then displayed in tables to facilitate the comparisons.
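As a flavor of such a comparison (sketched in Python rather than R), the snippet below contrasts the maximum likelihood estimate of the squared Rayleigh scale with the posterior mean under Jeffreys' prior, which is the Bayes estimate under squared-error loss; the paper's precautionary, entropy, and L1 losses would replace that last choice. The sample size, true parameter, and replication count are illustrative, and the inverse-gamma posterior used here is the standard conjugate result for this prior.
```python
import numpy as np

rng = np.random.default_rng(2)
sigma2_true, n, reps = 4.0, 20, 20000

mle = np.empty(reps)
bayes = np.empty(reps)
for i in range(reps):
    x = rng.rayleigh(np.sqrt(sigma2_true), n)
    T = 0.5 * np.sum(x ** 2)      # sufficient statistic
    mle[i] = T / n                # maximum likelihood estimate of sigma^2
    bayes[i] = T / (n - 1)        # posterior mean, Jeffreys prior (inverse gamma)

for name, est in [("MLE", mle), ("Bayes (Jeffreys, squared error)", bayes)]:
    print(f"{name}: bias = {est.mean() - sigma2_true:+.4f}, "
          f"MSE = {np.mean((est - sigma2_true) ** 2):.4f}")
```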
Closed-loop carrier phase synchronization techniques motivated by likelihood functions
NASA Technical Reports Server (NTRS)
Tsou, H.; Hinedi, S.; Simon, M.
1994-01-01
This article reexamines the notion of closed-loop carrier phase synchronization motivated by the theory of maximum a posteriori phase estimation with emphasis on the development of new structures based on both maximum-likelihood and average-likelihood functions. The criterion of performance used for comparison of all the closed-loop structures discussed is the mean-squared phase error for a fixed-loop bandwidth.
Moreno-Letelier, Alejandra; Olmedo, Gabriela; Eguiarte, Luis E.; Martinez-Castilla, Leon; Souza, Valeria
2011-01-01
The high affinity phosphate transport system (pst) is crucial for phosphate uptake in oligotrophic environments. Cuatro Cienegas Basin (CCB) has extremely low P levels and its endemic Bacillus are closely related to oligotrophic marine Firmicutes. Thus, we expected the pst operon of CCB to share the same evolutionary history and protein similarity with marine Firmicutes. Orthologs of the pst operon were searched in 55 genomes of Firmicutes and 13 outgroups. Phylogenetic reconstructions were performed for the pst operon and 14 concatenated housekeeping genes using maximum likelihood methods. Conserved domains and 3D structures of the phosphate-binding protein (PstS) were also analyzed. The pst operon of Firmicutes shows two highly divergent clades with no correlation to habitat type and no phylogenetic congruence, suggesting horizontal gene transfer. Despite sequence divergence, the PstS protein had a similar 3D structure, which could be due to parallel evolution after horizontal gene transfer events. PMID:21461370
Advanced complex trait analysis.
Gray, A; Stewart, I; Tenesa, A
2012-12-01
The Genome-wide Complex Trait Analysis (GCTA) software package can quantify the contribution of genetic variation to phenotypic variation for complex traits. However, as those datasets of interest continue to increase in size, GCTA becomes increasingly computationally prohibitive. We present an adapted version, Advanced Complex Trait Analysis (ACTA), demonstrating dramatically improved performance. We restructure the genetic relationship matrix (GRM) estimation phase of the code and introduce the highly optimized parallel Basic Linear Algebra Subprograms (BLAS) library combined with manual parallelization and optimization. We introduce the Linear Algebra PACKage (LAPACK) library into the restricted maximum likelihood (REML) analysis stage. For a test case with 8999 individuals and 279,435 single nucleotide polymorphisms (SNPs), we reduce the total runtime, using a compute node with two multi-core Intel Nehalem CPUs, from ∼17 h to ∼11 min. The source code is fully available under the GNU Public License, along with Linux binaries. For more information see http://www.epcc.ed.ac.uk/software-products/acta. a.gray@ed.ac.uk Supplementary data are available at Bioinformatics online.
Fast maximum likelihood estimation of mutation rates using a birth-death process.
Wu, Xiaowei; Zhu, Hongxiao
2015-02-07
Since fluctuation analysis was first introduced by Luria and Delbrück in 1943, it has been widely used to make inference about spontaneous mutation rates in cultured cells. Under certain model assumptions, the probability distribution of the number of mutants that appear in a fluctuation experiment can be derived explicitly, which provides the basis of mutation rate estimation. It has been shown that, among various existing estimators, the maximum likelihood estimator usually demonstrates some desirable properties such as consistency and lower mean squared error. However, its application to real experimental data is often hindered by slow computation of the likelihood due to the recursive form of the mutant-count distribution. We propose a fast maximum likelihood estimator of mutation rates, MLE-BD, based on a birth-death process model with a non-differential growth assumption. Simulation studies demonstrate that, compared with the conventional maximum likelihood estimator derived from the Luria-Delbrück distribution, MLE-BD achieves a substantial improvement in computational speed and is applicable to arbitrarily large numbers of mutants. In addition, it retains good accuracy in point estimation. Published by Elsevier Ltd.
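For context, the conventional estimator that MLE-BD is benchmarked against looks roughly like the sketch below: the mutant-count probabilities are built with the Ma-Sandri-Sarkar recursion (quoted from memory here, so treat the exact form as an assumption) and the log-likelihood is maximized over a grid of expected mutation numbers m. The O(n_max²) recursion is precisely the cost that grows with the largest observed count and motivates the birth-death alternative. The counts are invented for illustration.
```python
import numpy as np

def ld_pmf(m, n_max):
    """Luria-Delbruck mutant-count probabilities p_0..p_{n_max}, assuming the
    Ma-Sandri-Sarkar recursion: p_0 = exp(-m),
    p_n = (m/n) * sum_{k=0}^{n-1} p_k / ((n-k) * (n-k+1))."""
    p = np.zeros(n_max + 1)
    p[0] = np.exp(-m)
    for n in range(1, n_max + 1):
        k = np.arange(n)
        p[n] = (m / n) * np.sum(p[k] / ((n - k) * (n - k + 1.0)))
    return p

counts = np.array([0, 1, 1, 2, 4, 5, 7, 12, 21, 37])   # illustrative mutant counts
grid = np.linspace(0.1, 10.0, 200)
loglik = [np.sum(np.log(ld_pmf(m, counts.max())[counts])) for m in grid]
print("grid MLE of m:", round(float(grid[np.argmax(loglik)]), 2))
```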
Low-complexity approximations to maximum likelihood MPSK modulation classification
NASA Technical Reports Server (NTRS)
Hamkins, Jon
2004-01-01
We present a new approximation to the maximum likelihood classifier to discriminate between M-ary and M'-ary phase-shift-keying transmitted on an additive white Gaussian noise (AWGN) channel and received noncoherently, partially coherently, or coherently.
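For the coherent branch, the exact classifier that is being approximated averages the AWGN likelihood of each received sample over the candidate constellation and sums the logarithms. A minimal sketch deciding between BPSK (M = 2) and QPSK (M = 4) at an illustrative SNR:
```python
import numpy as np

rng = np.random.default_rng(3)

def mpsk_loglik(r, M, A=1.0, N0=0.5):
    """Coherent ML metric: average the Gaussian likelihood of each sample over
    the M equiprobable PSK points, then sum the logs (constants cancel)."""
    points = A * np.exp(2j * np.pi * np.arange(M) / M)
    d2 = np.abs(r[:, None] - points[None, :]) ** 2
    return float(np.sum(np.log(np.mean(np.exp(-d2 / N0), axis=1))))

# Transmit 500 QPSK symbols over AWGN and classify between M = 2 and M = 4.
N0 = 0.5
sym = np.exp(2j * np.pi * rng.integers(0, 4, 500) / 4)
noise = rng.normal(0, np.sqrt(N0 / 2), 500) + 1j * rng.normal(0, np.sqrt(N0 / 2), 500)
r = sym + noise
scores = {M: mpsk_loglik(r, M, N0=N0) for M in (2, 4)}
print("decision: M =", max(scores, key=scores.get), "| metrics:", scores)
```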
Noise correlations in cosmic microwave background experiments
NASA Technical Reports Server (NTRS)
Dodelson, Scott; Kosowsky, Arthur; Myers, Steven T.
1995-01-01
Many analyses of microwave background experiments neglect the correlation of noise in different frequency or polarization channels. We show that these correlations, should they be present, can lead to severe misinterpretation of an experiment. In particular, correlated noise arising from either electronics or atmosphere may mimic a cosmic signal. We quantify how the likelihood function for a given experiment varies with noise correlation, using both simple analytic models and actual data. For a typical microwave background anisotropy experiment, noise correlations at the level of 1% of the overall noise can seriously reduce the significance of a given detection.
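The mimicry is easy to reproduce in a toy model: if the common sky signal is estimated from the cross-covariance of two channels, a correlated noise component leaks straight into that estimate. The sketch below injects zero cosmic signal and a 1% noise correlation (all levels illustrative):
```python
import numpy as np

rng = np.random.default_rng(4)
n, noise2, rho = 200_000, 1.0, 0.01      # samples, noise variance, 1% correlation

common = rng.normal(size=n)              # shared electronics/atmosphere component
na = np.sqrt(noise2) * (np.sqrt(1 - rho) * rng.normal(size=n) + np.sqrt(rho) * common)
nb = np.sqrt(noise2) * (np.sqrt(1 - rho) * rng.normal(size=n) + np.sqrt(rho) * common)
ya, yb = na, nb                          # no cosmic signal present at all

# The cross-covariance between channels is a standard signal-power estimator;
# with correlated noise it converges to rho * noise2 instead of zero.
print("apparent signal power:", round(float(np.cov(ya, yb)[0, 1]), 4))
```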
NASA Astrophysics Data System (ADS)
Zhao, W.; Baskaran, D.; Grishchuk, L. P.
2009-10-01
The relic gravitational waves are the cleanest probe of the violent times in the very early history of the Universe. They are expected to leave signatures in the observed cosmic microwave background anisotropies. We significantly improved our previous analysis [W. Zhao, D. Baskaran, and L. P. Grishchuk, Phys. Rev. D 79, 023002 (2009)] of the 5-year WMAP TT and TE data at lower multipoles ℓ. This more general analysis returned essentially the same maximum likelihood result (unfortunately, surrounded by large remaining uncertainties): the relic gravitational waves are present and they are responsible for approximately 20% of the temperature quadrupole. We identify and discuss the reasons by which the contribution of gravitational waves can be overlooked in a data analysis. One of the reasons is a misleading reliance on data from very high multipoles ℓ, and another is a too narrow understanding of the problem as the search for B modes of polarization, rather than the detection of relic gravitational waves with the help of all correlation functions. Our analysis of WMAP5 data has led to the identification of a whole family of models characterized by relatively high values of the likelihood function. Using the Fisher matrix formalism we formulated forecasts for the Planck mission in the context of this family of models. We explore in detail various “optimistic,” “pessimistic,” and “dream case” scenarios. We show that in some circumstances the B-mode detection may be very inconclusive, at the level of signal-to-noise ratio S/N = 1.75, whereas a smarter data analysis can reveal the same gravitational wave signal at S/N = 6.48. The final result is encouraging. Even under unfavorable conditions in terms of instrumental noises and foregrounds, the relic gravitational waves, if they are characterized by the maximum likelihood parameters that we found from WMAP5 data, will be detected by Planck at the level S/N = 3.65.
NASA Technical Reports Server (NTRS)
Goldstein, M. L.
1977-01-01
In a study of cosmic ray propagation in interstellar and interplanetary space, a perturbed orbit resonant scattering theory for pitch angle diffusion in a slab model of magnetostatic turbulence is slightly generalized and used to compute the diffusion coefficient for spatial propagation parallel to the mean magnetic field. This diffusion coefficient has been useful for describing the solar modulation of the galactic cosmic rays, and for explaining the diffusive phase in solar flares in which the initial anisotropy of the particle distribution decays to isotropy.
Maximum likelihood decoding analysis of accumulate-repeat-accumulate codes
NASA Technical Reports Server (NTRS)
Abbasfar, A.; Divsalar, D.; Yao, K.
2004-01-01
In this paper, the performance of repeat-accumulate codes with maximum-likelihood (ML) decoding is analyzed and compared to random codes by means of very tight bounds. Some simple codes are shown to perform very close to the Shannon limit with maximum likelihood decoding.
NASA Technical Reports Server (NTRS)
Thadani, S. G.
1977-01-01
The Maximum Likelihood Estimation of Signature Transformation (MLEST) algorithm is used to obtain maximum likelihood estimates (MLE) of affine transformation. The algorithm has been evaluated for three sets of data: simulated (training and recognition segment pairs), consecutive-day (data gathered from Landsat images), and geographical-extension (large-area crop inventory experiment) data sets. For each set, MLEST signature extension runs were made to determine MLE values and the affine-transformed training segment signatures were used to classify the recognition segments. The classification results were used to estimate wheat proportions at 0 and 1% threshold values.
Maximum-likelihood block detection of noncoherent continuous phase modulation
NASA Technical Reports Server (NTRS)
Simon, Marvin K.; Divsalar, Dariush
1993-01-01
This paper examines maximum-likelihood block detection of uncoded full response CPM over an additive white Gaussian noise (AWGN) channel. Both the maximum-likelihood metrics and the bit error probability performances of the associated detection algorithms are considered. The special and popular case of minimum-shift-keying (MSK) corresponding to h = 0.5 and constant amplitude frequency pulse is treated separately. The many new receiver structures that result from this investigation can be compared to the traditional ones that have been used in the past both from the standpoint of simplicity of implementation and optimality of performance.
Design of simplified maximum-likelihood receivers for multiuser CPM systems.
Bing, Li; Bai, Baoming
2014-01-01
A class of simplified maximum-likelihood receivers designed for continuous phase modulation based multiuser systems is proposed. The presented receiver is built upon a front end employing mismatched filters and a maximum-likelihood detector defined in a low-dimensional signal space. The performance of the proposed receivers is analyzed and compared to some existing receivers. Some schemes are designed to implement the proposed receivers and to reveal the roles of different system parameters. Analysis and numerical results show that the proposed receivers can approach the optimum multiuser receivers with significantly (even exponentially in some cases) reduced complexity and marginal performance degradation.
Maximum likelihood clustering with dependent feature trees
NASA Technical Reports Server (NTRS)
Chittineni, C. B. (Principal Investigator)
1981-01-01
The decomposition of the mixture density of the data into its normal component densities is considered. The densities are approximated with first order dependent feature trees using criteria of mutual information and distance measures. Expressions are presented for the criteria when the densities are Gaussian. By defining different types of nodes in a general dependent feature tree, maximum likelihood equations are developed for the estimation of parameters using fixed point iterations. The field structure of the data is also taken into account in developing maximum likelihood equations. Experimental results from the processing of remotely sensed multispectral scanner imagery data are included.
Cosmic-ray streaming and anisotropies
NASA Technical Reports Server (NTRS)
Forman, M. A.; Gleeson, L. J.
1975-01-01
The paper is concerned with the differential current densities and anisotropies that exist in the interplanetary cosmic-ray gas, and in particular with a correct formulation and simple interpretation of the momentum equation that describes these on a local basis. Two examples of the use of this equation in the interpretation of previous data are given. It is demonstrated that in interplanetary space, the electric-field drifts and convective flow parallel to the magnetic field of cosmic-ray particles combine as a simple convective flow with the solar wind, and that there exist diffusive currents and transverse gradient drift currents. Thus direct reference to the interplanetary electric-field drifts is eliminated, and the study of steady-state and transient cosmic-ray anisotropies is both more systematic and simpler.
Klein-Gordon oscillator with position-dependent mass in the rotating cosmic string spacetime
NASA Astrophysics Data System (ADS)
Wang, Bing-Qian; Long, Zheng-Wen; Long, Chao-Yun; Wu, Shu-Rui
2018-02-01
A spinless particle coupled covariantly to a uniform magnetic field parallel to the string in the background of the rotating cosmic string is studied. The energy levels of the electrically charged particle subject to the Klein-Gordon oscillator are analyzed. Afterwards, we consider the case of position-dependent mass and show how these energy levels depend on the parameters of the problem. Remarkably, it is shown that for the special case, the Klein-Gordon oscillator coupled covariantly to a homogeneous magnetic field with position-dependent mass in the rotating cosmic string background behaves similarly to the Klein-Gordon equation with a Coulomb-type configuration in a rotating cosmic string background in the presence of an external magnetic field.
Gaussianization for fast and accurate inference from cosmological data
NASA Astrophysics Data System (ADS)
Schuhmann, Robert L.; Joachimi, Benjamin; Peiris, Hiranya V.
2016-06-01
We present a method to transform multivariate unimodal non-Gaussian posterior probability densities into approximately Gaussian ones via non-linear mappings, such as Box-Cox transformations and generalizations thereof. This permits an analytical reconstruction of the posterior from a point sample, like a Markov chain, and simplifies the subsequent joint analysis with other experiments. This way, a multivariate posterior density can be reported efficiently, by compressing the information contained in Markov Chain Monte Carlo samples. Further, the model evidence integral (i.e. the marginal likelihood) can be computed analytically. This method is analogous to the search for normal parameters in the cosmic microwave background, but is more general. The search for the optimally Gaussianizing transformation is performed computationally through a maximum-likelihood formalism; its quality can be judged by how well the credible regions of the posterior are reproduced. We demonstrate that our method outperforms kernel density estimates in this objective. Further, we select marginal posterior samples from Planck data with several distinct strongly non-Gaussian features, and verify the reproduction of the marginal contours. To demonstrate evidence computation, we Gaussianize the joint distribution of data from weak lensing and baryon acoustic oscillations, for different cosmological models, and find a preference for flat Λ cold dark matter. Comparing to values computed with the Savage-Dickey density ratio, and Population Monte Carlo, we find good agreement of our method within the spread of the other two.
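The one-dimensional core of the transformation search is available off the shelf: scipy's boxcox selects the transformation parameter by maximizing a Gaussian log-likelihood, the same formalism described above (the paper generalizes this to multivariate mappings). A minimal sketch on a synthetic skewed sample:
```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# A skewed, unimodal stand-in for a marginal posterior sample (illustrative).
sample = rng.lognormal(mean=0.0, sigma=0.6, size=5000)

transformed, lmbda = stats.boxcox(sample)   # lambda chosen by maximum likelihood
print("ML Box-Cox lambda:", round(float(lmbda), 3))
print("skewness before:", round(float(stats.skew(sample)), 3),
      "| after:", round(float(stats.skew(transformed)), 3))
```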
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freon, A.; Berry, J.; Coste, J.-P.
1959-02-01
Some recordings of the variations of intensity of cosmic neutrons, made since October 1956 at the observatory of the Pic du Midi and since July 1957 on the Kerguelen Islands, have shown the existence, since the beginning of the observations and during at least 20 solar rotations, of a cyclic variation with a stable period equal to 27.35 plus or minus 0.1 solar days and a maximum amplitude of 2.2% attained in October 1957. (tr-auth)
ERIC Educational Resources Information Center
Magis, David; Raiche, Gilles
2010-01-01
In this article the authors focus on the issue of the nonuniqueness of the maximum likelihood (ML) estimator of proficiency level in item response theory (with special attention to logistic models). The usual maximum a posteriori (MAP) method offers a good alternative within that framework; however, this article highlights some drawbacks of its…
The skewed weak lensing likelihood: why biases arise, despite data and theory being sound
NASA Astrophysics Data System (ADS)
Sellentin, Elena; Heymans, Catherine; Harnois-Déraps, Joachim
2018-07-01
We derive the essentials of the skewed weak lensing likelihood via a simple hierarchical forward model. Our likelihood passes four objective and cosmology-independent tests which a standard Gaussian likelihood fails. We demonstrate that sound weak lensing data are naturally biased low, since they are drawn from a skewed distribution. This occurs already in the framework of Lambda cold dark matter. Mathematically, the biases arise because noisy two-point functions follow skewed distributions. This form of bias is already known from cosmic microwave background analyses, where the low multipoles have asymmetric error bars. Weak lensing is more strongly affected by this asymmetry as galaxies form a discrete set of shear tracer particles, in contrast to a smooth shear field. We demonstrate that the biases can be up to 30 per cent of the standard deviation per data point, dependent on the properties of the weak lensing survey and the employed filter function. Our likelihood provides a versatile framework with which to address this bias in future weak lensing analyses.
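The mechanism is elementary to check numerically: a two-point estimate built from ν independent Gaussian modes is distributed as C_true · χ²_ν/ν, a right-skewed law whose median lies below its mean, so the typical sound measurement comes out low even though the estimator is unbiased. A minimal demonstration with an illustrative ν:
```python
import numpy as np

rng = np.random.default_rng(6)
nu, C_true = 5, 1.0          # few modes per band power: the low-multipole regime

# Each measured two-point function is an average of nu squared Gaussian modes.
C_hat = C_true * rng.chisquare(nu, size=200_000) / nu
print("mean:", round(float(C_hat.mean()), 3))          # unbiased: ~1.00
print("median:", round(float(np.median(C_hat)), 3))    # typical draw is low: ~0.87
print("P(C_hat < C_true):", round(float((C_hat < C_true).mean()), 3))  # > 0.5
```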
Transient cosmic ray increase associated with a geomagnetic storm
NASA Technical Reports Server (NTRS)
Kudo, S.; Wada, M.; Tanskanen, P.; Kodama, M.
1985-01-01
On the basis of worldwide network data of cosmic ray nucleonic components, the transient cosmic ray increase due to the depression of the cosmic ray cutoff rigidity during a severe geomagnetic storm was investigated in terms of its longitudinal dependence. Multiple correlation analysis among the isotropic and diurnal terms of the cosmic ray intensity variations and the Dst term of the geomagnetic field is applied to the data of each of the various stations. It is shown that the amplitude of the transient cosmic ray increase associated with Dst depends on the local time of the station, and that its maximum phase is found in the evening sector. This fact is consistent with the theoretical estimation based on the azimuthally asymmetric ring current model for the magnetic DS field.
Parameterized spectral distributions for meson production in proton-proton collisions
NASA Technical Reports Server (NTRS)
Schneider, John P.; Norbury, John W.; Cucinotta, Francis A.
1995-01-01
Accurate semiempirical parameterizations of the energy-differential cross sections for charged pion and kaon production from proton-proton collisions are presented at energies relevant to cosmic rays. The parameterizations, which depend on both the outgoing meson parallel momentum and the incident proton kinetic energy, are able to be reduced to very simple analytical formulas suitable for cosmic ray transport through spacecraft walls, interstellar space, the atmosphere, and meteorites.
A Parallel, High-Fidelity Radar Model
2010-09-01
The report's noise model includes the temperature due to the cosmic microwave background, T_CMB. Power per unit area per unit frequency in the microwave regime is usually given the name brightness temperature; various sources contribute to the brightness temperature, including external sources outside of the Earth's atmosphere (e.g., cosmic or galactic noise).
Constraints on the Galactic Halo Dark Matter from Fermi-LAT Diffuse Measurements
NASA Technical Reports Server (NTRS)
Ackermann, M.; Ajello, M.; Atwood, W. B.; Baldini, L.; Barbiellini, G.; Bastieri, D.; Bechtol, K.; Bellazzini, R.; Blandford, R. D.; Bloom, E. D.;
2012-01-01
We have performed an analysis of the diffuse gamma-ray emission with the Fermi Large Area Telescope (LAT) in the Milky Way halo region, searching for a signal from dark matter annihilation or decay. In the absence of a robust dark matter signal, constraints are presented. We consider both gamma rays produced directly in the dark matter annihilation/decay and produced by inverse Compton scattering of the e+/e- produced in the annihilation/decay. Conservative limits are derived requiring that the dark matter signal does not exceed the observed diffuse gamma-ray emission. A second set of more stringent limits is derived based on modeling the foreground astrophysical diffuse emission using the GALPROP code. Uncertainties in the height of the diffusive cosmic-ray halo, the distribution of the cosmic-ray sources in the Galaxy, the index of the injection cosmic-ray electron spectrum, and the column density of the interstellar gas are taken into account using a profile likelihood formalism, while the parameters governing the cosmic-ray propagation have been derived from fits to local cosmic-ray data. The resulting limits impact the range of particle masses over which dark matter thermal production in the early universe is possible, and challenge the interpretation of the PAMELA/Fermi-LAT cosmic ray anomalies as the annihilation of dark matter.
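A stripped-down version of the profile likelihood formalism fits in a few lines: at each fixed dark-matter signal strength, the likelihood is maximized over the nuisance parameters (here a single foreground amplitude), and the upper limit is read off where the profiled log-likelihood has risen by the appropriate amount. Everything below (templates, counts, the single nuisance parameter) is a toy illustration, not the Fermi-LAT analysis chain.
```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gammaln

rng = np.random.default_rng(7)
b = np.array([50., 40., 30., 20., 10.])   # astrophysical foreground template (toy)
s = np.array([1., 2., 4., 2., 1.])        # putative dark-matter template (toy)
data = rng.poisson(b)                     # observed counts; no signal injected

def nll(mu_sig, alpha):                   # Poisson negative log-likelihood
    m = mu_sig * s + alpha * b
    return -np.sum(data * np.log(m) - m - gammaln(data + 1.0))

def profile_nll(mu_sig):
    """Minimize over the nuisance amplitude alpha at fixed signal strength."""
    return minimize_scalar(lambda a: nll(mu_sig, a),
                           bounds=(0.1, 10.0), method="bounded").fun

# 95% CL upper limit: profiled 2*Delta(lnL) crosses 2.71 (one-sided).
base, mu = profile_nll(0.0), 0.0
while 2.0 * (profile_nll(mu) - base) < 2.71:
    mu += 0.05
print("95% CL upper limit on signal strength:", round(mu, 2))
```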
2010-01-01
Background Likelihood-based phylogenetic inference is generally considered to be the most reliable classification method for unknown sequences. However, traditional likelihood-based phylogenetic methods cannot be applied to large volumes of short reads from next-generation sequencing due to computational complexity issues and lack of phylogenetic signal. "Phylogenetic placement," where a reference tree is fixed and the unknown query sequences are placed onto the tree via a reference alignment, is a way to bring the inferential power offered by likelihood-based approaches to large data sets. Results This paper introduces pplacer, a software package for phylogenetic placement and subsequent visualization. The algorithm can place twenty thousand short reads on a reference tree of one thousand taxa per hour per processor, has essentially linear time and memory complexity in the number of reference taxa, and is easy to run in parallel. Pplacer features calculation of the posterior probability of a placement on an edge, which is a statistically rigorous way of quantifying uncertainty on an edge-by-edge basis. It also can inform the user of the positional uncertainty for query sequences by calculating expected distance between placement locations, which is crucial in the estimation of uncertainty with a well-sampled reference tree. The software provides visualizations using branch thickness and color to represent number of placements and their uncertainty. A simulation study using reads generated from 631 COG alignments shows a high level of accuracy for phylogenetic placement over a wide range of alignment diversity, and the power of edge uncertainty estimates to measure placement confidence. Conclusions Pplacer enables efficient phylogenetic placement and subsequent visualization, making likelihood-based phylogenetics methodology practical for large collections of reads; it is freely available as source code, binaries, and a web service. PMID:21034504
Some Small Sample Results for Maximum Likelihood Estimation in Multidimensional Scaling.
ERIC Educational Resources Information Center
Ramsay, J. O.
1980-01-01
Some aspects of the small sample behavior of maximum likelihood estimates in multidimensional scaling are investigated with Monte Carlo techniques. In particular, the chi square test for dimensionality is examined and a correction for bias is proposed and evaluated. (Author/JKS)
ATAC Autocuer Modeling Analysis.
1981-01-01
The analysis of the simple rectangular segmentation (1) is based on detection and estimation theory (2). This approach uses the concept of maximum likelihood for continuous waveforms. In order to develop the principles of maximum likelihood, it is convenient to develop the principles for the classical case first. The concept of maximum likelihood is significant in that it provides the optimum performance of the detection/estimation problem.
Campos-Filho, N; Franco, E L
1989-02-01
A frequent procedure in matched case-control studies is to report results from the multivariate unmatched analyses if they do not differ substantially from the ones obtained after conditioning on the matching variables. Although conceptually simple, this rule requires that an extensive series of logistic regression models be evaluated by both the conditional and unconditional maximum likelihood methods. Most computer programs for logistic regression employ only one maximum likelihood method, which requires that the analyses be performed in separate steps. This paper describes a Pascal microcomputer (IBM PC) program that performs multiple logistic regression by both maximum likelihood estimation methods, which obviates the need for switching between programs to obtain relative risk estimates from both matched and unmatched analyses. The program calculates most standard statistics and allows factoring of categorical or continuous variables by two distinct methods of contrast. A built-in, descriptive statistics option allows the user to inspect the distribution of cases and controls across categories of any given variable.
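The two likelihoods being switched between are easy to state. For 1:1 matched pairs the conditional likelihood collapses to a logistic model on the within-pair covariate differences with no intercept, while the unconditional method is ordinary logistic regression on the pooled data. A minimal sketch with synthetic data (one covariate, illustrative effect size); in this setup, with no real confounding by the matching variables, the two estimates should roughly agree, which is the situation in which the reporting rule described above applies:
```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)
n = 300
x_case = rng.normal(0.5, 1.0, n)     # exposure of cases (illustrative shift)
x_ctrl = rng.normal(0.0, 1.0, n)     # exposure of their matched controls

# Conditional ML: pair contribution 1 / (1 + exp(-beta * (x_case - x_ctrl))).
d = x_case - x_ctrl
cond_nll = lambda b: float(np.sum(np.log1p(np.exp(-b[0] * d))))
beta_cond = minimize(cond_nll, x0=[0.0]).x[0]

# Unconditional ML: ordinary logistic regression with an intercept.
X = np.concatenate([x_case, x_ctrl])
y = np.concatenate([np.ones(n), np.zeros(n)])
def uncond_nll(p):
    eta = p[0] + p[1] * X
    return float(np.sum(np.log1p(np.exp(eta)) - y * eta))
beta_unc = minimize(uncond_nll, x0=[0.0, 0.0]).x[1]

print(f"conditional beta = {beta_cond:.3f}, unconditional beta = {beta_unc:.3f}")
```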
The Maximum Likelihood Solution for Inclination-only Data
NASA Astrophysics Data System (ADS)
Arason, P.; Levi, S.
2006-12-01
The arithmetic means of inclination-only data are known to introduce a shallowing bias. Several methods have been proposed to estimate unbiased means of the inclination along with measures of the precision. Most of the inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all these methods require various assumptions and approximations that are inappropriate for many data sets. For some steep and dispersed data sets, the estimates provided by these methods are significantly displaced from the peak of the likelihood function to systematically shallower inclinations. The problem in locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest. This is because some elements of the log-likelihood function increase exponentially as precision parameters increase, leading to numerical instabilities. In this study we succeeded in analytically cancelling exponential elements from the likelihood function, and we are now able to calculate its value for any location in the parameter space and for any inclination-only data set, with full accuracy. Furthermore, we can now calculate the partial derivatives of the likelihood function with the desired accuracy. Locating the maximum likelihood without the assumptions required by previous methods is now straightforward. The information to separate the mean inclination from the precision parameter will be lost for very steep and dispersed data sets. It is worth noting that the likelihood function always has a maximum value. However, for some dispersed and steep data sets with few samples, the likelihood function takes its highest value on the boundary of the parameter space, i.e. at inclinations of +/- 90 degrees, but with relatively well defined dispersion. Our simulations indicate that this occurs quite frequently for certain data sets, and relatively small perturbations in the data will drive the maxima to the boundary. We interpret this to indicate that, for such data sets, the information needed to separate the mean inclination and the precision parameter is permanently lost. To assess the reliability and accuracy of our method we generated a large number of random Fisher-distributed data sets and used seven methods to estimate the mean inclination and precision parameter. These comparisons are described by Levi and Arason at the 2006 AGU Fall meeting. The results of the various methods are very favourable to our new robust maximum likelihood method, which, on average, is the most reliable, and its mean inclination estimates are the least biased toward shallow values. Further information on our inclination-only analysis can be obtained from: http://www.vedur.is/~arason/paleomag
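The cancellation itself can be sketched compactly. Assuming the usual marginal Fisher density for inclination-only data, p(I | I0, κ) = (κ / (2 sinh κ)) · cos I · exp(κ sin I sin I0) · I_0(κ cos I cos I0), with I_0 the modified Bessel function of order zero, the exponentially scaled routine ive lets every large exponential cancel analytically, so the log-likelihood can be evaluated for any κ without overflow. Data and grids below are illustrative:
```python
import numpy as np
from scipy.special import ive

def log_marginal_fisher(inc, inc0, kappa):
    """Stable log-density, assuming the marginal Fisher form
    p(I) = (kappa / (2 sinh kappa)) * cos(I) * exp(kappa sin I sin I0)
           * I_0(kappa cos I cos I0).
    Since ive(0, x) = I_0(x) * exp(-|x|) and
    log(2 sinh kappa) = kappa + log(1 - exp(-2 kappa)), the net exponent
    kappa * (sin I sin I0 + |cos I cos I0| - 1) is always <= 0."""
    a = kappa * np.sin(inc) * np.sin(inc0)
    b = kappa * np.cos(inc) * np.cos(inc0)
    log_norm = np.log(kappa) - kappa - np.log1p(-np.exp(-2.0 * kappa))
    return log_norm + np.log(np.cos(inc)) + (a + np.abs(b) - kappa) + np.log(ive(0, b))

inc_data = np.radians([62.0, 68.0, 75.0, 71.0, 66.0])   # illustrative inclinations
inc0s = np.radians(np.linspace(40.0, 89.0, 200))
kappas = np.logspace(0.0, 3.0, 200)                      # evaluable even at kappa ~ 1e3
ll = [[float(log_marginal_fisher(inc_data, i0, k).sum()) for k in kappas]
      for i0 in inc0s]
i, j = np.unravel_index(np.argmax(ll), (200, 200))
print(f"ML inclination = {np.degrees(inc0s[i]):.1f} deg, kappa = {kappas[j]:.1f}")
```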
Precision Parameter Estimation and Machine Learning
NASA Astrophysics Data System (ADS)
Wandelt, Benjamin D.
2008-12-01
I discuss the strategy of "Acceleration by Parallel Precomputation and Learning" (APPLe) that can vastly accelerate parameter estimation in high-dimensional parameter spaces and costly likelihood functions, using trivially parallel computing to speed up sequential exploration of parameter space. This strategy combines the power of distributed computing with machine learning and Markov-Chain Monte Carlo techniques to efficiently explore a likelihood function, posterior distribution or χ2-surface. This strategy is particularly successful in cases where computing the likelihood is costly and the number of parameters is moderate or large. We apply this technique to two central problems in cosmology: the solution of the cosmological parameter estimation problem with sufficient accuracy for the Planck data using PICo; and the detailed calculation of cosmological helium and hydrogen recombination with RICO. Since the APPLe approach is designed to be able to use massively parallel resources to speed up problems that are inherently serial, we can bring the power of distributed computing to bear on parameter estimation problems. We have demonstrated this with the CosmologyatHome project.
Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood
NASA Astrophysics Data System (ADS)
Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim
2017-04-01
Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable of forecasting a full probability distribution. In order to estimate the corresponding regression coefficients, CRPS minimization has been performed in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield more calibrated forecasts. Theoretically, both scoring rules used as optimization criteria should be able to locate the same unknown optimum. Discrepancies might result from a wrong distributional assumption about the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients. The log-likelihood estimator is slightly more efficient. A real-world case study for surface temperature forecasts at different sites in Europe confirms these results but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
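Both objectives are a few lines of code in the Gaussian case, where the CRPS has the closed form σ[z(2Φ(z) − 1) + 2φ(z) − 1/√π] with z = (y − μ)/σ. The sketch below fits a toy non-homogeneous Gaussian regression both ways (synthetic data, illustrative coefficients, an exponential link keeping σ positive); with an appropriate distributional assumption the two sets of coefficients land close together, mirroring the study's synthetic experiment:
```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(9)
n = 2000
x = rng.uniform(0.0, 1.0, n)                     # a single predictor (toy ensemble stat)
y = rng.normal(1.0 + 2.0 * x, 0.5 + 1.0 * x)     # observation: mean and spread vary

def mu_sigma(p):
    return p[0] + p[1] * x, np.exp(p[2] + p[3] * x)   # log-link keeps sigma > 0

def nll(p):                                       # maximum likelihood objective
    mu, sig = mu_sigma(p)
    return -np.sum(norm.logpdf(y, mu, sig))

def crps_sum(p):                                  # closed-form Gaussian CRPS objective
    mu, sig = mu_sigma(p)
    z = (y - mu) / sig
    return np.sum(sig * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi)))

p0 = np.zeros(4)
print("ML   coefficients:", np.round(minimize(nll, p0).x, 3))
print("CRPS coefficients:", np.round(minimize(crps_sum, p0).x, 3))
```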
Connecting blazars with ultrahigh-energy cosmic rays and astrophysical neutrinos
NASA Astrophysics Data System (ADS)
Resconi, E.; Coenders, S.; Padovani, P.; Giommi, P.; Caccianiga, L.
2017-06-01
We present a strong hint of a connection between high-energy γ-ray emitting blazars, very high energy neutrinos, and ultrahigh-energy cosmic rays. We first identify potential hadronic sources by filtering γ-ray emitters in spatial coincidence with the high-energy neutrinos detected by IceCube. The neutrino-filtered γ-ray emitters are then correlated with the ultrahigh-energy cosmic rays from the Pierre Auger Observatory and the Telescope Array by scanning in γ-ray flux (Fγ) and angular separation (θ) between sources and cosmic rays. A maximal excess of 80 cosmic rays (42.5 expected) is found at θ ≤ 10° from the neutrino-filtered γ-ray emitters selected from the second hard Fermi-LAT catalogue (2FHL) and for Fγ(>50 GeV) ≥ 1.8 × 10^-11 ph cm^-2 s^-1. The probability for this to happen is 2.4 × 10^-5, which translates to ∼2.4 × 10^-3 after compensation for all the considered trials. No excess of cosmic rays is instead observed for the complement sample of γ-ray emitters (i.e. not in spatial connection with IceCube neutrinos). A likelihood ratio test comparing the connection between the neutrino-filtered and the complement source samples with the cosmic rays favours a connection between neutrino-filtered emitters and cosmic rays with a probability of ∼1.8 × 10^-3 (2.9σ) after compensation for all the considered trials. The neutrino-filtered γ-ray sources that make up the cosmic-ray excess are blazars of the high synchrotron peak type. More statistics is needed to further investigate these sources as candidate cosmic-ray and neutrino emitters.
Algorithms of maximum likelihood data clustering with applications
NASA Astrophysics Data System (ADS)
Giada, Lorenzo; Marsili, Matteo
2002-12-01
We address the problem of data clustering by introducing an unsupervised, parameter-free approach based on maximum likelihood principle. Starting from the observation that data sets belonging to the same cluster share a common information, we construct an expression for the likelihood of any possible cluster structure. The likelihood in turn depends only on the Pearson's coefficient of the data. We discuss clustering algorithms that provide a fast and reliable approximation to maximum likelihood configurations. Compared to standard clustering methods, our approach has the advantages that (i) it is parameter free, (ii) the number of clusters need not be fixed in advance and (iii) the interpretation of the results is transparent. In order to test our approach and compare it with standard clustering algorithms, we analyze two very different data sets: time series of financial market returns and gene expression data. We find that different maximization algorithms produce similar cluster structures whereas the outcome of standard algorithms has a much wider variability.
NASA Technical Reports Server (NTRS)
Mccallister, R. D.; Crawford, J. J.
1981-01-01
It is pointed out that the NASA 30/20 GHz program will place in geosynchronous orbit a technically advanced communication satellite which can process time-division multiple access (TDMA) information bursts with a data throughput in excess of 4 GBPS. To guarantee acceptable data quality during periods of signal attenuation it will be necessary to provide a significant forward error correction (FEC) capability. Convolutional decoding (utilizing maximum-likelihood techniques) was identified as the most attractive FEC strategy. Design trade-offs regarding a maximum-likelihood convolutional decoder (MCD) in a single-chip CMOS implementation are discussed.
PAMLX: a graphical user interface for PAML.
Xu, Bo; Yang, Ziheng
2013-12-01
This note announces pamlX, a graphical user interface/front end for the paml (for Phylogenetic Analysis by Maximum Likelihood) program package (Yang Z. 1997. PAML: a program package for phylogenetic analysis by maximum likelihood. Comput Appl Biosci. 13:555-556; Yang Z. 2007. PAML 4: Phylogenetic analysis by maximum likelihood. Mol Biol Evol. 24:1586-1591). pamlX is written in C++ using the Qt library and communicates with paml programs through files. It can be used to create, edit, and print control files for paml programs and to launch paml runs. The interface is available for free download at http://abacus.gene.ucl.ac.uk/software/paml.html.
Phylogenetic analyses of mode of larval development.
Hart, M
2000-12-01
Phylogenies based on morphological or molecular characters have been used to provide an evolutionary context for analysis of larval evolution. Studies of gastropods, bivalves, tunicates, sea stars, sea urchins, and polychaetes have revealed massive parallel evolution of similar larval forms. Some of these studies were designed to test, and have rejected, the species selection hypothesis for evolutionary trends in the frequency of derived larvae or life history traits. However, the lack of well supported models of larval character evolution leave some doubt about the quality of inferences of larval evolution from phylogenies of living taxa. Better models based on maximum likelihood methods and known prior probabilities of larval character state changes will improve our understanding of the history of larval evolution. Copyright 2000 Academic Press.
Constraining high-energy neutrino emission from choked jets in stripped-envelope supernovae
NASA Astrophysics Data System (ADS)
Senno, Nicholas; Murase, Kohta; Mészáros, Peter
2018-01-01
There are indications that γ-ray dark objects such as supernovae (SNe) with choked jets, and the cores of active galactic nuclei, may contribute to the diffuse flux of astrophysical neutrinos measured by the IceCube observatory. In particular, stripped-envelope SNe have received much attention since they are capable of producing relativistic jets and could explain the diversity in observations of collapsar explosions (e.g., gamma-ray bursts (GRBs), low-luminosity GRBs, and Type Ibc SNe). We use an unbinned maximum likelihood method to search for spatial and temporal coincidences between Type Ibc core-collapse SNe, which may harbor a choked jet, and muon neutrinos from a sample of IceCube up-going track-like events measured from May 2011–May 2012. In this stacking analysis, we find no significant deviation from a background-only hypothesis using one year of data, and are able to place upper limits on the total amount of isotropic equivalent energy that choked-jet core-collapse SNe deposit in cosmic rays, ℰ_cr, and on the fraction of core-collapse SNe which have a jet pointed towards Earth, f_jet. This analysis can be extended with IceCube data yet to be made public, and with the increased number of optically detected core-collapse SNe discovered by wide field-of-view surveys such as the Palomar Transient Factory and the All-Sky Automated Survey for Supernovae. The choked-jet SNe/high-energy cosmic neutrino connection can be more tightly constrained in the near future.
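The unbinned stacking likelihood used in this kind of search typically takes the form L(n_s) = Π_i [(n_s/N) S_i + (1 − n_s/N) B_i], where S_i and B_i are the signal and background densities evaluated at event i and n_s is the number of signal events among N total; the test statistic compares the maximum to the background-only hypothesis. A toy sketch with invented density values:
```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(10)
# Toy per-event density ratios: S_i is the signal pdf at event i's position/time
# (coincidence with a cataloged SN), B_i the background pdf. All values invented.
N = 1000
S = np.concatenate([rng.exponential(0.2, 990),    # background-like events
                    rng.exponential(5.0, 10)])    # a few signal-like events
B = np.ones(N)

def neg_logL(ns):
    return -np.sum(np.log(ns / N * S + (1.0 - ns / N) * B))

fit = minimize_scalar(neg_logL, bounds=(0.0, 100.0), method="bounded")
TS = 2.0 * (neg_logL(0.0) - fit.fun)     # test statistic vs. background only
print(f"best-fit n_s = {fit.x:.1f}, TS = {TS:.2f}")
```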
NASA Astrophysics Data System (ADS)
Boschini, M. J.; Della Torre, S.; Gervasi, M.; Grandi, D.; Jóhannesson, G.; La Vacca, G.; Masi, N.; Moskalenko, I. V.; Pensotti, S.; Porter, T. A.; Quadrani, L.; Rancoita, P. G.; Rozza, D.; Tacconi, M.
2018-02-01
The local interstellar spectrum (LIS) of cosmic-ray (CR) electrons for the energy range 1 MeV to 1 TeV is derived using the most recent experimental results combined with state-of-the-art models for CR propagation in the Galaxy and in the heliosphere. Two propagation packages, GALPROP and HELMOD, are combined to provide a single framework that is run to reproduce direct measurements of CR species at different modulation levels, and at both polarities of the solar magnetic field. An iterative maximum-likelihood method is developed that uses the GALPROP-predicted LIS as input to HELMOD, which provides the modulated spectra for specific time periods of the selected experiments for model-data comparison. The optimized HELMOD parameters are then used to adjust the GALPROP parameters to predict a refined LIS, with the procedure repeated subject to a convergence criterion. The parameter optimization uses an extensive data set of proton spectra from 1997 to 2015. The proposed CR electron LIS accommodates both the low-energy interstellar spectra measured by Voyager 1 and the high-energy observations by PAMELA and AMS-02 that are made deep in the heliosphere; it also accounts for the Ulysses counting-rate features measured out of the ecliptic plane. The interstellar and heliospheric propagation parameters derived in this study agree well with our earlier results for the propagation and LIS of CR protons, helium nuclei, and antiprotons obtained in the same framework.
Maximum Likelihood Estimation of Nonlinear Structural Equation Models.
ERIC Educational Resources Information Center
Lee, Sik-Yum; Zhu, Hong-Tu
2002-01-01
Developed an EM type algorithm for maximum likelihood estimation of a general nonlinear structural equation model in which the E-step is completed by a Metropolis-Hastings algorithm. Illustrated the methodology with results from a simulation study and two real examples using data from previous studies. (SLD)
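The record above does not spell out the algorithm; the following is a minimal sketch of an EM iteration whose E-step is completed by a Metropolis-Hastings sampler, applied to a toy nonlinear latent-variable model. The model, proposal width, and iteration counts are illustrative assumptions, not the authors' structural equation model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy nonlinear latent-variable model (illustrative, not the authors' SEM):
#   z_i ~ N(mu, 1)  (latent),   y_i | z_i ~ N(z_i + 0.5*z_i**2, 1)  (observed)
mu_true = 1.0
n = 200
z_true = rng.normal(mu_true, 1.0, n)
y = rng.normal(z_true + 0.5 * z_true**2, 1.0)

def log_joint(z, mu):
    # log p(y, z | mu), up to an additive constant
    return -0.5 * (z - mu) ** 2 - 0.5 * (y - (z + 0.5 * z**2)) ** 2

mu = 0.0                       # initial guess
z = y.copy()                   # initial latent states
for it in range(50):           # EM iterations
    # E-step: Metropolis-Hastings sweep targeting p(z | y, mu)
    for _ in range(20):
        prop = z + 0.5 * rng.normal(size=n)          # random-walk proposal
        log_acc = log_joint(prop, mu) - log_joint(z, mu)
        accept = np.log(rng.uniform(size=n)) < log_acc
        z = np.where(accept, prop, z)
    # M-step: maximize the expected complete-data log-likelihood over mu
    mu = z.mean()

print("estimated mu:", mu)     # should land near mu_true
```

Per-component acceptance is valid here because the joint density factorizes over observations; a correlated model would need block updates.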
ERIC Educational Resources Information Center
Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.
2003-01-01
Demonstrated, through simulation, that stationary autoregressive moving average (ARMA) models may be fitted readily when T>N, using normal theory raw maximum likelihood structural equation modeling. Also provides some illustrations based on real data. (SLD)
Maximum likelihood phase-retrieval algorithm: applications.
Nahrstedt, D A; Southwell, W H
1984-12-01
The maximum likelihood estimator approach is shown to be effective in determining the wave front aberration in systems involving laser and flow field diagnostics and optical testing. The robustness of the algorithm enables convergence even in cases of severe wave front error and real, nonsymmetrical, obscured amplitude distributions.
Population Synthesis of Radio and Gamma-ray Pulsars using the Maximum Likelihood Approach
NASA Astrophysics Data System (ADS)
Billman, Caleb; Gonthier, P. L.; Harding, A. K.
2012-01-01
We present the results of a pulsar population synthesis of normal pulsars from the Galactic disk using a maximum likelihood method. We seek to maximize the likelihood of a set of parameters in a Monte Carlo population statistics code to better understand their uncertainties and the confidence region of the model's parameter space. The maximum likelihood method allows for the use of more applicable Poisson statistics in the comparison of distributions of small numbers of detected gamma-ray and radio pulsars. Our code simulates pulsars at birth using Monte Carlo techniques and evolves them to the present assuming initial spatial, kick velocity, magnetic field, and period distributions. Pulsars are spun down to the present and given radio and gamma-ray emission characteristics. We select measured distributions of radio pulsars from the Parkes Multibeam survey and Fermi gamma-ray pulsars to perform a likelihood analysis of the assumed model parameters such as initial period and magnetic field, and radio luminosity. We present the results of a grid search of the parameter space as well as a search for the maximum likelihood using a Markov Chain Monte Carlo method. We express our gratitude for the generous support of the Michigan Space Grant Consortium, of the National Science Foundation (REU and RUI), the NASA Astrophysics Theory and Fundamental Program and the NASA Fermi Guest Investigator Program.
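For the binned comparison of small numbers of detected pulsars, the Poisson log-likelihood referred to above can be written down directly. A minimal sketch follows, with made-up bin counts standing in for the Parkes and Fermi distributions.

```python
import numpy as np
from scipy.special import gammaln

def poisson_loglike(observed, expected):
    """Log-likelihood of observed bin counts given model expectations,
    ln L = sum_i [ n_i ln(mu_i) - mu_i - ln(n_i!) ]."""
    observed = np.asarray(observed, dtype=float)
    expected = np.asarray(expected, dtype=float)
    return np.sum(observed * np.log(expected) - expected - gammaln(observed + 1.0))

# Illustrative counts of detected pulsars per period bin (made-up numbers)
detected = [3, 7, 12, 9, 4, 1]
model_a  = [2.5, 8.0, 11.0, 8.5, 5.0, 1.5]   # candidate model expectations
model_b  = [6.0, 6.0, 6.0, 6.0, 6.0, 6.0]

print("ln L(model a):", poisson_loglike(detected, model_a))
print("ln L(model b):", poisson_loglike(detected, model_b))  # lower -> disfavored
```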
Wu, Yufeng
2012-03-01
Incomplete lineage sorting can cause incongruence between the phylogenetic history of genes (the gene tree) and that of the species (the species tree), which can complicate the inference of phylogenies. In this article, I present a new coalescent-based algorithm for species tree inference with maximum likelihood. I first describe an improved method for computing the probability of a gene tree topology given a species tree, which is much faster than an existing algorithm by Degnan and Salter (2005). Based on this method, I develop a practical algorithm that takes a set of gene tree topologies and infers species trees with maximum likelihood. This algorithm searches for the best species tree by starting from initial species trees and performing heuristic search to obtain better trees with higher likelihood. This algorithm, called STELLS (which stands for Species Tree InfErence with Likelihood for Lineage Sorting), has been implemented in a program that is downloadable from the author's web page. The simulation results show that the STELLS algorithm is more accurate than an existing maximum likelihood method for many datasets, especially when there is noise in gene trees. I also show that the STELLS algorithm is efficient and can be applied to real biological datasets. © 2011 The Author. Evolution © 2011 The Society for the Study of Evolution.
Hipp, Andrew L; Manos, Paul S; González-Rodríguez, Antonio; Hahn, Marlene; Kaproth, Matthew; McVay, John D; Avalos, Susana Valencia; Cavender-Bares, Jeannine
2018-01-01
Oaks (Quercus, Fagaceae) are the dominant tree genus of North America in species number and biomass, and Mexico is a global center of oak diversity. Understanding the origins of oak diversity is key to understanding biodiversity of northern temperate forests. A phylogenetic study of biogeography, niche evolution and diversification patterns in Quercus was performed using 300 samples representing 146 species. Next-generation sequencing data were generated using the restriction-site associated DNA (RAD-seq) method. A time-calibrated maximum likelihood phylogeny was inferred and analyzed with bioclimatic, soils, and leaf habit data to reconstruct the biogeographic and evolutionary history of the American oaks. Our highly resolved phylogeny demonstrates sympatric parallel diversification in climatic niche, leaf habit, and diversification rates. The two major American oak clades arose in what is now the boreal zone and radiated, in parallel, from eastern North America into Mexico and Central America. Oaks adapted rapidly to niche transitions. The Mexican oaks are particularly numerous, not because Mexico is a center of origin, but because of high rates of lineage diversification associated with high rates of evolution along moisture gradients and between the evergreen and deciduous leaf habits. Sympatric parallel diversification in the oaks has shaped the diversity of North American forests. © 2017 The Authors. New Phytologist © 2017 New Phytologist Trust.
Neumann, M; Herten, D P; Dietrich, A; Wolfrum, J; Sauer, M
2000-02-25
The first capillary array scanner for time-resolved fluorescence detection in parallel capillary electrophoresis based on semiconductor technology is described. The system consists essentially of a confocal fluorescence microscope and an x,y-microscope scanning stage. Fluorescence of the labelled probe molecules was excited using a short-pulse diode laser emitting at 640 nm with a repetition rate of 50 MHz. Using a single filter system, the fluorescence decays of different labels were detected by an avalanche photodiode in combination with a PC plug-in card for time-correlated single-photon counting (TCSPC). The time-resolved fluorescence signals were analyzed and identified by a maximum likelihood estimator (MLE). The x,y-microscope scanning stage allows for discontinuous, bidirectional scanning of up to 16 capillaries in an array, resulting in longer fluorescence collection times per capillary compared to scanners working in a continuous mode. Procedures for synchronizing the alignment and measurement processes were developed to allow for data acquisition without overhead. Detection limits in the subzeptomol range for different dye molecules separated in parallel capillaries have been achieved. In addition, we report on parallel time-resolved detection and separation of more than 400 bases of single-base-extension DNA fragments in capillary array electrophoresis. Using only semiconductor technology, the presented technique represents a low-cost alternative for high-throughput DNA sequencing in parallel capillaries.
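As a rough illustration of how a maximum likelihood estimator can identify labels from time-resolved fluorescence, the sketch below scores photon arrival times against candidate exponential decay lifetimes. The lifetimes, photon counts, and the neglect of the finite excitation window and instrument response are all simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def loglike(times, tau):
    # Log-likelihood of photon arrival times for an exponential decay,
    # ln L = sum_i [ -t_i / tau - ln(tau) ]   (window truncation ignored)
    return np.sum(-times / tau - np.log(tau))

# Candidate labels with known fluorescence lifetimes in ns (illustrative values)
candidates = {"dye A": 1.5, "dye B": 2.8, "dye C": 3.9}

# Simulated burst of photons from a label with a 2.8 ns lifetime
photons = rng.exponential(2.8, size=300)

scores = {name: loglike(photons, tau) for name, tau in candidates.items()}
print(max(scores, key=scores.get))   # -> 'dye B' in most runs
```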
NASA Astrophysics Data System (ADS)
Sellentin, Elena; Heavens, Alan F.
2018-01-01
We investigate whether a Gaussian likelihood, as routinely assumed in the analysis of cosmological data, is supported by simulated survey data. We define test statistics, based on a novel method that first destroys Gaussian correlations in a data set, and then measures the non-Gaussian correlations that remain. This procedure flags pairs of data points that depend on each other in a non-Gaussian fashion, and thereby identifies where the assumption of a Gaussian likelihood breaks down. Using this diagnosis, we find that non-Gaussian correlations in the CFHTLenS cosmic shear correlation functions are significant. With a simple exclusion of the most contaminated data points, the posterior for S_8 is shifted without broadening, but we find no significant reduction in the tension with S_8 derived from Planck cosmic microwave background data. However, we also show that the one-point distributions of the correlation statistics are noticeably skewed, such that sound weak-lensing data sets are intrinsically likely to lead to a systematically low lensing amplitude being inferred. The detected non-Gaussianities get larger with increasing angular scale, such that for future wide-angle surveys such as Euclid or LSST, with their very small statistical errors, the large-scale modes are expected to be increasingly affected. The shifts in posteriors may then not be negligible, and we recommend that these diagnostic tests be run as part of future analyses.
Rapidly moving cosmic strings and chronology protection
NASA Astrophysics Data System (ADS)
Ori, Amos
1991-10-01
Recently, Gott has provided a family of solutions of the Einstein equations describing pairs of parallel cosmic strings in motion. He has shown that if the strings' relative velocity is sufficiently high, there exist closed timelike curves (CTC's) in the spacetime. Here we show that if there are CTC's in such a solution, then every t=const hypersurface in the spacetime intersects CTC's. Therefore, these solutions do not contradict the chronology protection conjecture of Hawking.
On the effect of the neutral Hydrogen density on the 26 day variations of galactic cosmic rays
NASA Astrophysics Data System (ADS)
Engelbrecht, Nicholas; Burger, Renier; Ferreira, Stefan; Hitge, Mariette
Preliminary results of a 3D, steady-state, ab initio cosmic-ray modulation code are presented. This modulation code utilizes analytical expressions for the parallel and perpendicular mean free paths based on the work of Teufel and Schlickeiser (2003) and Shalchi et al. (2004), incorporating the model of Breech et al. (2008) for the 2D variance, correlation scale, and normalized cross helicity. The effect of such a model for basic turbulence quantities, coupled with a 3D model for the neutral hydrogen density, on the 26-day variations of cosmic rays is investigated, utilizing a Schwadron-Parker hybrid heliospheric magnetic field.
Estimating the variance for heterogeneity in arm-based network meta-analysis.
Piepho, Hans-Peter; Madden, Laurence V; Roger, James; Payne, Roger; Williams, Emlyn R
2018-04-19
Network meta-analysis can be implemented by using arm-based or contrast-based models. Here we focus on arm-based models and fit them using generalized linear mixed model procedures. Full maximum likelihood (ML) estimation leads to biased trial-by-treatment interaction variance estimates for heterogeneity. Thus, our objective is to investigate alternative approaches to variance estimation that reduce bias compared with full ML. Specifically, we use penalized quasi-likelihood/pseudo-likelihood and hierarchical (h) likelihood approaches. In addition, we consider a novel model modification that yields estimators akin to the residual maximum likelihood estimator for linear mixed models. The proposed methods are compared by simulation, and 2 real datasets are used for illustration. Simulations show that penalized quasi-likelihood/pseudo-likelihood and h-likelihood reduce bias and yield satisfactory coverage rates. Sum-to-zero restriction and baseline contrasts for random trial-by-treatment interaction effects, as well as a residual ML-like adjustment, also reduce bias compared with an unconstrained model when ML is used, but coverage rates are not quite as good. Penalized quasi-likelihood/pseudo-likelihood and h-likelihood are therefore recommended. Copyright © 2018 John Wiley & Sons, Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daniel, Scott F.; Linder, Eric V.; Lawrence Berkeley National Laboratory, Berkeley, California
Deviations from general relativity, such as could be responsible for the cosmic acceleration, would influence the growth of large-scale structure and the deflection of light by that structure. We clarify the relations between several different model-independent approaches to deviations from general relativity appearing in the literature, devising a translation table. We examine current constraints on such deviations, using weak gravitational lensing data of the CFHTLS and COSMOS surveys, cosmic microwave background radiation data of WMAP5, and supernova distance data of Union2. A Markov chain Monte Carlo likelihood analysis of the parameters over various redshift ranges yields consistency with general relativity at the 95% confidence level.
On Muthen's Maximum Likelihood for Two-Level Covariance Structure Models
ERIC Educational Resources Information Center
Yuan, Ke-Hai; Hayashi, Kentaro
2005-01-01
Data in social and behavioral sciences are often hierarchically organized. Special statistical procedures that take into account the dependence of such observations have been developed. Among procedures for 2-level covariance structure analysis, Muthen's maximum likelihood (MUML) has the advantage of easier computation and faster convergence. When…
Maximum Likelihood Estimation of Nonlinear Structural Equation Models with Ignorable Missing Data
ERIC Educational Resources Information Center
Lee, Sik-Yum; Song, Xin-Yuan; Lee, John C. K.
2003-01-01
The existing maximum likelihood theory and its computer software in structural equation modeling are established on the basis of linear relationships among latent variables with fully observed data. However, in social and behavioral sciences, nonlinear relationships among the latent variables are important for establishing more meaningful models…
Mixture Rasch Models with Joint Maximum Likelihood Estimation
ERIC Educational Resources Information Center
Willse, John T.
2011-01-01
This research provides a demonstration of the utility of mixture Rasch models. Specifically, a model capable of estimating a mixture partial credit model using joint maximum likelihood is presented. Like the partial credit model, the mixture partial credit model has the beneficial feature of being appropriate for analysis of assessment data…
Consistency of Rasch Model Parameter Estimation: A Simulation Study.
ERIC Educational Resources Information Center
van den Wollenberg, Arnold L.; And Others
1988-01-01
The unconditional--simultaneous--maximum likelihood (UML) estimation procedure for the one-parameter logistic model produces biased estimators. The UML method is inconsistent and is not a good alternative to the conditional maximum likelihood method, at least with small numbers of items. The minimum chi-square estimation procedure produces unbiased…
Model uncertainty estimation and risk assessment is essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology, which combines Bayesian Monte Carlo simulation and Maximum Likelihood e...
ERIC Educational Resources Information Center
Casabianca, Jodi M.; Lewis, Charles
2015-01-01
Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…
A Study of Item Bias for Attitudinal Measurement Using Maximum Likelihood Factor Analysis.
ERIC Educational Resources Information Center
Mayberry, Paul W.
A technique for detecting item bias that is responsive to attitudinal measurement considerations is a maximum likelihood factor analysis procedure comparing multivariate factor structures across various subpopulations, often referred to as SIFASP. The SIFASP technique allows for factorial model comparisons in the testing of various hypotheses…
The Effects of Model Misspecification and Sample Size on LISREL Maximum Likelihood Estimates.
ERIC Educational Resources Information Center
Baldwin, Beatrice
The robustness of LISREL computer program maximum likelihood estimates under specific conditions of model misspecification and sample size was examined. The population model used in this study contains one exogenous variable; three endogenous variables; and eight indicator variables, two for each latent variable. Conditions of model…
An EM Algorithm for Maximum Likelihood Estimation of Process Factor Analysis Models
ERIC Educational Resources Information Center
Lee, Taehun
2010-01-01
In this dissertation, an Expectation-Maximization (EM) algorithm is developed and implemented to obtain maximum likelihood estimates of the parameters and the associated standard error estimates characterizing temporal flows for the latent variable time series following stationary vector ARMA processes, as well as the parameters defining the…
NASA Astrophysics Data System (ADS)
Banik, Prabir; Bhadra, Arunava
2017-06-01
It is widely believed that Galactic cosmic rays originate in supernova remnants (SNRs), where they are accelerated by a diffusive shock acceleration (DSA) process in supernova blast waves driven by expanding SNRs. In recent theoretical developments of DSA theory in SNRs, protons are expected to be accelerated in SNRs at least up to the knee energy. If SNRs are the true generators of cosmic rays, they should accelerate not only protons but also heavier nuclei in the right proportions, and the maximum energy attainable by a heavier nucleus should be its atomic number (Z) times that of the proton. In this work, we investigate the implications of the acceleration of heavier nuclei in SNRs for the energetic gamma rays produced in the hadronic interaction of cosmic rays with ambient matter. Our findings suggest that the energy conversion efficiency has to be nearly double for the mixed cosmic-ray composition compared to that of pure protons to explain observations. In addition, the gamma-ray flux above a few tens of TeV would be significantly higher if cosmic-ray particles could attain energies Z times the knee energy in lieu of 200 TeV, as suggested earlier for nonamplified magnetic fields. The two stated maximum-energy paradigms will be discriminated in the future by upcoming gamma-ray experiments like the Cherenkov Telescope Array (CTA).
NASA Technical Reports Server (NTRS)
1979-01-01
The computer program Linear SCIDNT which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.
Cosmology with the cosmic microwave background temperature-polarization correlation
NASA Astrophysics Data System (ADS)
Couchot, F.; Henrot-Versillé, S.; Perdereau, O.; Plaszczynski, S.; Rouillé d'Orfeuil, B.; Spinelli, M.; Tristram, M.
2017-06-01
We demonstrate that the cosmic microwave background (CMB) temperature-polarization cross-correlation provides accurate and robust constraints on cosmological parameters. We compare them with the results from temperature or polarization and investigate the impact of foregrounds, cosmic variance, and instrumental noise. This analysis makes use of the Planck high-ℓ HiLLiPOP likelihood based on angular power spectra, which takes into account systematics from the instrument and foreground residuals directly modelled using Planck measurements. The temperature-polarization correlation (TE) spectrum is less contaminated by astrophysical emissions than the temperature power spectrum (TT), allowing constraints that are less sensitive to foreground uncertainties to be derived. For ΛCDM parameters, TE gives very competitive results compared to TT. For basic ΛCDM model extensions (such as A_L, ∑m_ν, or N_eff), it is still limited by the instrumental noise level in the polarization maps.
Maximum-likelihood soft-decision decoding of block codes using the A* algorithm
NASA Technical Reports Server (NTRS)
Ekroot, L.; Dolinar, S.
1994-01-01
The A* algorithm finds the path in a finite depth binary tree that optimizes a function. Here, it is applied to maximum-likelihood soft-decision decoding of block codes where the function optimized over the codewords is the likelihood function of the received sequence given each codeword. The algorithm considers codewords one bit at a time, making use of the most reliable received symbols first and pursuing only the partially expanded codewords that might be maximally likely. A version of the A* algorithm for maximum-likelihood decoding of block codes has been implemented for block codes up to 64 bits in length. The efficiency of this algorithm makes simulations of codes up to length 64 feasible. This article details the implementation currently in use, compares the decoding complexity with that of exhaustive search and Viterbi decoding algorithms, and presents performance curves obtained with this implementation of the A* algorithm for several codes.
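A compact illustration of the idea, for a (7,4) Hamming code rather than the 64-bit codes discussed above: partial codewords are expanded in order of an optimistic bound (the exact metric so far plus the best conceivable contribution of the undecided positions), so the first complete codeword popped from the priority queue is maximum likelihood. The code, channel model, and noise level are illustrative assumptions.

```python
import heapq
import numpy as np

# Systematic (7,4) Hamming code: c = (d1..d4, p1, p2, p3) with
# p1 = d1^d2^d4, p2 = d1^d3^d4, p3 = d2^d3^d4
def encode(d):
    d1, d2, d3, d4 = d
    return [d1, d2, d3, d4, d1 ^ d2 ^ d4, d1 ^ d3 ^ d4, d2 ^ d3 ^ d4]

def astar_ml_decode(r):
    """Maximum-likelihood decoding of a BPSK-modulated (7,4) Hamming codeword
    from soft received values r, maximizing the correlation sum_i r_i*(1-2c_i)."""
    heap = [(-np.abs(r).sum(), ())]        # (-upper bound, info bits so far)
    while heap:
        neg_bound, bits = heapq.heappop(heap)
        if len(bits) == 4:                 # complete path: its bound is exact
            return list(bits), encode(list(bits))
        for b in (0, 1):
            new = bits + (b,)
            d = len(new)
            if d == 4:                     # parity bits now fully determined
                c = encode(list(new))
                exact = sum(r[i] * (1 - 2 * c[i]) for i in range(7))
                heur = 0.0
            else:                          # optimistic bound on the rest
                exact = sum(r[i] * (1 - 2 * new[i]) for i in range(d))
                heur = np.abs(r[d:]).sum()
            heapq.heappush(heap, (-(exact + heur), new))

rng = np.random.default_rng(2)
sent = encode([1, 0, 1, 1])
r = np.array([1 - 2 * c for c in sent]) + 0.7 * rng.normal(size=7)
print(astar_ml_decode(r))
```

The heuristic is admissible because ignoring the code constraints can only overestimate the achievable correlation, which is what guarantees that the first fully expanded codeword is optimal.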
NASA Astrophysics Data System (ADS)
Wang, Yonggang; Xiao, Yong; Cheng, Xinyi; Li, Deng; Wang, Liwei
2016-02-01
For the continuous crystal-based positron emission tomography (PET) detector built in our lab, a maximum likelihood algorithm adapted for implementation on a field programmable gate array (FPGA) is proposed to estimate the three-dimensional (3D) coordinate of the interaction position from the single-end detected scintillation light response. The row-sum and column-sum readout scheme organizes the 64 photomultiplier (PMT) channels into eight row signals and eight column signals to be read out for independent X- and Y-coordinate estimation. Using reference events irradiated at a known oblique angle, the probability density function (PDF) for each depth-of-interaction (DOI) segment is generated, with which reference events from perpendicular irradiation are assigned to DOI segments to generate the PDFs for X and Y estimation in each DOI layer. Evaluated on experimental data, the algorithm achieves an average X resolution of 1.69 mm along the central X-axis and a DOI resolution of 3.70 mm over the whole thickness (0-10 mm) of the crystal. The performance improvements from 2D estimation to the 3D algorithm are also presented. Benefiting from the abundant resources of the FPGA and a hierarchical storage arrangement, the whole algorithm can be implemented in a middle-scale FPGA. With a parallel pipelined structure, the 3D position estimator on the FPGA achieves a processing throughput of 15 M events/s, which is sufficient for the requirements of real-time PET imaging.
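The core of such an estimator is a likelihood lookup over candidate positions. The sketch below substitutes an assumed Gaussian light-spread model and Gaussian channel noise for the measured PDFs and DOI layers, and a plain argmax over a 1D grid; it illustrates the ML search, not the FPGA implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Candidate interaction positions along X (mm); the real detector also scans
# Y and the DOI layer, with PDFs measured from reference events.
positions = np.linspace(-15, 15, 61)
pmt_x = np.linspace(-14, 14, 8)          # row-sum channel centers (assumed)

def mean_response(x):
    # Assumed light-spread model: Gaussian light cone over the PMT rows
    return 100.0 * np.exp(-0.5 * ((pmt_x - x) / 6.0) ** 2)

sigma = 4.0                              # assumed per-channel noise

def ml_estimate(signal):
    # Maximize sum_ch log N(signal_ch | mean_response(x)_ch, sigma^2) over x
    loglik = [-0.5 * np.sum((signal - mean_response(x)) ** 2) / sigma**2
              for x in positions]
    return positions[int(np.argmax(loglik))]

event = mean_response(3.2) + rng.normal(0, sigma, 8)   # simulated event
print(ml_estimate(event))                              # near 3.2
```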
Inference from the small scales of cosmic shear with current and future Dark Energy Survey data
MacCrann, N.; Aleksić, J.; Amara, A.; ...
2016-11-05
Cosmic shear is sensitive to fluctuations in the cosmological matter density field, including on small physical scales, where matter clustering is affected by baryonic physics in galaxies and galaxy clusters, such as star formation, supernovae feedback and AGN feedback. While this muddies the cosmological information contained in small-scale cosmic shear measurements, it also means that cosmic shear has the potential to constrain baryonic physics and galaxy formation. We perform an analysis of the Dark Energy Survey (DES) Science Verification (SV) cosmic shear measurements, now extended to smaller scales, using the Mead et al. 2015 halo model to account for baryonic feedback. While the SV data has limited statistical power, we demonstrate using a simulated likelihood analysis that the final DES data will have the statistical power to differentiate among baryonic feedback scenarios. We also explore some of the difficulties in interpreting the small scales in cosmic shear measurements, presenting estimates of the size of several other systematic effects that make inference from small scales difficult, including uncertainty in the modelling of intrinsic alignment on nonlinear scales, `lensing bias', and shape measurement selection effects. For the latter two, we make use of novel image simulations. While future cosmic shear datasets have the statistical power to constrain baryonic feedback scenarios, there are several systematic effects that require improved treatments, in order to make robust conclusions about baryonic feedback.
An evaluation of percentile and maximum likelihood estimators of Weibull parameters
Stanley J. Zarnoch; Tommy R. Dell
1985-01-01
Two methods of estimating the three-parameter Weibull distribution were evaluated by computer simulation and field data comparison. Maximum likelihood estimators (MLE) with bias correction were calculated with the computer routine FITTER (Bailey 1974); percentile estimators (PCT) were those proposed by Zanakis (1979). The MLE estimators had smaller bias and...
ERIC Educational Resources Information Center
Klein, Andreas G.; Muthen, Bengt O.
2007-01-01
In this article, a nonlinear structural equation model is introduced and a quasi-maximum likelihood method for simultaneous estimation and testing of multiple nonlinear effects is developed. The focus of the new methodology lies on efficiency, robustness, and computational practicability. Monte-Carlo studies indicate that the method is highly…
Maximum Likelihood Analysis of Nonlinear Structural Equation Models with Dichotomous Variables
ERIC Educational Resources Information Center
Song, Xin-Yuan; Lee, Sik-Yum
2005-01-01
In this article, a maximum likelihood approach is developed to analyze structural equation models with dichotomous variables that are common in behavioral, psychological and social research. To assess nonlinear causal effects among the latent variables, the structural equation in the model is defined by a nonlinear function. The basic idea of the…
Unclassified Publications of Lincoln Laboratory, 1 January - 31 December 1990. Volume 16
1990-12-31
Expected versus Observed Information in SEM with Incomplete Normal and Nonnormal Data
ERIC Educational Resources Information Center
Savalei, Victoria
2010-01-01
Maximum likelihood is the most common estimation method in structural equation modeling. Standard errors for maximum likelihood estimates are obtained from the associated information matrix, which can be estimated from the sample using either expected or observed information. It is known that, with complete data, estimates based on observed or…
ERIC Educational Resources Information Center
Yang, Xiangdong; Poggio, John C.; Glasnapp, Douglas R.
2006-01-01
The effects of five ability estimators, that is, maximum likelihood estimator, weighted likelihood estimator, maximum a posteriori, expected a posteriori, and Owen's sequential estimator, on the performances of the item response theory-based adaptive classification procedure on multiple categories were studied via simulations. The following…
Bias and Efficiency in Structural Equation Modeling: Maximum Likelihood versus Robust Methods
ERIC Educational Resources Information Center
Zhong, Xiaoling; Yuan, Ke-Hai
2011-01-01
In the structural equation modeling literature, the normal-distribution-based maximum likelihood (ML) method is most widely used, partly because the resulting estimator is claimed to be asymptotically unbiased and most efficient. However, this may not hold when data deviate from normal distribution. Outlying cases or nonnormally distributed data,…
Five Methods for Estimating Angoff Cut Scores with IRT
ERIC Educational Resources Information Center
Wyse, Adam E.
2017-01-01
This article illustrates five different methods for estimating Angoff cut scores using item response theory (IRT) models. These include maximum likelihood (ML), expected a posteriori (EAP), modal a posteriori (MAP), and weighted maximum likelihood (WML) estimators, as well as the most commonly used approach based on translating ratings through the test…
High-Dimensional Exploratory Item Factor Analysis by a Metropolis-Hastings Robbins-Monro Algorithm
ERIC Educational Resources Information Center
Cai, Li
2010-01-01
A Metropolis-Hastings Robbins-Monro (MH-RM) algorithm for high-dimensional maximum marginal likelihood exploratory item factor analysis is proposed. The sequence of estimates from the MH-RM algorithm converges with probability one to the maximum likelihood solution. Details on the computer implementation of this algorithm are provided. The…
John Hogland; Nedret Billor; Nathaniel Anderson
2013-01-01
Discriminant analysis, referred to as maximum likelihood classification within popular remote sensing software packages, is a common supervised technique used by analysts. Polytomous logistic regression (PLR), also referred to as multinomial logistic regression, is an alternative classification approach that is less restrictive, more flexible, and easy to interpret. To...
NASA Technical Reports Server (NTRS)
Grove, R. D.; Bowles, R. L.; Mayhew, S. C.
1972-01-01
A maximum likelihood parameter estimation procedure and program were developed for the extraction of the stability and control derivatives of aircraft from flight test data. Nonlinear six-degree-of-freedom equations describing aircraft dynamics were used to derive sensitivity equations for quasilinearization. The maximum likelihood function with quasilinearization was used to derive the parameter change equations, the covariance matrices for the parameters and measurement noise, and the performance index function. The maximum likelihood estimator was mechanized into an iterative estimation procedure utilizing a real time digital computer and graphic display system. This program was developed for 8 measured state variables and 40 parameters. Test cases were conducted with simulated data for validation of the estimation procedure and program. The program was applied to a V/STOL tilt wing aircraft, a military fighter airplane, and a light single engine airplane. The particular nonlinear equations of motion, derivation of the sensitivity equations, addition of accelerations into the algorithm, operational features of the real time digital system, and test cases are described.
NASA Astrophysics Data System (ADS)
Mahaboob, B.; Venkateswarlu, B.; Sankar, J. Ravi; Balasiddamuni, P.
2017-11-01
This paper uses matrix calculus techniques to obtain the nonlinear least squares estimator (NLSE), the maximum likelihood estimator (MLE), and a linear pseudo-model for the nonlinear regression model. David Pollard and Peter Radchenko [1] explained analytic techniques to compute the NLSE; the present paper introduces an innovative method to compute the NLSE using principles of multivariate calculus, and is concerned with new optimization techniques used to compute the MLE and NLSE. Anh [2] derived the NLSE and MLE of a heteroscedastic regression model. Lemcoff [3] discussed a procedure to obtain a linear pseudo-model for a nonlinear regression model. In this article, a new technique is developed to obtain the linear pseudo-model for the nonlinear regression model using multivariate calculus, and the linear pseudo-model of Edmond Malinvaud [4] is explained in a very different way. David Pollard et al. used empirical process techniques to study the asymptotics of the least-squares estimator for fitting a nonlinear regression function in 2006. In Jae Myung [13] provided a good conceptual introduction to maximum likelihood estimation in his work "Tutorial on maximum likelihood estimation".
Can, Seda; van de Schoot, Rens; Hox, Joop
2015-06-01
Because variables may be correlated in the social and behavioral sciences, multicollinearity might be problematic. This study investigates the effect of collinearity manipulated at the within and between levels of a two-level confirmatory factor analysis by Monte Carlo simulation. Furthermore, the influence of the size of the intraclass correlation coefficient (ICC) and of the estimation method (maximum likelihood estimation with robust chi-squares and standard errors, or Bayesian estimation) on the convergence rate is investigated. The other variables of interest were the rate of inadmissible solutions and the relative parameter and standard error bias at the between level. The results showed that inadmissible solutions were obtained when there was between-level collinearity and the estimation method was maximum likelihood. In the within-level multicollinearity condition, all of the solutions were admissible but the bias values were higher compared with the between-level collinearity condition. Bayesian estimation appeared to be robust in obtaining admissible parameters but the relative bias was higher than for maximum likelihood estimation. Finally, as expected, high ICC produced less biased results compared to medium ICC conditions.
Maximum Likelihood Estimation with Emphasis on Aircraft Flight Data
NASA Technical Reports Server (NTRS)
Iliff, K. W.; Maine, R. E.
1985-01-01
Accurate modeling of flexible space structures is an important field that is currently under investigation. Parameter estimation, using methods such as maximum likelihood, is one of the ways that the model can be improved. The maximum likelihood estimator has been used to extract stability and control derivatives from flight data for many years. Most of the literature on aircraft estimation concentrates on new developments and applications, assuming familiarity with basic estimation concepts. Some of these basic concepts are presented. The maximum likelihood estimator and the aircraft equations of motion that the estimator uses are briefly discussed. The basic concepts of minimization and estimation are examined for a simple computed aircraft example. The cost functions that are to be minimized during estimation are defined and discussed. Graphic representations of the cost functions are given to help illustrate the minimization process. Finally, the basic concepts are generalized, and estimation from flight data is discussed. Specific examples of estimation of structural dynamics are included. Some of the major conclusions for the computed example are also developed for the analysis of flight data.
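As a concrete miniature of the output-error approach described above, the sketch below estimates two parameters of a scalar linear system by Gauss-Newton minimization of the quadratic cost that maximum likelihood reduces to under Gaussian measurement noise. The system, noise level, and finite-difference sensitivities are illustrative assumptions, not the aircraft equations of motion.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated "flight data" for a scalar linear system x_{k+1} = a*x_k + b*u_k
a_true, b_true = 0.9, 0.5
u = np.sin(0.3 * np.arange(200))                 # known input history

def simulate(theta):
    a, b = theta
    x = np.zeros(len(u) + 1)
    for k in range(len(u)):
        x[k + 1] = a * x[k] + b * u[k]
    return x[1:]

z = simulate((a_true, b_true)) + 0.05 * rng.normal(size=len(u))  # measurements

# With Gaussian noise, maximum likelihood reduces to minimizing the cost
# J(theta) = 0.5 * sum (z - y(theta))^2; minimize by Gauss-Newton using
# finite-difference sensitivities dy/dtheta.
theta = np.array([0.5, 0.1])                     # initial guess
for it in range(10):
    y = simulate(theta)
    resid = z - y
    S = np.column_stack([(simulate(theta + e) - y) / 1e-6
                         for e in 1e-6 * np.eye(2)])   # sensitivity matrix
    step = np.linalg.solve(S.T @ S, S.T @ resid)       # Gauss-Newton update
    theta = theta + step

print("estimated (a, b):", theta)                # close to (0.9, 0.5)
```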
NASA Astrophysics Data System (ADS)
Dunkley, J.; Spergel, D. N.; Komatsu, E.; Hinshaw, G.; Larson, D.; Nolta, M. R.; Odegard, N.; Page, L.; Bennett, C. L.; Gold, B.; Hill, R. S.; Jarosik, N.; Weiland, J. L.; Halpern, M.; Kogut, A.; Limon, M.; Meyer, S. S.; Tucker, G. S.; Wollack, E.; Wright, E. L.
2009-08-01
We describe a sampling method to estimate the polarized cosmic microwave background (CMB) signal from observed maps of the sky. We use a Metropolis-within-Gibbs algorithm to estimate the polarized CMB map, containing Q and U Stokes parameters at each pixel, and its covariance matrix. These can be used as inputs for cosmological analyses. The polarized sky signal is parameterized as the sum of three components: CMB, synchrotron emission, and thermal dust emission. The polarized Galactic components are modeled with spatially varying power-law spectral indices for the synchrotron, and a fixed power law for the dust, and their component maps are estimated as by-products. We apply the method to simulated low-resolution maps with pixels of side 7.2 deg, using diagonal and full noise realizations drawn from the WMAP noise matrices. The CMB maps are recovered with goodness of fit consistent with errors. Computing the likelihood of the E-mode power in the maps as a function of optical depth to reionization, τ, for fixed temperature anisotropy power, we recover τ = 0.091 ± 0.019 for a simulation with input τ = 0.1, and mean τ = 0.098 averaged over 10 simulations. A "null" simulation with no polarized CMB signal has maximum likelihood consistent with τ = 0. The method is applied to the five-year WMAP data, using the K, Ka, Q, and V channels. We find τ = 0.090 ± 0.019, compared to τ = 0.086 ± 0.016 from the template-cleaned maps used in the primary WMAP analysis. The synchrotron spectral index, β, averaged over high signal-to-noise pixels with standard deviation σ(β) < 0.25, but excluding ~6% of the sky masked in the Galactic plane, is -3.03 ± 0.04. This estimate does not vary significantly with Galactic latitude, although includes an informative prior. WMAP is the result of a partnership between Princeton University and NASA's Goddard Space Flight Center. Scientific guidance is provided by the WMAP Science Team.
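A Metropolis-within-Gibbs sampler of the kind described alternates Metropolis updates of one parameter block conditional on the others. The toy target below stands in for the joint posterior over the CMB map and foreground spectral indices; the target density, proposal widths, and chain length are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy unnormalized 2-parameter log-posterior (illustrative only)
def log_post(a, b):
    return -0.5 * (a**2 + (b - a**2) ** 2 / 0.5)

a, b = 0.0, 0.0
chain = []
for it in range(20000):
    # Gibbs block 1: Metropolis update of a with b held fixed
    a_prop = a + 0.5 * rng.normal()
    if np.log(rng.uniform()) < log_post(a_prop, b) - log_post(a, b):
        a = a_prop
    # Gibbs block 2: Metropolis update of b with a held fixed
    b_prop = b + 0.5 * rng.normal()
    if np.log(rng.uniform()) < log_post(a, b_prop) - log_post(a, b):
        b = b_prop
    chain.append((a, b))

samples = np.array(chain[2000:])          # drop burn-in
print(samples.mean(axis=0), samples.std(axis=0))
```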
Constraints on the Galactic Halo Dark Matter From FERMI-LAT Diffuse Measurements
Ackermann, M.; Ajello, M.; Atwood, W. B.; ...
2012-11-28
For this study, we have performed an analysis of the diffuse gamma-ray emission with the Fermi Large Area Telescope (LAT) in the Milky Way halo region, searching for a signal from dark matter annihilation or decay. In the absence of a robust dark matter signal, constraints are presented. We consider both gamma rays produced directly in the dark matter annihilation/decay and those produced by inverse Compton scattering of the e+/e- produced in the annihilation/decay. Conservative limits are derived requiring that the dark matter signal does not exceed the observed diffuse gamma-ray emission. A second set of more stringent limits is derived based on modeling the foreground astrophysical diffuse emission using the GALPROP code. Uncertainties in the height of the diffusive cosmic-ray halo, the distribution of the cosmic-ray sources in the Galaxy, the index of the injection cosmic-ray electron spectrum, and the column density of the interstellar gas are taken into account using a profile likelihood formalism, while the parameters governing the cosmic-ray propagation have been derived from fits to local cosmic-ray data. In conclusion, the resulting limits impact the range of particle masses over which dark matter thermal production in the early universe is possible, and challenge the interpretation of the PAMELA/Fermi-LAT cosmic-ray anomalies as the annihilation of dark matter.
Exposure to galactic cosmic radiation and solar energetic particles.
O'Sullivan, D
2007-01-01
Several investigations of the radiation field at aircraft altitudes have been undertaken during solar cycle 23, which occurred in the period 1993-2003. The radiation field is produced by the passage of galactic cosmic rays and their nuclear reaction products, as well as solar energetic particles, through the Earth's atmosphere. Galactic cosmic rays reach a maximum intensity when the Sun is least active and are at minimum intensity during the solar maximum period. During solar maximum, an increased number of coronal mass ejections and solar flares produce high-energy solar particles which can also penetrate down to aircraft altitudes. It is found that the very complicated field resulting from these processes varies with altitude, latitude, and stage of the solar cycle. By employing several active and passive detectors, the whole range of radiation types and energies was encompassed. In-flight data were obtained with the co-operation of many airlines and NASA. The EURADOS Aircraft Crew in-flight database was used for comparison with the predictions of various computer codes. A brief outline of some recent studies of exposure to radiation in Earth orbit concludes this contribution.
Does electromagnetic radiation accelerate galactic cosmic rays?
NASA Technical Reports Server (NTRS)
Eichler, D.
1977-01-01
The 'reactor' theories of Tsytovich and collaborators (1973) of cosmic-ray acceleration by electromagnetic radiation are examined in the context of galactic cosmic rays. It is shown that any isotropic synchrotron or Compton reactors with reasonable astrophysical parameters can yield particles with a maximum relativistic factor of only about 10,000. If they are to produce particles with higher relativistic factors, the losses due to inverse Compton scattering of the electromagnetic radiation in them outweigh the acceleration, and this violates the assumptions of the theory. This is a critical restriction in the context of galactic cosmic rays, which have a power-law spectrum extending up to a relativistic factor of 1 million.
Erich Regener and the ionisation maximum of the atmosphere
NASA Astrophysics Data System (ADS)
Carlson, P.; Watson, A. A.
2014-12-01
In the 1930s the German physicist Erich Regener (1881-1955) did important work on the measurement of the rate of production of ionisation deep under water and in the atmosphere. Along with one of his students, Georg Pfotzer, he discovered the altitude at which the production of ionisation in the atmosphere reaches a maximum, often, but misleadingly, called the Pfotzer maximum. Regener was one of the first to estimate the energy density of cosmic rays, an estimate that was used by Baade and Zwicky to bolster their postulate that supernovae might be their source. Yet Regener's name is less recognised by present-day cosmic ray physicists than it should be, largely because in 1937 he was forced to take early retirement by the National Socialists as his wife had Jewish ancestors. In this paper we briefly review his work on cosmic rays and recommend an alternative naming of the ionisation maximum. The influence that Regener had on the field through his son, his son-in-law, his grandsons and his students, and through his links with Rutherford's group in Cambridge, is discussed in an appendix. Regener was nominated for the Nobel Prize in Physics by Schrödinger in 1938. He died in 1955 at the age of 73.
NASA Astrophysics Data System (ADS)
Storm, Emma; Weniger, Christoph; Calore, Francesca
2017-08-01
We present SkyFACT (Sky Factorization with Adaptive Constrained Templates), a new approach for studying, modeling and decomposing diffuse gamma-ray emission. Like most previous analyses, the approach relies on predictions from cosmic-ray propagation codes like GALPROP and DRAGON. However, in contrast to previous approaches, we account for the fact that models are not perfect and allow for a very large number (≳10^5) of nuisance parameters to parameterize these imperfections. We combine methods of image reconstruction and adaptive spatio-spectral template regression in one coherent hybrid approach. To this end, we use penalized Poisson likelihood regression, with regularization functions that are motivated by the maximum entropy method. We introduce methods to efficiently handle the high dimensionality of the convex optimization problem as well as the associated semi-sparse covariance matrix, using the L-BFGS-B algorithm and Cholesky factorization. We test the method both on synthetic data as well as on gamma-ray emission from the inner Galaxy, |l| < 90° and |b| < 20°, as observed by the Fermi Large Area Telescope. We finally define a simple reference model that removes most of the residual emission from the inner Galaxy, based on conventional diffuse emission components as well as components for the Fermi bubbles, the Fermi Galactic center excess, and extended sources along the Galactic disk. Variants of this reference model can serve as basis for future studies of diffuse emission in and outside the Galactic disk.
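A penalized Poisson likelihood regression of the kind described, solved with L-BFGS-B, can be sketched in a few lines on a synthetic one-dimensional "sky". The template, the form of the smoothness penalty, and the regularization strength are illustrative assumptions, not SkyFACT's actual regularizers.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)

# Synthetic 1-D "sky": counts k_i ~ Poisson(template_i * exp(f_i)), where f
# is a per-pixel nuisance field parameterizing template imperfections.
npix = 100
template = 20.0 * (1.0 + 0.5 * np.sin(np.arange(npix) / 8.0))
f_true = 0.3 * np.sin(np.arange(npix) / 3.0)
counts = rng.poisson(template * np.exp(f_true))

lam = 5.0  # regularization strength (a tuning assumption)

def objective(f):
    mu = template * np.exp(f)
    negloglike = np.sum(mu - counts * np.log(mu))       # Poisson, no constant
    penalty = lam * np.sum(np.diff(f) ** 2)             # smoothness prior
    grad_ll = mu - counts                               # d(negloglike)/df
    grad_pen = np.zeros(npix)
    grad_pen[:-1] += -2 * lam * np.diff(f)
    grad_pen[1:] += 2 * lam * np.diff(f)
    return negloglike + penalty, grad_ll + grad_pen

res = minimize(objective, np.zeros(npix), jac=True, method="L-BFGS-B")
print(res.success, np.abs(res.x - f_true).mean())       # small mean error
```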
Approximated maximum likelihood estimation in multifractal random walks
NASA Astrophysics Data System (ADS)
Løvsletten, O.; Rypdal, M.
2012-04-01
We present an approximated maximum likelihood method for the multifractal random walk processes of [E. Bacry et al., Phys. Rev. E 64, 026103 (2001)]. The likelihood is computed using a Laplace approximation and a truncation in the dependency structure for the latent volatility. The procedure is implemented as a package in the R computer language. Its performance is tested on synthetic data and compared to an inference approach based on the generalized method of moments. The method is applied to estimate parameters for various financial stock indices.
Planck intermediate results. XVI. Profile likelihoods for cosmological parameters
NASA Astrophysics Data System (ADS)
Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartlett, J. G.; Battaner, E.; Benabed, K.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bobin, J.; Bonaldi, A.; Bond, J. R.; Bouchet, F. R.; Burigana, C.; Cardoso, J.-F.; Catalano, A.; Chamballu, A.; Chiang, H. C.; Christensen, P. R.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Couchot, F.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Dickinson, C.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Dupac, X.; Enßlin, T. A.; Eriksen, H. K.; Finelli, F.; Forni, O.; Frailis, M.; Franceschi, E.; Galeotta, S.; Galli, S.; Ganga, K.; Giard, M.; Giraud-Héraud, Y.; González-Nuevo, J.; Górski, K. M.; Gregorio, A.; Gruppuso, A.; Hansen, F. K.; Harrison, D. L.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lawrence, C. R.; Leonardi, R.; Liddle, A.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maffei, B.; Maino, D.; Mandolesi, N.; Maris, M.; Martin, P. G.; Martínez-González, E.; Masi, S.; Massardi, M.; Matarrese, S.; Mazzotta, P.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Noviello, F.; Novikov, D.; Novikov, I.; Oxborrow, C. A.; Pagano, L.; Pajot, F.; Paoletti, D.; Pasian, F.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski∗, S.; Pointecouteau, E.; Polenta, G.; Popa, L.; Pratt, G. W.; Puget, J.-L.; Rachen, J. P.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Roudier, G.; Rouillé d'Orfeuil, B.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Savelainen, M.; Savini, G.; Spencer, L. D.; Spinelli, M.; Starck, J.-L.; Sureau, F.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Umana, G.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; White, M.; Yvon, D.; Zacchei, A.; Zonca, A.
2014-06-01
We explore the 2013 Planck likelihood function with a high-precision multi-dimensional minimizer (Minuit). This allows a refinement of the ΛCDM best-fit solution with respect to previously-released results, and the construction of frequentist confidence intervals using profile likelihoods. The agreement with the cosmological results from the Bayesian framework is excellent, demonstrating the robustness of the Planck results to the statistical methodology. We investigate the inclusion of neutrino masses, where more significant differences may appear due to the non-Gaussian nature of the posterior mass distribution. By applying the Feldman-Cousins prescription, we again obtain results very similar to those of the Bayesian methodology. However, the profile-likelihood analysis of the cosmic microwave background (CMB) combination (Planck+WP+highL) reveals a minimum well within the unphysical negative-mass region. We show that inclusion of the Planck CMB-lensing information regularizes this issue, and provide a robust frequentist upper limit ∑m_ν ≤ 0.26 eV (95% confidence) from the CMB+lensing+BAO data combination.
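A profile likelihood reduces a multi-parameter problem to one dimension by maximizing over nuisance parameters at each fixed value of the parameter of interest. The sketch below does this for a Gaussian mean with unknown variance, where the profiling step has a closed form (a minimizer such as Minuit would be used where it does not); the data and the Δχ² = 3.84 cut for a 95% interval are standard but illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy data: Gaussian with unknown mean (parameter of interest) and unknown
# sigma (nuisance parameter to be profiled out).
x = rng.normal(1.2, 2.0, size=50)

def prof_chi2(mu):
    # -2 ln L maximized over sigma has a closed form for a Gaussian sample
    s2 = np.mean((x - mu) ** 2)          # profiled-out sigma^2
    return len(x) * np.log(s2)           # -2 ln L, up to a constant

mus = np.linspace(-1, 3, 400)
chi2 = np.array([prof_chi2(m) for m in mus])
chi2 -= chi2.min()
inside = mus[chi2 < 3.84]                # Delta chi^2 cut for 95% CL, 1 dof
print("95% interval: [%.2f, %.2f]" % (inside.min(), inside.max()))
```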
Maximum Likelihood Analysis of a Two-Level Nonlinear Structural Equation Model with Fixed Covariates
ERIC Educational Resources Information Center
Lee, Sik-Yum; Song, Xin-Yuan
2005-01-01
In this article, a maximum likelihood (ML) approach for analyzing a rather general two-level structural equation model is developed for hierarchically structured data that are very common in educational and/or behavioral research. The proposed two-level model can accommodate nonlinear causal relations among latent variables as well as effects…
12-mode OFDM transmission using reduced-complexity maximum likelihood detection.
Lobato, Adriana; Chen, Yingkan; Jung, Yongmin; Chen, Haoshuo; Inan, Beril; Kuschnerov, Maxim; Fontaine, Nicolas K; Ryf, Roland; Spinnler, Bernhard; Lankl, Berthold
2015-02-01
We report 163-Gb/s MDM-QPSK-OFDM and 245-Gb/s MDM-8QAM-OFDM transmission over 74 km of few-mode fiber supporting 12 spatial and polarization modes. A low-complexity maximum likelihood detector is employed to enhance the performance of a system impaired by mode-dependent loss.
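For context, exhaustive maximum-likelihood detection of a mode-multiplexed QPSK vector chooses the hypothesis minimizing the Euclidean distance ||y - Hx||²; reduced-complexity detectors prune this search. The sketch below shows the exhaustive baseline for an assumed 3-mode toy channel (the channel model and noise level are illustrative, not the paper's 12-mode system).

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(8)

# Exhaustive ML detection for a small mode-multiplexed QPSK system:
# y = H x + n; choose the symbol vector minimizing ||y - H x||^2.
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
n_modes = 3

H = (rng.normal(size=(n_modes, n_modes)) +
     1j * rng.normal(size=(n_modes, n_modes))) / np.sqrt(2)
x_sent = qpsk[rng.integers(0, 4, n_modes)]
y = H @ x_sent + 0.1 * (rng.normal(size=n_modes) + 1j * rng.normal(size=n_modes))

best, best_metric = None, np.inf
for cand in product(qpsk, repeat=n_modes):       # 4^n_modes hypotheses
    x = np.array(cand)
    metric = np.sum(np.abs(y - H @ x) ** 2)      # Euclidean distance metric
    if metric < best_metric:
        best, best_metric = x, metric

print(np.allclose(best, x_sent))                 # typically True at this SNR
```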
ERIC Educational Resources Information Center
Han, Kyung T.; Guo, Fanmin
2014-01-01
The full-information maximum likelihood (FIML) method makes it possible to estimate and analyze structural equation models (SEM) even when data are partially missing, enabling incomplete data to contribute to model estimation. The cornerstone of FIML is the missing-at-random (MAR) assumption. In (unidimensional) computerized adaptive testing…
Constrained Maximum Likelihood Estimation for Two-Level Mean and Covariance Structure Models
ERIC Educational Resources Information Center
Bentler, Peter M.; Liang, Jiajuan; Tang, Man-Lai; Yuan, Ke-Hai
2011-01-01
Maximum likelihood is commonly used for the estimation of model parameters in the analysis of two-level structural equation models. Constraints on model parameters could be encountered in some situations such as equal factor loadings for different factors. Linear constraints are the most common ones and they are relatively easy to handle in…
Maximum Likelihood Item Easiness Models for Test Theory without an Answer Key
ERIC Educational Resources Information Center
France, Stephen L.; Batchelder, William H.
2015-01-01
Cultural consensus theory (CCT) is a data aggregation technique with many applications in the social and behavioral sciences. We describe the intuition and theory behind a set of CCT models for continuous type data using maximum likelihood inference methodology. We describe how bias parameters can be incorporated into these models. We introduce…
ERIC Educational Resources Information Center
Kelderman, Henk
1992-01-01
Describes algorithms used in the computer program LOGIMO for obtaining maximum likelihood estimates of the parameters in loglinear models. These algorithms are also useful for the analysis of loglinear item-response theory models. Presents modified versions of the iterative proportional fitting and Newton-Raphson algorithms. Simulated data…
ERIC Educational Resources Information Center
Penfield, Randall D.; Bergeron, Jennifer M.
2005-01-01
This article applies a weighted maximum likelihood (WML) latent trait estimator to the generalized partial credit model (GPCM). The relevant equations required to obtain the WML estimator using the Newton-Raphson algorithm are presented, and a simulation study is described that compared the properties of the WML estimator to those of the maximum…
ERIC Educational Resources Information Center
Kieftenbeld, Vincent; Natesan, Prathiba
2012-01-01
Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…
Maximum Likelihood Dynamic Factor Modeling for Arbitrary "N" and "T" Using SEM
ERIC Educational Resources Information Center
Voelkle, Manuel C.; Oud, Johan H. L.; von Oertzen, Timo; Lindenberger, Ulman
2012-01-01
This article has 3 objectives that build on each other. First, we demonstrate how to obtain maximum likelihood estimates for dynamic factor models (the direct autoregressive factor score model) with arbitrary "T" and "N" by means of structural equation modeling (SEM) and compare the approach to existing methods. Second, we go beyond standard time…
NASA Technical Reports Server (NTRS)
Kelly, D. A.; Fermelia, A.; Lee, G. K. F.
1990-01-01
An adaptive Kalman filter design that utilizes recursive maximum likelihood parameter identification is discussed. At the center of this design is the Kalman filter itself, which has the responsibility for attitude determination. At the same time, the identification algorithm is continually identifying the system parameters. The approach is applicable to nonlinear, as well as linear systems. This adaptive Kalman filter design has much potential for real time implementation, especially considering the fast clock speeds, cache memory and internal RAM available today. The recursive maximum likelihood algorithm is discussed in detail, with special attention directed towards its unique matrix formulation. The procedure for using the algorithm is described along with comments on how this algorithm interacts with the Kalman filter.
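One simple way to see the interaction between a Kalman filter and likelihood-based parameter identification is to evaluate the innovation log-likelihood as a function of an unknown noise variance. A truly recursive ML scheme, as in the design above, would update the estimate online; the grid scan and scalar random-walk model below are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)

# Scalar random-walk state with noisy measurements; the measurement-noise
# variance R is unknown and is chosen by maximizing the innovation likelihood.
Q, R_true = 0.01, 0.25
x = np.cumsum(rng.normal(0, np.sqrt(Q), 500))
z = x + rng.normal(0, np.sqrt(R_true), 500)

def innovation_loglike(R):
    """Run a Kalman filter with candidate R; return the log-likelihood of
    the innovations, ln L = -0.5 * sum [ ln(S_k) + nu_k^2 / S_k ]."""
    xhat, P, ll = 0.0, 1.0, 0.0
    for zk in z:
        P = P + Q                       # predict (state transition = 1)
        S = P + R                       # innovation variance
        nu = zk - xhat                  # innovation
        ll += -0.5 * (np.log(S) + nu**2 / S)
        K = P / S                       # Kalman gain
        xhat = xhat + K * nu            # measurement update
        P = (1 - K) * P
    return ll

Rs = np.linspace(0.05, 1.0, 40)
print("ML estimate of R:", Rs[np.argmax([innovation_loglike(R) for R in Rs])])
```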
Maximum Likelihood Compton Polarimetry with the Compton Spectrometer and Imager
NASA Astrophysics Data System (ADS)
Lowell, A. W.; Boggs, S. E.; Chiu, C. L.; Kierans, C. A.; Sleator, C.; Tomsick, J. A.; Zoglauer, A. C.; Chang, H.-K.; Tseng, C.-H.; Yang, C.-Y.; Jean, P.; von Ballmoos, P.; Lin, C.-H.; Amman, M.
2017-10-01
Astrophysical polarization measurements in the soft gamma-ray band are becoming more feasible as detectors with high position and energy resolution are deployed. Previous work has shown that the minimum detectable polarization (MDP) of an ideal Compton polarimeter can be improved by ~21% when an unbinned, maximum likelihood method (MLM) is used instead of the standard approach of fitting a sinusoid to a histogram of azimuthal scattering angles. Here we outline a procedure for implementing this maximum likelihood approach for real, nonideal polarimeters. As an example, we use the recent observation of GRB 160530A with the Compton Spectrometer and Imager. We find that the MDP for this observation is reduced by 20% when the MLM is used instead of the standard method.
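An unbinned maximum likelihood fit of the azimuthal scattering-angle distribution, as opposed to fitting a sinusoid to a histogram, can be sketched as follows for an ideal polarimeter. The modulation amplitude, polarization angle, and event counts are illustrative assumptions, and the fitted angle carries a period-π degeneracy.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(10)

# Azimuthal scattering-angle density for an ideal Compton polarimeter:
#   f(phi) = (1/2pi) * (1 + a*cos(2*(phi - phi0)))
a_true, phi0_true = 0.3, 0.5
phi = rng.uniform(0, 2 * np.pi, 20000)                 # rejection sampling
keep = rng.uniform(0, 1 + a_true, phi.shape) \
       < 1 + a_true * np.cos(2 * (phi - phi0_true))
phi = phi[keep]

def negloglike(p):
    a, phi0 = p
    # Constant 1/(2*pi) dropped; it does not affect the maximization
    return -np.sum(np.log(1 + a * np.cos(2 * (phi - phi0))))

res = minimize(negloglike, x0=[0.1, 0.0],
               bounds=[(-0.99, 0.99), (-np.pi, np.pi)])
print("fitted modulation, angle:", res.x)   # near (0.3, 0.5) mod the degeneracy
```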
NASA Astrophysics Data System (ADS)
Roche-Lima, Abiel; Thulasiram, Ruppa K.
2012-02-01
Finite automata in which each transition is augmented with an output label, in addition to the familiar input label, are considered finite-state transducers. Transducers have been used to analyze some fundamental issues in bioinformatics: weighted finite-state transducers have been proposed for pairwise alignment of DNA and protein sequences as well as for developing kernels for computational biology, and machine learning algorithms for conditional transducers have been implemented and used for DNA sequence analysis. Transducer learning algorithms are based on conditional probability computation, which is carried out using techniques such as pair-database creation, normalization (with maximum-likelihood normalization) and parameter optimization (with Expectation-Maximization, EM). These techniques are intrinsically costly to compute, and even more so when applied to bioinformatics, because the database sizes are large. In this work, we describe a parallel implementation of an algorithm that learns conditional transducers using these techniques. The algorithm is oriented to bioinformatics applications, such as alignments, phylogenetic trees, and other genome evolution studies. Several experiments were performed with the parallel and sequential algorithms on WestGrid (specifically, on the Breeze cluster). The results show that our parallel algorithm is scalable, with execution times reduced considerably relative to the sequential version as the data size parameter is increased. In another experiment, varying the precision parameter, we again obtain smaller execution times with the parallel algorithm. Finally, the number of threads used to execute the parallel algorithm on the Breeze cluster was varied; speedup increases considerably with more threads but levels off at 16 or more threads.
Review of Recent Methodological Developments in Group-Randomized Trials: Part 2-Analysis.
Turner, Elizabeth L; Prague, Melanie; Gallis, John A; Li, Fan; Murray, David M
2017-07-01
In 2004, Murray et al. reviewed methodological developments in the design and analysis of group-randomized trials (GRTs). We have updated that review with developments in analysis of the past 13 years, with a companion article to focus on developments in design. We discuss developments in the topics of the earlier review (e.g., methods for parallel-arm GRTs, individually randomized group-treatment trials, and missing data) and in new topics, including methods to account for multiple-level clustering and alternative estimation methods (e.g., augmented generalized estimating equations, targeted maximum likelihood, and quadratic inference functions). In addition, we describe developments in analysis of alternative group designs (including stepped-wedge GRTs, network-randomized trials, and pseudocluster randomized trials), which require clustering to be accounted for in their design and analysis.
NASA Astrophysics Data System (ADS)
Schratz, Patrick; Herrmann, Tobias; Brenning, Alexander
2017-04-01
Computational and statistical prediction methods such as the support vector machine have gained popularity in remote-sensing applications in recent years and are often compared to more traditional approaches like maximum-likelihood classification. However, the accuracy assessment of such predictive models in a spatial context needs to account for the presence of spatial autocorrelation in geospatial data by using spatial cross-validation and bootstrap strategies instead of their more widely used non-spatial equivalents. The R package sperrorest by A. Brenning [IEEE International Geoscience and Remote Sensing Symposium, 1, 374 (2012)] provides a generic interface for performing (spatial) cross-validation of any statistical or machine-learning technique available in R. Since spatial statistical models as well as flexible machine-learning algorithms can be computationally expensive, parallel computing strategies are required to perform cross-validation efficiently. The most recent major release of sperrorest therefore comes with two new features (aside from improved documentation). The first is parsperrorest(), a parallelized version of sperrorest(). This function features two parallel modes to greatly speed up cross-validation runs. Both parallel modes are platform independent and provide progress information. par.mode = 1 relies on the pbapply package and, depending on the platform, calls parallel::mclapply() or parallel::parApply() in the background; forking is used on Unix systems, while Windows systems use a cluster approach for parallel execution. par.mode = 2 uses the foreach package, which performs cluster parallelization in a different way than the parallel package does. In summary, the robustness of parsperrorest() is increased by the implementation of two independent parallel modes. The second feature, partition.factor.cv(), provides a new way of partitioning the data in sperrorest: it gives the user the possibility to perform cross-validation at the level of some grouping structure. For example, in remote sensing of agricultural land uses, pixels from the same field contain nearly identical information and should thus be placed jointly in either the test set or the training set; a minimal Python analogue of this grouped partitioning is sketched below. Other spatial resampling strategies are already available and can be extended by the user.
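Since sperrorest is an R package, the following is only a loose Python analogue of the grouped, field-level partitioning idea, using scikit-learn's GroupKFold; the synthetic fields, features, and model are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, cross_val_score

# Grouped cross-validation: pixels from the same field share a group ID,
# so a field is never split between training and test sets.

rng = np.random.default_rng(42)
n_fields, pixels_per_field = 50, 20
field_id = np.repeat(np.arange(n_fields), pixels_per_field)

# Pixels within a field share a mean spectrum (spatial autocorrelation).
field_means = rng.normal(size=(n_fields, 4))
X = field_means[field_id] + 0.3 * rng.normal(size=(field_id.size, 4))
y = (field_means[field_id, 0] > 0).astype(int)

cv = GroupKFold(n_splits=5)
scores = cross_val_score(RandomForestClassifier(random_state=0), X, y,
                         cv=cv, groups=field_id)
print(scores.mean())  # grouped estimate, typically below naive CV
```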
Cosmic ray acceleration in magnetic circumstellar bubbles
NASA Astrophysics Data System (ADS)
Zirakashvili, V. N.; Ptuskin, V. S.
2018-03-01
We consider diffusive shock acceleration in interstellar bubbles created by the powerful stellar winds of supernova progenitors. Under moderate stellar wind magnetization, the bubbles are filled with strongly magnetized, low-density gas. It is shown that the maximum energy of particles accelerated in this environment can exceed the "knee" energy in the observed cosmic ray spectrum.
Constraints on Galactic Neutrino Emission with Seven Years of IceCube Data
NASA Astrophysics Data System (ADS)
Aartsen, M. G.; Ackermann, M.; Adams, J.; Aguilar, J. A.; Ahlers, M.; Ahrens, M.; Samarai, I. Al; Altmann, D.; Andeen, K.; Anderson, T.; Ansseau, I.; Anton, G.; Argüelles, C.; Auffenberg, J.; Axani, S.; Bagherpour, H.; Bai, X.; Barron, J. P.; Barwick, S. W.; Baum, V.; Bay, R.; Beatty, J. J.; Becker Tjus, J.; Becker, K.-H.; BenZvi, S.; Berley, D.; Bernardini, E.; Besson, D. Z.; Binder, G.; Bindig, D.; Blaufuss, E.; Blot, S.; Bohm, C.; Börner, M.; Bos, F.; Bose, D.; Böser, S.; Botner, O.; Bourbeau, J.; Bradascio, F.; Braun, J.; Brayeur, L.; Brenzke, M.; Bretz, H.-P.; Bron, S.; Burgman, A.; Carver, T.; Casey, J.; Casier, M.; Cheung, E.; Chirkin, D.; Christov, A.; Clark, K.; Classen, L.; Coenders, S.; Collin, G. H.; Conrad, J. M.; Cowen, D. F.; Cross, R.; Day, M.; de André, J. P. A. M.; De Clercq, C.; DeLaunay, J. J.; Dembinski, H.; De Ridder, S.; Desiati, P.; de Vries, K. D.; de Wasseige, G.; de With, M.; DeYoung, T.; Díaz-Vélez, J. C.; di Lorenzo, V.; Dujmovic, H.; Dumm, J. P.; Dunkman, M.; Eberhardt, B.; Ehrhardt, T.; Eichmann, B.; Eller, P.; Evenson, P. A.; Fahey, S.; Fazely, A. R.; Felde, J.; Filimonov, K.; Finley, C.; Flis, S.; Franckowiak, A.; Friedman, E.; Fuchs, T.; Gaisser, T. K.; Gallagher, J.; Gerhardt, L.; Ghorbani, K.; Giang, W.; Glauch, T.; Glüsenkamp, T.; Goldschmidt, A.; Gonzalez, J. G.; Grant, D.; Griffith, Z.; Haack, C.; Hallgren, A.; Halzen, F.; Hanson, K.; Hebecker, D.; Heereman, D.; Helbing, K.; Hellauer, R.; Hickford, S.; Hignight, J.; Hill, G. C.; Hoffman, K. D.; Hoffmann, R.; Hokanson-Fasig, B.; Hoshina, K.; Huang, F.; Huber, M.; Hultqvist, K.; In, S.; Ishihara, A.; Jacobi, E.; Japaridze, G. S.; Jeong, M.; Jero, K.; Jones, B. J. P.; Kalacynski, P.; Kang, W.; Kappes, A.; Karg, T.; Karle, A.; Katz, U.; Kauer, M.; Keivani, A.; Kelley, J. L.; Kheirandish, A.; Kim, J.; Kim, M.; Kintscher, T.; Kiryluk, J.; Kittler, T.; Klein, S. R.; Kohnen, G.; Koirala, R.; Kolanoski, H.; Köpke, L.; Kopper, C.; Kopper, S.; Koschinsky, J. P.; Koskinen, D. J.; Kowalski, M.; Krings, K.; Kroll, M.; Krückl, G.; Kunnen, J.; Kunwar, S.; Kurahashi, N.; Kuwabara, T.; Kyriacou, A.; Labare, M.; Lanfranchi, J. L.; Larson, M. J.; Lauber, F.; Lennarz, D.; Lesiak-Bzdak, M.; Leuermann, M.; Liu, Q. R.; Lu, L.; Lünemann, J.; Luszczak, W.; Madsen, J.; Maggi, G.; Mahn, K. B. M.; Mancina, S.; Maruyama, R.; Mase, K.; Maunu, R.; McNally, F.; Meagher, K.; Medici, M.; Meier, M.; Menne, T.; Merino, G.; Meures, T.; Miarecki, S.; Micallef, J.; Momenté, G.; Montaruli, T.; Moore, R. W.; Moulai, M.; Nahnhauer, R.; Nakarmi, P.; Naumann, U.; Neer, G.; Niederhausen, H.; Nowicki, S. C.; Nygren, D. R.; Obertacke Pollmann, A.; Olivas, A.; O'Murchadha, A.; Palczewski, T.; Pandya, H.; Pankova, D. V.; Peiffer, P.; Pepper, J. A.; Pérez de los Heros, C.; Pieloth, D.; Pinat, E.; Plum, M.; Price, P. B.; Przybylski, G. T.; Raab, C.; Rädel, L.; Rameez, M.; Rawlins, K.; Reimann, R.; Relethford, B.; Relich, M.; Resconi, E.; Rhode, W.; Richman, M.; Robertson, S.; Rongen, M.; Rott, C.; Ruhe, T.; Ryckbosch, D.; Rysewyk, D.; Sälzer, T.; Sanchez Herrera, S. E.; Sandrock, A.; Sandroos, J.; Sarkar, S.; Sarkar, S.; Satalecka, K.; Schlunder, P.; Schmidt, T.; Schneider, A.; Schoenen, S.; Schöneberg, S.; Schumacher, L.; Seckel, D.; Seunarine, S.; Soldin, D.; Song, M.; Spiczak, G. M.; Spiering, C.; Stachurska, J.; Stanev, T.; Stasik, A.; Stettner, J.; Steuer, A.; Stezelberger, T.; Stokstad, R. G.; Stößl, A.; Strotjohann, N. L.; Sullivan, G. 
W.; Sutherland, M.; Taboada, I.; Tatar, J.; Tenholt, F.; Ter-Antonyan, S.; Terliuk, A.; Tešić, G.; Tilav, S.; Toale, P. A.; Tobin, M. N.; Toscano, S.; Tosi, D.; Tselengidou, M.; Tung, C. F.; Turcati, A.; Turley, C. F.; Ty, B.; Unger, E.; Usner, M.; Vandenbroucke, J.; Van Driessche, W.; van Eijndhoven, N.; Vanheule, S.; van Santen, J.; Vehring, M.; Vogel, E.; Vraeghe, M.; Walck, C.; Wallace, A.; Wallraff, M.; Wandler, F. D.; Wandkowsky, N.; Waza, A.; Weaver, C.; Weiss, M. J.; Wendt, C.; Westerhoff, S.; Whelan, B. J.; Wickmann, S.; Wiebe, K.; Wiebusch, C. H.; Wille, L.; Williams, D. R.; Wills, L.; Wolf, M.; Wood, J.; Wood, T. R.; Woolsey, E.; Woschnagg, K.; Xu, D. L.; Xu, X. W.; Xu, Y.; Yanez, J. P.; Yodh, G.; Yoshida, S.; Yuan, T.; Zoll, M.; IceCube Collaboration
2017-11-01
The origins of high-energy astrophysical neutrinos remain a mystery despite extensive searches for their sources. We present constraints from seven years of IceCube Neutrino Observatory muon data on the neutrino flux coming from the Galactic plane. This flux is expected from cosmic-ray interactions with the interstellar medium or near localized sources. Two methods were developed to test for a spatially extended flux from the entire plane, both of which are maximum likelihood fits but with different signal and background modeling techniques. We consider three templates for Galactic neutrino emission based primarily on gamma-ray observations and models that cover a wide range of possibilities. Based on these templates and in the benchmark case of an unbroken E^{-2.5} power-law energy spectrum, we set 90% confidence level upper limits, constraining the possible Galactic contribution to the diffuse neutrino flux to be relatively small, less than 14% of the flux reported in Aartsen et al. above 1 TeV. A stacking method is also used to test catalogs of known high-energy Galactic gamma-ray sources.
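As a schematic of the template idea only: a binned Poisson maximum likelihood fit of a single signal-template normalization on top of a known background expectation. Everything below (bins, counts, the Gaussian plane profile) is synthetic; the actual analyses are unbinned or use detailed detector response and energy information.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Fit the normalization of a Galactic-plane signal template by maximizing
# a binned Poisson likelihood over a toy one-dimensional "sky".

rng = np.random.default_rng(7)
n_bins = 100
background = np.full(n_bins, 50.0)                           # expected counts
signal_template = np.exp(-np.linspace(-3, 3, n_bins) ** 2)   # plane profile
signal_template /= signal_template.sum()

true_signal = 200.0  # injected total signal counts
counts = rng.poisson(background + true_signal * signal_template)

def neg_log_like(ns):
    mu = background + ns * signal_template
    return np.sum(mu - counts * np.log(mu))  # Poisson NLL up to a constant

fit = minimize_scalar(neg_log_like, bounds=(0.0, 2000.0), method="bounded")
print(fit.x)  # ML estimate of the total signal counts, near 200
```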
Detectability of large-scale power suppression in the galaxy distribution
NASA Astrophysics Data System (ADS)
Gibelyou, Cameron; Huterer, Dragan; Fang, Wenjuan
2010-12-01
Suppression in primordial power on the Universe's largest observable scales has been invoked as a possible explanation for large-angle observations in the cosmic microwave background, and is allowed or predicted by some inflationary models. Here we investigate the extent to which such a suppression could be confirmed by the upcoming large-volume redshift surveys. For definiteness, we study a simple parametric model of suppression that improves the fit of the vanilla ΛCDM model to the angular correlation function measured by WMAP in cut-sky maps, and at the same time improves the fit to the angular power spectrum inferred from the maximum likelihood analysis presented by the WMAP team. We find that the missing power at large scales, favored by WMAP observations within the context of this model, will be difficult but not impossible to rule out with a large-volume (∼100 Gpc³) galaxy redshift survey. A key requirement for success in ruling out power suppression will be having redshifts of most galaxies detected in the imaging survey.
The Atacama Cosmology Telescope (ACT): Beam Profiles and First SZ Cluster Maps
NASA Technical Reports Server (NTRS)
Hincks, A. D.; Acquaviva, V.; Ade, P. A.; Aguirre, P.; Amiri, M.; Appel, J. W.; Barrientos, L. F.; Battistelli, E. S.; Bond, J. R.; Brown, B.;
2010-01-01
The Atacama Cosmology Telescope (ACT) is currently observing the cosmic microwave background with arcminute resolution at 148 GHz, 218 GHz, and 277 GHz. In this paper, we present ACT's first results. Data have been analyzed using a maximum-likelihood map-making method which uses B-splines to model and remove the atmospheric signal. It has been used to make high-precision beam maps from which we determine the experiment's window functions. This beam information directly impacts all subsequent analyses of the data. We also used the method to map a sample of galaxy clusters via the Sunyaev-Zel'dovich (SZ) effect, and show five clusters previously detected with X-ray or SZ observations. We provide integrated Compton-y measurements for each cluster. Of particular interest is our detection of the z = 0.44 component of A3128 and our current non-detection of the low-redshift part, providing strong evidence that the more distant cluster is more massive, as suggested by X-ray measurements. This is a compelling example of the redshift-independent mass selection of the SZ effect.
Maximum likelihood estimation for Cox's regression model under nested case-control sampling.
Scheike, Thomas H; Juul, Anders
2004-04-01
Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards model. The MLE is computed by the EM-algorithm, which is easy to implement in the proportional hazards setting. Standard errors are estimated by a numerical profile likelihood approach based on EM aided differentiation. The work was motivated by a nested case-control study that hypothesized that insulin-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used to obtain information additional to the relative risk estimates of covariates.
Amplification of perpendicular and parallel magnetic fields by cosmic ray currents
NASA Astrophysics Data System (ADS)
Matthews, J. H.; Bell, A. R.; Blundell, K. M.; Araudo, A. T.
2017-08-01
Cosmic ray (CR) currents through magnetized plasma drive strong instabilities producing amplification of the magnetic field. This amplification helps explain the CR energy spectrum as well as observations of supernova remnants and radio galaxy hotspots. Using magnetohydrodynamic simulations, we study the behaviour of the non-resonant hybrid (NRH) instability (also known as the Bell instability) in the case of CR currents perpendicular and parallel to the initial magnetic field. We demonstrate that extending simulations of the perpendicular case to 3D reveals a different character to the turbulence from that observed in 2D. Despite these differences, in 3D the perpendicular NRH instability still grows exponentially far into the non-linear regime with a similar growth rate to both the 2D perpendicular and 3D parallel situations. We introduce some simple analytical models to elucidate the physical behaviour, using them to demonstrate that the transition to the non-linear regime is governed by the growth of thermal pressure inside dense filaments at the edges of the expanding loops. We discuss our results in the context of supernova remnants and jets in radio galaxies. Our work shows that the NRH instability can amplify magnetic fields to many times their initial value in parallel and perpendicular shocks.
Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown
ERIC Educational Resources Information Center
Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi
2014-01-01
When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…
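A hedged sketch of the bootstrap alternative studied here: resample the item parameters from their estimated calibration distribution, re-maximize the ability likelihood each time, and take the spread of the resulting estimates as the corrected SE. The Rasch model, difficulties, calibration SEs, and response pattern below are all made-up illustrations.

```python
import numpy as np
from scipy.optimize import brentq

# Parametric bootstrap SE for a Rasch ability MLE under item-parameter
# uncertainty. All numbers are illustrative.

rng = np.random.default_rng(3)
b_hat = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])  # estimated item difficulties
se_b = np.full(5, 0.15)                        # their calibration SEs
responses = np.array([1, 1, 1, 0, 0])          # one examinee's 0/1 answers

def theta_mle(b):
    # Solve the Rasch score equation sum(responses - P(theta)) = 0.
    score = lambda t: np.sum(responses - 1.0 / (1.0 + np.exp(-(t - b))))
    return brentq(score, -6.0, 6.0)

boot = [theta_mle(b_hat + rng.normal(0.0, se_b)) for _ in range(2000)]
print(theta_mle(b_hat), np.std(boot))  # point estimate and bootstrap SE
```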
NASA Technical Reports Server (NTRS)
Benjauthrit, B.; Mulhall, B.; Madsen, B. D.; Alberda, M. E.
1976-01-01
The performance of the DSN telemetry system with convolutionally coded data, using the operational maximum-likelihood convolutional decoder (MCD) being implemented in the Network, is described. Data rates from 80 bps to 115.2 kbps and both S- and X-band receivers are reported. Results for both one- and two-way radio losses are included.
ERIC Educational Resources Information Center
Wollack, James A.; Bolt, Daniel M.; Cohen, Allan S.; Lee, Young-Sun
2002-01-01
Compared the quality of item parameter estimates for marginal maximum likelihood (MML) and Markov Chain Monte Carlo (MCMC) with the nominal response model using simulation. The quality of item parameter recovery was nearly identical for MML and MCMC, and both methods tended to produce good estimates. (SLD)
ERIC Educational Resources Information Center
Khattab, Ali-Maher; And Others
1982-01-01
A causal modeling system, using confirmatory maximum likelihood factor analysis with the LISREL IV computer program, evaluated the construct validity underlying the higher order factor structure of a given correlation matrix of 46 structure-of-intellect tests emphasizing the product of transformations. (Author/PN)
NASA Astrophysics Data System (ADS)
Sutawanir
2015-12-01
Mortality tables play an important role in actuarial studies such as life annuities, premium determination, premium reserves, pension plan valuation, and pension funding. Some well-known mortality tables are the CSO mortality table, the Indonesian Mortality Table, the Bowers mortality table, and the Japan Mortality Table. For actuarial applications, some tables are constructed under different environments such as single decrement, double decrement, and multiple decrement. There are two approaches to mortality table construction: a mathematical approach and a statistical approach. Distribution models and estimation theory are the statistical concepts used in mortality table construction. This article discusses the statistical approach. The distributional assumptions are the uniform death distribution (UDD) and constant force (exponential). Moment estimation and maximum likelihood are used to estimate the mortality parameter. Moment estimation methods are easier to manipulate than maximum likelihood estimation (MLE); however, the moment method does not use the complete mortality data. Maximum likelihood exploits all available information in mortality estimation, but some MLE equations are complicated and must be solved using numerical methods. The article focuses on single-decrement estimation using moment and maximum likelihood estimation; some extensions to double decrement are also introduced. A simple dataset is used to illustrate the mortality estimation and the resulting mortality table.
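As a worked toy example of the two estimation routes described (all numbers made up): under the constant-force assumption the ML estimate of the force of mortality is deaths divided by central exposure, while a moment-style actuarial estimate uses initial exposure.

```python
import numpy as np

# Single-decrement toy example: constant-force MLE versus moment estimate.

deaths = 12                # deaths observed at age x during the year
central_exposure = 950.0   # total person-years of exposure at age x

mu_mle = deaths / central_exposure     # ML estimate of the force mu_x
q_mle = 1.0 - np.exp(-mu_mle)          # one-year q_x under constant force

# Moment-style (actuarial) estimate: initial exposure approximated as
# central exposure plus half the deaths, a UDD-type adjustment.
q_moment = deaths / (central_exposure + 0.5 * deaths)

print(q_mle, q_moment)  # the two estimates agree closely when mu is small
```

For small forces of mortality the two estimates are nearly identical, which is one reason the simpler moment method is often preferred despite discarding part of the data.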
NASA Astrophysics Data System (ADS)
Uhlemann, C.; Pajer, E.; Pichon, C.; Nishimichi, T.; Codis, S.; Bernardeau, F.
2018-03-01
Non-Gaussianities of dynamical origin are disentangled from primordial ones using the formalism of large deviation statistics with spherical collapse dynamics. This is achieved by relying on accurate analytical predictions for the one-point probability distribution function and the two-point clustering of spherically averaged cosmic densities (sphere bias). Sphere bias extends the idea of halo bias to intermediate density environments and voids as underdense regions. In the presence of primordial non-Gaussianity, sphere bias displays a strong scale dependence relevant for both high- and low-density regions, which is predicted analytically. The statistics of densities in spheres are built to model primordial non-Gaussianity via an initial skewness with a scale dependence that depends on the bispectrum of the underlying model. The analytical formulas with the measured non-linear dark matter variance as input are successfully tested against numerical simulations. For local non-Gaussianity with a range from f_NL = -100 to +100, they are found to agree within 2 per cent or better for densities ρ ∈ [0.5, 3] in spheres of radius 15 Mpc h⁻¹ down to z = 0.35. The validity of the large deviation statistics formalism is thereby established for all observationally relevant local-type departures from perfectly Gaussian initial conditions. The corresponding estimators for the amplitude of the non-linear variance σ_8 and primordial skewness f_NL are validated using a fiducial joint maximum likelihood experiment. The influence of observational effects and the prospects for a future detection of primordial non-Gaussianity from joint one- and two-point densities-in-spheres statistics are discussed.
Measuring the Largest Angular Scale CMB B-mode Polarization with Galactic Foregrounds on a Cut Sky
NASA Astrophysics Data System (ADS)
Watts, Duncan J.; Larson, David; Marriage, Tobias A.; Abitbol, Maximilian H.; Appel, John W.; Bennett, Charles L.; Chuss, David T.; Eimer, Joseph R.; Essinger-Hileman, Thomas; Miller, Nathan J.; Rostem, Karwan; Wollack, Edward J.
2015-12-01
We consider the effectiveness of foreground cleaning in the recovery of Cosmic Microwave Background (CMB) polarization sourced by gravitational waves for tensor-to-scalar ratios in the range 0 < r < 0.1. Using the planned survey area, frequency bands, and sensitivity of the Cosmology Large Angular Scale Surveyor (CLASS), we simulate maps of Stokes Q and U parameters at 40, 90, 150, and 220 GHz, including realistic models of the CMB, diffuse Galactic thermal dust and synchrotron foregrounds, and Gaussian white noise. We use linear combinations (LCs) of the simulated multifrequency data to obtain maximum likelihood estimates of r, the relative scalar amplitude s, and LC coefficients. We find that for 10,000 simulations of a CLASS-like experiment using only measurements of the reionization peak (ℓ ≤ 23), there is a 95% C.L. upper limit of r < 0.017 in the case of no primordial gravitational waves. For simulations with r = 0.01, we recover at 68% C.L. r = 0.012^{+0.011}_{-0.006}. The reionization peak corresponds to a fraction of the multipole moments probed by CLASS, and simulations including 30 ≤ ℓ ≤ 100 further improve our upper limits to r < 0.008 at 95% C.L. (r = 0.010^{+0.004}_{-0.004} for primordial gravitational waves with r = 0.01). In addition to decreasing the current upper bound on r by an order of magnitude, these foreground-cleaned low multipole data will achieve a cosmic variance limited measurement of the E-mode polarization's reionization peak.
Maximum-likelihood methods in wavefront sensing: stochastic models and likelihood functions
Barrett, Harrison H.; Dainty, Christopher; Lara, David
2008-01-01
Maximum-likelihood (ML) estimation in wavefront sensing requires careful attention to all noise sources and all factors that influence the sensor data. We present detailed probability density functions for the output of the image detector in a wavefront sensor, conditional not only on wavefront parameters but also on various nuisance parameters. Practical ways of dealing with nuisance parameters are described, and final expressions for likelihoods and Fisher information matrices are derived. The theory is illustrated by discussing Shack–Hartmann sensors, and computational requirements are discussed. Simulation results show that ML estimation can significantly increase the dynamic range of a Shack–Hartmann sensor with four detectors and that it can reduce the residual wavefront error when compared with traditional methods. PMID:17206255
On non-parametric maximum likelihood estimation of the bivariate survivor function.
Prentice, R L
The likelihood function for the bivariate survivor function F, under independent censorship, is maximized to obtain a non-parametric maximum likelihood estimator F̂. F̂ may or may not be unique depending on the configuration of singly- and doubly-censored pairs. The likelihood function can be maximized by placing all mass on the grid formed by the uncensored failure times, or half lines beyond the failure time grid, or in the upper right quadrant beyond the grid. By accumulating the mass along lines (or regions) where the likelihood is flat, one obtains a partially maximized likelihood as a function of parameters that can be uniquely estimated. The score equations corresponding to these point mass parameters are derived, using a Lagrange multiplier technique to ensure unit total mass, and a modified Newton procedure is used to calculate the parameter estimates in some limited simulation studies. Some considerations for the further development of non-parametric bivariate survivor function estimators are briefly described.
Scalar Casimir densities and forces for parallel plates in cosmic string spacetime
NASA Astrophysics Data System (ADS)
Bezerra de Mello, E. R.; Saharian, A. A.; Abajyan, S. V.
2018-04-01
We analyze the Green function, the Casimir densities and forces associated with a massive scalar quantum field confined between two parallel plates in a higher dimensional cosmic string spacetime. The plates are placed orthogonal to the string, and the field obeys the Robin boundary conditions on them. The boundary-induced contributions are explicitly extracted in the vacuum expectation values (VEVs) of the field squared and of the energy-momentum tensor for both the single-plate and two-plate geometries. The VEV of the energy-momentum tensor, in addition to the diagonal components, contains an off-diagonal component corresponding to the shear stress. The latter vanishes on the plates in the special cases of Dirichlet and Neumann boundary conditions. For points outside the string core the topological contributions in the VEVs are finite on the plates. Near the string the VEVs are dominated by the boundary-free part, whereas at large distances the boundary-induced contributions dominate. Due to the nonzero off-diagonal component of the vacuum energy-momentum tensor, in addition to the normal component, the Casimir forces have a nonzero component parallel to the boundary (shear force). Unlike the problem in the Minkowski bulk, the normal forces acting on the separate plates, in general, do not coincide if the corresponding Robin coefficients are different. Another difference is that in the presence of the cosmic string the Casimir forces for Dirichlet and Neumann boundary conditions differ. For Dirichlet boundary condition the normal Casimir force does not depend on the curvature coupling parameter. This is not the case for other boundary conditions. A new qualitative feature induced by the cosmic string is the appearance of the shear stress acting on the plates. The corresponding force is directed along the radial coordinate and vanishes for Dirichlet and Neumann boundary conditions. Depending on the parameters of the problem, the radial component of the shear force can be either positive or negative.
Testing cosmic ray acceleration with radio relics: a high-resolution study using MHD and tracers
NASA Astrophysics Data System (ADS)
Wittor, D.; Vazza, F.; Brüggen, M.
2017-02-01
Weak shocks in the intracluster medium may accelerate cosmic-ray protons and cosmic-ray electrons differently depending on the angle between the upstream magnetic field and the shock normal. In this work, we investigate how shock obliquity affects the production of cosmic rays in high-resolution simulations of galaxy clusters. For this purpose, we performed a magnetohydrodynamical simulation of a galaxy cluster using the mesh refinement code ENZO. We use Lagrangian tracers to follow the properties of the thermal gas, the cosmic rays and the magnetic fields over time. We tested a number of different acceleration scenarios by varying the obliquity-dependent acceleration efficiencies of protons and electrons, and by examining the resulting hadronic γ-ray and radio emission. We find that the radio emission does not change significantly if only quasi-perpendicular shocks are able to accelerate cosmic-ray electrons. Our analysis suggests that radio-emitting electrons found in relics have been typically shocked many times before z = 0. On the other hand, the hadronic γ-ray emission from clusters is found to decrease significantly if only quasi-parallel shocks are allowed to accelerate cosmic ray protons. This might reduce the tension with the low upper limits on γ-ray emission from clusters set by the Fermi satellite.
Bayesian logistic regression approaches to predict incorrect DRG assignment.
Suleiman, Mani; Demirhan, Haydar; Boyd, Leanne; Girosi, Federico; Aksakalli, Vural
2018-05-07
Episodes of care involving similar diagnoses and treatments and requiring similar levels of resource utilisation are grouped to the same Diagnosis-Related Group (DRG). In jurisdictions which implement DRG based payment systems, DRGs are a major determinant of funding for inpatient care. Hence, service providers often dedicate auditing staff to the task of checking that episodes have been coded to the correct DRG. The use of statistical models to estimate an episode's probability of DRG error can significantly improve the efficiency of clinical coding audits. This study implements Bayesian logistic regression models with weakly informative prior distributions to estimate the likelihood that episodes require a DRG revision, comparing these models with each other and with classical maximum likelihood estimates. All Bayesian approaches had more stable model parameters than maximum likelihood. The best performing Bayesian model improved overall classification performance by 6% compared to maximum likelihood, and by 34% compared to random classification. We found that the original DRG, the coder, and the day of coding all have a significant effect on the likelihood of DRG error. Use of Bayesian approaches improved model parameter stability and classification accuracy. This method has already led to improved audit efficiency in an operational capacity.
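To illustrate the flavor of the comparison (not the study's actual models or data): an L2-penalized logistic regression is the MAP estimate under a zero-mean Gaussian prior on the coefficients, a simple stand-in for a weakly informative prior, and its coefficients are typically more stable than a near-MLE fit.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Compare a near-MLE logistic fit (very weak penalty) with a penalized fit
# that acts as a Gaussian-prior MAP estimate. Features stand in loosely for
# predictors like original DRG, coder, and day of coding; data are synthetic.

rng = np.random.default_rng(5)
n = 300
X = rng.normal(size=(n, 3))
beta_true = np.array([1.0, -0.5, 0.0])
p = 1.0 / (1.0 + np.exp(-(X @ beta_true)))
y = rng.binomial(1, p)  # 1 = episode needed a DRG revision

near_mle = LogisticRegression(C=1e6, max_iter=1000).fit(X, y)  # ~unpenalized
map_fit = LogisticRegression(C=1.0, max_iter=1000).fit(X, y)   # prior sd ~ 1

print(near_mle.coef_)
print(map_fit.coef_)  # shrunk toward zero, more stable across resamples
```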
Maximum Likelihood Compton Polarimetry with the Compton Spectrometer and Imager
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lowell, A. W.; Boggs, S. E; Chiu, C. L.
2017-10-20
Astrophysical polarization measurements in the soft gamma-ray band are becoming more feasible as detectors with high position and energy resolution are deployed. Previous work has shown that the minimum detectable polarization (MDP) of an ideal Compton polarimeter can be improved by ∼21% when an unbinned, maximum likelihood method (MLM) is used instead of the standard approach of fitting a sinusoid to a histogram of azimuthal scattering angles. Here we outline a procedure for implementing this maximum likelihood approach for real, nonideal polarimeters. As an example, we use the recent observation of GRB 160530A with the Compton Spectrometer and Imager. We find that the MDP for this observation is reduced by 20% when the MLM is used instead of the standard method.
Exponential series approaches for nonparametric graphical models
NASA Astrophysics Data System (ADS)
Janofsky, Eric
Markov Random Fields (MRFs) or undirected graphical models are parsimonious representations of joint probability distributions. This thesis studies high-dimensional, continuous-valued pairwise Markov Random Fields. We are particularly interested in approximating pairwise densities whose logarithm belongs to a Sobolev space. For this problem we propose the method of exponential series which approximates the log density by a finite-dimensional exponential family with the number of sufficient statistics increasing with the sample size. We consider two approaches to estimating these models. The first is regularized maximum likelihood. This involves optimizing the sum of the log-likelihood of the data and a sparsity-inducing regularizer. We then propose a variational approximation to the likelihood based on tree-reweighted, nonparametric message passing. This approximation allows for upper bounds on risk estimates, leverages parallelization and is scalable to densities on hundreds of nodes. We show how the regularized variational MLE may be estimated using a proximal gradient algorithm. We then consider estimation using regularized score matching. This approach uses an alternative scoring rule to the log-likelihood, which obviates the need to compute the normalizing constant of the distribution. For general continuous-valued exponential families, we provide parameter and edge consistency results. As a special case we detail a new approach to sparse precision matrix estimation which has statistical performance competitive with the graphical lasso and computational performance competitive with the state-of-the-art glasso algorithm. We then describe results for model selection in the nonparametric pairwise model using exponential series. The regularized score matching problem is shown to be a convex program; we provide scalable algorithms based on consensus alternating direction method of multipliers (ADMM) and coordinate-wise descent. We use simulations to compare our method to others in the literature as well as the aforementioned TRW estimator.
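One of the baselines named in this abstract, the graphical lasso (penalized Gaussian maximum likelihood for a sparse precision matrix), is compact enough to demonstrate. The sketch below uses scikit-learn's GraphicalLasso on synthetic chain-graph data; it stands in for neither the exponential-series estimator nor the thesis's score-matching algorithms.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Sparse precision-matrix estimation on data drawn from a chain graph:
# the true precision matrix is tridiagonal, so the graphical lasso should
# drive the off-chain entries toward zero.

rng = np.random.default_rng(11)
p = 5
prec = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))  # tridiagonal
cov = np.linalg.inv(prec)
X = rng.multivariate_normal(np.zeros(p), cov, size=2000)

model = GraphicalLasso(alpha=0.05).fit(X)
print(np.round(model.precision_, 2))  # near-zero entries off the chain
```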
NASA Astrophysics Data System (ADS)
Goodman, Steven N.
1989-11-01
This dissertation explores the use of a mathematical measure of statistical evidence, the log likelihood ratio, in clinical trials. The methods and thinking behind the use of an evidential measure are contrasted with traditional methods of analyzing data, which depend primarily on a p-value as an estimate of the statistical strength of an observed data pattern. It is contended that neither the behavioral dictates of Neyman-Pearson hypothesis testing methods nor the coherency dictates of Bayesian methods are realistic models on which to base inference. The use of the likelihood alone is applied to four aspects of trial design or conduct: the calculation of sample size, the monitoring of data, testing for the equivalence of two treatments, and meta-analysis, the combining of results from different trials. Finally, a more general model of statistical inference, using belief functions, is used to see if it is possible to separate the assessment of evidence from our background knowledge. It is shown that traditional and Bayesian methods can be modeled as two ends of a continuum of structured background knowledge, with traditional methods summarizing evidence at the point of maximum likelihood (assuming no structure) and Bayesian methods assuming complete knowledge. Both schools are seen to be missing a concept of ignorance: uncommitted belief. This concept provides the key to understanding the problem of sampling to a foregone conclusion and the role of frequency properties in statistical inference. The conclusion is that statistical evidence cannot be defined independently of background knowledge, and that frequency properties of an estimator are an indirect measure of uncommitted belief. Several likelihood summaries need to be used in clinical trials, with the quantitative disparity between summaries being an indirect measure of our ignorance. This conclusion is linked with parallel ideas in the philosophy of science and cognitive psychology.
Self-force on an electric dipole in the spacetime of a cosmic string
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muniz, C.R., E-mail: celiomuniz@yahoo.com; Bezerra, V.B., E-mail: valdir@ufpb.br
2014-01-15
We calculate the electrostatic self-force on an electric dipole in the spacetime generated by a static, thin, infinite and straight cosmic string. The electric dipole is held fixed in different configurations, namely, parallel, perpendicular to the cosmic string and oriented along the azimuthal direction around this topological defect, which is stretched along the z axis. We show that the self-force is equivalent to an interaction of the electric dipole with an effective dipole moment which depends on the linear mass density of the cosmic string and on the configuration. The plots of the self-forces as functions of the parameter which determines the angular deficit of the cosmic string are shown for those different configurations. -- Highlights: •Review of regularized Green’s function applied to the problem. •Self-force on an electric dipole in the string spacetime for some orientations. •Representation via graphs of the self-forces versus angular parameter of the cosmic string. •Self-force induced by the string seen as an interaction between two dipoles. •Discussion about the superposition principle in this non-trivial background.
Xenia: A Probe of Cosmic Chemical Evolution
NASA Technical Reports Server (NTRS)
Kouveliotou, Chryssa; Piro, L.
2008-01-01
Xenia is a concept study for a medium-size astrophysical cosmology mission addressing the Cosmic Origins key objective of NASA's Science Plan. The fundamental goal of this objective is to understand the formation and evolution of structures on various scales from the early Universe to the present time (stars, galaxies and the cosmic web). Xenia will use X- and γ-ray monitoring, wide-field X-ray imaging, and high-resolution spectroscopy to collect essential information from three major tracers of these cosmic structures: the Warm Hot Intergalactic Medium (WHIM), Galaxy Clusters and Gamma Ray Bursts (GRBs). Our goal is to trace the chemo-dynamical history of the ubiquitous warm hot diffuse baryon component in the Universe residing in cosmic filaments and clusters of galaxies up to its formation epoch (at z = 0-2) and to map star formation and galaxy metal enrichment into the re-ionization era beyond z ∼ 6. The concept of Xenia (Greek for "hospitality") evolved in parallel with the Explorer of Diffuse Emission and GRB Explosions (EDGE), a mission proposed by a multinational collaboration to the ESA Cosmic Vision 2015. Xenia incorporates the European and Japanese collaborators into a U.S.-led mission that builds on the scientific objectives and technological readiness of EDGE.
Xenia: A Probe of Cosmic Chemical Evolution
NASA Astrophysics Data System (ADS)
Kouveliotou, Chryssa; Piro, L.; Xenia Collaboration
2008-03-01
Xenia is a concept study for a medium-size astrophysical cosmology mission addressing the Cosmic Origins key objective of NASA's Science Plan. The fundamental goal of this objective is to understand the formation and evolution of structures on various scales from the early Universe to the present time (stars, galaxies and the cosmic web). Xenia will use X- and γ-ray monitoring, wide-field X-ray imaging, and high-resolution spectroscopy to collect essential information from three major tracers of these cosmic structures: the Warm Hot Intergalactic Medium (WHIM), Galaxy Clusters and Gamma Ray Bursts (GRBs). Our goal is to trace the chemo-dynamical history of the ubiquitous warm hot diffuse baryon component in the Universe residing in cosmic filaments and clusters of galaxies up to its formation epoch (at z = 0-2) and to map star formation and galaxy metal enrichment into the re-ionization era beyond z ∼ 6. The concept of Xenia (Greek for "hospitality") evolved in parallel with the Explorer of Diffuse Emission and GRB Explosions (EDGE), a mission proposed by a multinational collaboration to the ESA Cosmic Vision 2015. Xenia incorporates the European and Japanese collaborators into a U.S.-led mission that builds on the scientific objectives and technological readiness of EDGE.
Quantum vacuum interaction between two cosmic strings revisited
NASA Astrophysics Data System (ADS)
Muñoz-Castañeda, J. M.; Bordag, M.
2014-03-01
We reconsider the quantum vacuum interaction energy between two straight parallel cosmic strings. This problem has been discussed several times, in approaches treating either both strings perturbatively or only one of them perturbatively. Here we point out that a simplifying assumption made by Bordag [Ann. Phys. (Berlin) 47, 93 (1990)] can be justified, and show that, despite the global character of the background, the perturbative approach delivers a correct result. We consider the applicability of the scattering methods, developed in the past decade for the Casimir effect, to the cosmic string, and find them not applicable. We calculate the scattering T-operator on one string. Finally, we consider the vacuum interaction of two strings when each carries a two-dimensional delta function potential.
Influence of the Solar Cycle on Turbulence Properties and Cosmic-Ray Diffusion
NASA Astrophysics Data System (ADS)
Zhao, L.-L.; Adhikari, L.; Zank, G. P.; Hu, Q.; Feng, X. S.
2018-04-01
The solar cycle dependence of various turbulence quantities and cosmic-ray (CR) diffusion coefficients is investigated by using OMNI 1 minute resolution data over 22 years. We employ Elsässer variables z^± to calculate the magnetic field turbulence energy and correlation lengths for both the inwardly and outwardly directed interplanetary magnetic field (IMF). We present the temporal evolution of both large-scale solar wind (SW) plasma variables and small-scale magnetic fluctuations. Based on these observed quantities, we study the influence of solar activity on CR parallel and perpendicular diffusion using quasi-linear theory and nonlinear guiding center theory, respectively. We also evaluate the radial evolution of the CR diffusion coefficients by using the boundary conditions for different solar activity levels. We find that in the ecliptic plane at 1 au: (1) the large-scale SW temperature T, velocity V_sw, Alfvén speed V_A, and IMF magnitude B_0 are positively related to solar activity; (2) the fluctuating magnetic energy density ⟨(z^±)²⟩, residual energy E_D, and corresponding correlation functions all have an obvious solar cycle dependence. The residual energy E_D is always negative, which indicates that the energy in magnetic fluctuations is larger than the energy in kinetic fluctuations, especially at solar maximum; (3) the correlation length λ for magnetic fluctuations does not show significant solar cycle variation; (4) the temporally varying shear source of turbulence, which is most important in the inner heliosphere, depends on the solar cycle; (5) small-scale fluctuations may not depend on the direction of the background magnetic field; and (6) high levels of SW fluctuations will increase CR perpendicular diffusion and decrease CR parallel diffusion, but this trend can be masked if the background IMF changes in concert with turbulence in response to solar activity. These results provide quantitative inputs for both turbulence transport models and CR diffusion models, and also provide valuable insight into the long-term modulation of CRs in the heliosphere.
Stability of an optically contacted etalon to cosmic radiation. [aboard Dynamics Explorer satellite
NASA Technical Reports Server (NTRS)
Killeen, T. L.; Dettman, D. L.; Hays, P. B.
1980-01-01
An investigation has been completed to determine the effects of prolonged exposure to cosmic radiation on Zerodur spacing elements used between two dielectric reflectors on silica substrates in the plane Fabry-Perot etalon selected for flight in the Dynamics Explorer satellite. The measured radiation expansion coefficient for Zerodur is approximately −4.0 × 10⁻¹² per rad. In addition to the overall change in gap dimension, test data indicate a degradation in etalon parallelism, which is ascribed to the different doses received by the three spacers due to their differing distances from a Co-60 source. The effect is considered to be of practical use in the tuning and parallelism adjustment of fixed-gap etalons. The variation is small enough not to pose a problem for the satellite instrument, where expected radiation doses are less than 10,000 rads.
Adriani, O; Barbarino, G C; Bazilevskaya, G A; Bellotti, R; Boezio, M; Bogomolov, E A; Bongi, M; Bonvicini, V; Bottai, S; Bruno, A; Cafagna, F; Campana, D; Carlson, P; Casolino, M; Castellini, G; De Santis, C; Di Felice, V; Galper, A M; Karelin, A V; Koldashov, S V; Koldobskiy, S A; Krutkov, S Y; Kvashnin, A N; Leonov, A; Malakhov, V; Marcelli, L; Martucci, M; Mayorov, A G; Menn, W; Mergé, M; Mikhailov, V V; Mocchiutti, E; Monaco, A; Mori, N; Munini, R; Osteria, G; Panico, B; Papini, P; Pearce, M; Picozza, P; Ricci, M; Ricciarini, S B; Simon, M; Sparvoli, R; Spillantini, P; Stozhkov, Y I; Vacchi, A; Vannuccini, E; Vasilyev, G I; Voronov, S A; Yurkin, Y T; Zampa, G; Zampa, N; Potgieter, M S; Vos, E E
2016-06-17
Cosmic-ray electrons and positrons are a unique probe of the propagation of cosmic rays as well as of the nature and distribution of particle sources in our Galaxy. Recent measurements of these particles are challenging our basic understanding of the mechanisms of production, acceleration, and propagation of cosmic rays. Particularly striking are the differences between the low energy results collected by the space-borne PAMELA and AMS-02 experiments and older measurements pointing to sign-charge dependence of the solar modulation of cosmic-ray spectra. The PAMELA experiment has been measuring the time variation of the positron and electron intensity at Earth from July 2006 to December 2015, covering the period from the minimum of solar cycle 23 (2006-2009) to the middle of the maximum of solar cycle 24, through the polarity reversal of the heliospheric magnetic field which took place between 2013 and 2014. The positron-to-electron ratio measured in this time period clearly shows a sign-charge dependence of the solar modulation introduced by particle drifts. These results provide the first clear and continuous observation of how drift effects on solar modulation have unfolded with time from solar minimum to solar maximum, and of their dependence on the particle rigidity and the cyclic polarity of the solar magnetic field.
Lod scores for gene mapping in the presence of marker map uncertainty.
Stringham, H M; Boehnke, M
2001-07-01
Multipoint lod scores are typically calculated for a grid of locus positions, moving the putative disease locus across a fixed map of genetic markers. Changing the order of a set of markers and/or the distances between the markers can make a substantial difference in the resulting lod score curve and the location and height of its maximum. The typical approach of using the best maximum likelihood marker map is not easily justified if other marker orders are nearly as likely and give substantially different lod score curves. To deal with this problem, we propose three weighted multipoint lod score statistics that make use of information from all plausible marker orders. In each of these statistics, the information conditional on a particular marker order is included in a weighted sum, with weight equal to the posterior probability of that order. We evaluate the type 1 error rate and power of these three statistics on the basis of results from simulated data, and compare these results to those obtained using the best maximum likelihood map and the map with the true marker order. We find that the lod score based on a weighted sum of maximum likelihoods improves on using only the best maximum likelihood map, having a type 1 error rate and power closest to that of using the true marker order in the simulation scenarios we considered. Copyright 2001 Wiley-Liss, Inc.
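The best-performing statistic has a simple form: re-log the posterior-weighted sum of the per-order likelihood ratios. A toy numerical sketch follows; the lod curves and posterior weights are invented for illustration.

```python
import numpy as np

# Weighted multipoint lod score: combine lod curves computed under several
# plausible marker orders, each weighted by its posterior probability.

positions = np.linspace(0.0, 50.0, 101)  # cM grid of test positions

# Hypothetical lod curves under three candidate marker orders.
lod_per_order = np.vstack([
    3.0 * np.exp(-((positions - 20.0) / 8.0) ** 2),
    2.5 * np.exp(-((positions - 28.0) / 8.0) ** 2),
    1.0 * np.exp(-((positions - 35.0) / 10.0) ** 2),
])
post = np.array([0.6, 0.3, 0.1])  # posterior probabilities of the orders

# Weighted sum of likelihood ratios, expressed again on the lod scale.
weighted_lod = np.log10(post @ 10.0 ** lod_per_order)

best = positions[np.argmax(weighted_lod)]
print(best, weighted_lod.max())
```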
Constraining the phantom braneworld model from cosmic structure sizes
NASA Astrophysics Data System (ADS)
Bhattacharya, Sourav; Kousvos, Stefanos R.
2017-11-01
We consider the phantom braneworld model in the context of the maximum turnaround radius, R_TA,max, of a stable, spherical cosmic structure with a given mass. The maximum turnaround radius is the point where the attraction due to the central inhomogeneity is balanced by the repulsion of the ambient dark energy, beyond which a structure cannot hold any mass; it thereby gives the maximum upper bound on the size of a stable structure. In this work we derive an analytical expression for R_TA,max in this model using cosmological scalar perturbation theory. Using this, we numerically constrain the parameter space, including a bulk cosmological constant and the Weyl fluid, from the mass versus observed size data for some nearby, nonvirial cosmic structures. We use different values of the matter density parameter Ωm, both larger and smaller than that of the Λ cold dark matter model, as input in our analysis. We show in particular that (a) with a vanishing bulk cosmological constant the predicted upper bound is always greater than what is actually observed, and a similar conclusion holds if the bulk cosmological constant is negative; (b) if it is positive, the predicted maximum size can fall considerably below what is actually observed, and owing to the involved nature of the field equations, this leads to interesting constraints not only on the bulk cosmological constant itself but on the whole parameter space of the theory.
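For orientation, the corresponding textbook result in plain ΛCDM (not the braneworld expression derived in this paper) follows from balancing the Newtonian attraction of the mass against the repulsion of the cosmological constant in the weak-field limit:

```latex
% LambdaCDM reference result, not the braneworld formula of the paper:
% setting the attraction equal to the Lambda-driven repulsion,
\[
  \frac{GM}{R^{2}} = \frac{\Lambda c^{2}}{3}\,R
  \quad\Longrightarrow\quad
  R_{\mathrm{TA,max}} = \left(\frac{3GM}{\Lambda c^{2}}\right)^{1/3},
\]
% so the maximum turnaround radius grows as the cube root of the mass.
```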
Cosmological parameter estimation using Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Prasad, J.; Souradeep, T.
2014-03-01
Constraining the parameters of a theoretical model from observational data is an important exercise in cosmology. Many theoretically motivated models demand a greater number of cosmological parameters than the standard model of cosmology uses, making the problem of parameter estimation challenging. It is common practice to employ Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and has no local maxima, and sampling-based methods like the Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we demonstrate the application of another method, inspired by artificial intelligence, called Particle Swarm Optimization (PSO), for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.
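A bare-bones PSO is short enough to sketch in full. The inertia and acceleration constants below are common textbook defaults, and a quadratic bowl stands in for the real CMB likelihood; this is a sketch of the technique, not the paper's implementation.

```python
import numpy as np

# Minimal Particle Swarm Optimization minimizing a toy 2-parameter
# negative log-likelihood.

def pso(f, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))  # positions
    v = np.zeros_like(x)                                  # velocities
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)]
    for _ in range(n_iter):
        r1, r2 = rng.uniform(size=(2, n_particles, lo.size))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[np.argmin(pbest_f)]
    return gbest, f(gbest)

# Toy "likelihood" with true parameters (0.3, 0.7), e.g. (Omega_m, h).
nll = lambda p: (p[0] - 0.3) ** 2 / 0.01 + (p[1] - 0.7) ** 2 / 0.02
print(pso(nll, np.array([[0.0, 1.0], [0.0, 1.0]])))
```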
NASA Astrophysics Data System (ADS)
Karakacan Kuzucu, A.; Bektas Balcik, F.
2017-11-01
Accurate and reliable land use/land cover (LULC) information obtained by remote sensing technology is necessary in many applications such as environmental monitoring, agricultural management, urban planning, hydrological applications, soil management, vegetation condition studies and suitability analysis. However, obtaining this information remains a challenge, especially in heterogeneous landscapes covering urban and rural areas, due to spectrally similar LULC features. In parallel with technological developments, supplementary data such as satellite-derived spectral indices have begun to be used as additional bands in classification to produce data with high accuracy. The aim of this research is to test the potential of spectral vegetation indices in combination with supervised classification methods for extracting reliable LULC information from SPOT 7 multispectral imagery. The Normalized Difference Vegetation Index (NDVI), the Ratio Vegetation Index (RATIO) and the Soil Adjusted Vegetation Index (SAVI) were the three vegetation indices used in this study (their per-pixel definitions are sketched below). The classical maximum likelihood classifier (MLC) and the support vector machine (SVM) algorithm were applied to classify the SPOT 7 image. The selected region, Catalca, is located northwest of Istanbul in Turkey and has a complex landscape covering artificial surfaces, forest and natural areas, agricultural fields, quarry/mining areas, pasture/scrubland and water bodies. Accuracy assessment of all classified images was performed through overall accuracy and the kappa coefficient. The results indicated that the incorporation of these three vegetation indices decreased the classification accuracy for both the MLC and SVM classifications. In addition, maximum likelihood classification slightly outperformed the support vector machine approach in both overall accuracy and kappa statistics.
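The three indices have simple per-pixel definitions, sketched below on synthetic band arrays standing in for SPOT 7 reflectances; the canopy adjustment factor L = 0.5 is the value commonly used with SAVI, assumed here rather than taken from the paper.

```python
import numpy as np

# Per-pixel vegetation indices from red and near-infrared reflectances,
# stacked with the raw bands as additional classification features.

rng = np.random.default_rng(9)
red = rng.uniform(0.02, 0.30, size=(100, 100))
nir = rng.uniform(0.10, 0.60, size=(100, 100))

ndvi = (nir - red) / (nir + red)                  # Normalized Difference VI
ratio = nir / red                                 # Ratio Vegetation Index
L = 0.5                                           # canopy background adjustment
savi = (1.0 + L) * (nir - red) / (nir + red + L)  # Soil Adjusted VI

features = np.dstack([red, nir, ndvi, ratio, savi])  # feature cube
print(features.shape)
```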
On the Existence and Uniqueness of JML Estimates for the Partial Credit Model
ERIC Educational Resources Information Center
Bertoli-Barsotti, Lucio
2005-01-01
A necessary and sufficient condition is given in this paper for the existence and uniqueness of the maximum likelihood (the so-called joint maximum likelihood) estimate of the parameters of the Partial Credit Model. This condition is stated in terms of a structural property of the pattern of the data matrix that can be easily verified on the basis…
ERIC Educational Resources Information Center
Paek, Insu; Wilson, Mark
2011-01-01
This study elaborates the Rasch differential item functioning (DIF) model formulation under the marginal maximum likelihood estimation context. Also, the Rasch DIF model performance was examined and compared with the Mantel-Haenszel (MH) procedure in small sample and short test length conditions through simulations. The theoretically known…
Cosmic Explosions, Life in the Universe, and the Cosmological Constant.
Piran, Tsvi; Jimenez, Raul; Cuesta, Antonio J; Simpson, Fergus; Verde, Licia
2016-02-26
Gamma-ray bursts (GRBs) are copious sources of gamma rays whose interaction with a planetary atmosphere can pose a threat to complex life. Using recent determinations of their rate and probability of causing massive extinction, we explore what types of universes are most likely to harbor advanced forms of life. We use cosmological N-body simulations to determine at what time and for what value of the cosmological constant (Λ) the chances of life being unaffected by cosmic explosions are maximized. Life survival to GRBs favors Lambda-dominated universes. Within a cold dark matter model with a cosmological constant, the likelihood of life survival to GRBs is governed by the value of Λ and the age of the Universe. We find that we seem to live in a favorable point in this parameter space that minimizes the exposure to cosmic explosions, yet maximizes the number of main sequence (hydrogen-burning) stars around which advanced life forms can exist.
Cosmic Explosions, Life in the Universe, and the Cosmological Constant
NASA Astrophysics Data System (ADS)
Piran, Tsvi; Jimenez, Raul; Cuesta, Antonio J.; Simpson, Fergus; Verde, Licia
2016-02-01
Gamma-ray bursts (GRBs) are copious sources of gamma rays whose interaction with a planetary atmosphere can pose a threat to complex life. Using recent determinations of their rate and probability of causing massive extinction, we explore what types of universes are most likely to harbor advanced forms of life. We use cosmological N-body simulations to determine at what time and for what value of the cosmological constant (Λ) the chances of life being unaffected by cosmic explosions are maximized. Life survival to GRBs favors Lambda-dominated universes. Within a cold dark matter model with a cosmological constant, the likelihood of life survival to GRBs is governed by the value of Λ and the age of the Universe. We find that we seem to live in a favorable point in this parameter space that minimizes the exposure to cosmic explosions, yet maximizes the number of main sequence (hydrogen-burning) stars around which advanced life forms can exist.
Acoustic instability driven by cosmic-ray streaming
NASA Technical Reports Server (NTRS)
Begelman, Mitchell C.; Zweibel, Ellen G.
1994-01-01
We study the linear stability of compressional waves in a medium through which cosmic rays stream at the Alfven speed due to strong coupling with Alfven waves. Acoustic waves can be driven unstable by the cosmic-ray drift, provided that the streaming speed is sufficiently large compared to the thermal sound speed. Two effects can cause instability: (1) the heating of the thermal gas due to the damping of Alfven waves driven unstable by cosmic-ray streaming; and (2) phase shifts in the cosmic-ray pressure perturbation caused by the combination of cosmic-ray streaming and diffusion. The instability does not depend on the magnitude of the background cosmic-ray pressure gradient, and occurs whether or not cosmic-ray diffusion is important relative to streaming. When the cosmic-ray pressure is small compared to the gas pressure, or cosmic-ray diffusion is strong, the instability manifests itself as a weak overstability of slow magnetosonic waves. Larger cosmic-ray pressure gives rise to new hybrid modes, which can be strongly unstable in the limits of both weak and strong cosmic-ray diffusion and in the presence of thermal conduction. Parts of our analysis parallel earlier work by McKenzie & Webb (which was brought to our attention after this paper was accepted for publication), but our treatment of diffusive effects, thermal conduction, and nonlinearities represents a significant extension. Although the linear growth rate of the instability is independent of the background cosmic-ray pressure gradient, the onset of nonlinear effects does depend on the magnitude of the cosmic-ray pressure gradient |∇P_C|. At the onset of nonlinearity, the fractional amplitude of cosmic-ray pressure perturbations is δP_C/P_C ≈ (kL)⁻¹ ≪ 1, where k is the wavenumber and L is the pressure scale height of the unperturbed cosmic rays. We speculate that the instability may lead to a mode of cosmic-ray transport in which plateaus of uniform cosmic-ray pressure are separated by either laminar or turbulent jumps in which the thermal gas is subject to intense heating.
Bayesian image reconstruction for improving detection performance of muon tomography.
Wang, Guobao; Schultz, Larry J; Qi, Jinyi
2009-05-01
Muon tomography is a novel technology that is being developed for detecting high-Z materials in vehicles or cargo containers. Maximum likelihood methods have been developed for reconstructing the scattering density image from muon measurements. However, the instability of maximum likelihood estimation often results in noisy images and low detectability of high-Z targets. In this paper, we propose using regularization to improve the image quality of muon tomography. We formulate the muon reconstruction problem in a Bayesian framework by introducing a prior distribution on scattering density images. An iterative shrinkage algorithm is derived to maximize the log posterior distribution. At each iteration, the algorithm obtains the maximum a posteriori update by shrinking an unregularized maximum likelihood update. Inverse quadratic shrinkage functions are derived for generalized Laplacian priors and inverse cubic shrinkage functions are derived for generalized Gaussian priors. Receiver operating characteristic studies using simulated data demonstrate that the Bayesian reconstruction can greatly improve the detection performance of muon tomography.
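The shrink-the-ML-update iteration described above lends itself to a compact illustration. The sketch below is a generic iterative-shrinkage (ISTA-style) reconstruction on a toy linear-Gaussian model, not the paper's muon-scattering likelihood or its inverse quadratic/cubic shrinkage functions: each pass takes an unregularized ML-style gradient update and then shrinks it, here with the soft-threshold rule of the simplest Laplacian prior. All names and values (A, y, lam) are illustrative assumptions.

```python
import numpy as np

# Toy setup (assumption for illustration; not the paper's forward model).
rng = np.random.default_rng(1)
A = rng.normal(size=(40, 20))                 # toy linear forward model
x_true = np.zeros(20)
x_true[[3, 11]] = 4.0                         # two "high-Z" voxels
y = A @ x_true + 0.1 * rng.normal(size=40)    # noisy measurements

lam = 0.5                                     # prior strength (hyperparameter)
L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient

x = np.zeros(20)
for _ in range(500):
    x_ml = x + A.T @ (y - A @ x) / L          # unregularized ML-style update
    x = np.sign(x_ml) * np.maximum(np.abs(x_ml) - lam / L, 0.0)  # shrinkage step

print(np.flatnonzero(np.round(x, 2)))         # recovered nonzero voxels (ideally 3, 11)
```

The shrinkage step is what a MAP estimate adds on top of the ML update; swapping in a different prior only changes the shrinkage function, which is the structure the abstract describes.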
Comparison of wheat classification accuracy using different classifiers of the image-100 system
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Chen, S. C.; Moreira, M. A.; Delima, A. M.
1981-01-01
Classification results using single-cell and multi-cell signature acquisition options, a point-by-point Gaussian maximum-likelihood classifier, and K-means clustering of the Image-100 system are presented. The conclusions reached are that: a better indication of correct classification can be provided by using a test area which contains various cover types of the study area; classification accuracy should be evaluated considering both the percentage of correct classification and the error of commission; supervised classification approaches are better than K-means clustering; the Gaussian maximum-likelihood classifier is better than the single-cell and multi-cell signature acquisition options of the Image-100 system; and, in order to obtain high classification accuracy in a large and heterogeneous crop area with the Gaussian maximum-likelihood classifier, homogeneous spectral subclasses of the study crop should be created to derive training statistics.
Donato, David I.
2012-01-01
This report presents the mathematical expressions and the computational techniques required to compute maximum-likelihood estimates for the parameters of the National Descriptive Model of Mercury in Fish (NDMMF), a statistical model used to predict the concentration of methylmercury in fish tissue. The expressions and techniques reported here were prepared to support the development of custom software capable of computing NDMMF parameter estimates more quickly and using less computer memory than is currently possible with available general-purpose statistical software. Computation of maximum-likelihood estimates for the NDMMF by numerical solution of a system of simultaneous equations through repeated Newton-Raphson iterations is described. This report explains the derivation of the mathematical expressions required for computational parameter estimation in sufficient detail to facilitate future derivations for any revised versions of the NDMMF that may be developed.
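As a pocket illustration of the Newton-Raphson MLE machinery the report describes (the NDMMF likelihood itself is not reproduced here), the following sketch iterates theta <- theta - H^(-1) g on the score equation of a toy Poisson-rate model, where the closed-form answer (the sample mean) makes the iterations easy to verify.

```python
import numpy as np

def newton_mle(grad, hess, theta0, tol=1e-10, max_iter=100):
    """Generic Newton-Raphson solver for the score equation grad(theta) = 0."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(hess(theta), grad(theta))
        theta = theta - step
        if np.max(np.abs(step)) < tol:
            break
    return theta

# Toy model: Poisson counts with unknown rate; logL = sum(x)*log(rate) - n*rate.
counts = np.array([3, 5, 2, 4, 6])
grad = lambda th: np.array([counts.sum() / th[0] - counts.size])   # score
hess = lambda th: np.array([[-counts.sum() / th[0] ** 2]])         # Hessian
print(newton_mle(grad, hess, [1.0]))   # converges to counts.mean() = 4.0
```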
Nagelkerke, Nico; Fidler, Vaclav
2015-01-01
The problem of discrimination and classification is central to much of epidemiology. Here we consider the estimation of a logistic regression/discrimination function from training samples, when one of the training samples is subject to misclassification or mislabeling, e.g. diseased individuals are incorrectly classified/labeled as healthy controls. We show that this leads to a zero-inflated binomial model with a defective logistic regression or discrimination function, whose parameters can be estimated using standard statistical methods such as maximum likelihood. These parameters can be used to estimate the probability of true group membership among those, possibly erroneously, classified as controls. Two examples are analyzed and discussed. A simulation study explores properties of the maximum likelihood parameter estimates and the estimates of the number of mislabeled observations.
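A minimal sketch of the idea, under the simplifying assumption that a true case is mislabeled as a control with a fixed probability lam, so that P(labeled case | x) = (1 - lam) * expit(x'beta) is a "defective" logistic curve; the authors' exact parameterization may differ, and the data below are simulated.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def neg_loglik(params, X, label):
    """Defective logistic likelihood: cases leak into the control sample
    with probability lam, so P(labeled case | x) never reaches 1."""
    lam = expit(params[0])               # mislabeling fraction, kept in (0, 1)
    p_case = expit(X @ params[1:])       # true disease probability
    p1 = (1.0 - lam) * p_case
    eps = 1e-12
    return -np.sum(label * np.log(p1 + eps) + (1 - label) * np.log(1 - p1 + eps))

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(2000), rng.normal(size=2000)])
true_case = rng.random(2000) < expit(-1.0 + 2.0 * X[:, 1])
label = true_case & (rng.random(2000) > 0.3)   # 30% of cases mislabeled as controls

fit = minimize(neg_loglik, x0=np.zeros(3), args=(X, label.astype(float)),
               method="BFGS")
print(expit(fit.x[0]), fit.x[1:])   # roughly 0.3 and (-1, 2)
```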
An estimation of Canadian population exposure to cosmic rays.
Chen, Jing; Timmins, Rachel; Verdecchia, Kyle; Sato, Tatsuhiko
2009-08-01
The worldwide average exposure to cosmic rays contributes about 16% of the annual effective dose from natural radiation sources. At ground level, doses from cosmic ray exposure depend strongly on altitude, and weakly on geographical location and solar activity. With the analytical model PARMA, developed by the Japan Atomic Energy Agency, annual effective doses due to cosmic ray exposure at ground level were calculated for more than 1,500 communities across Canada, which together cover more than 85% of the Canadian population. The annual effective doses from cosmic ray exposure in the year 2000, during solar maximum, ranged from 0.27 to 0.72 mSv with a population-weighted national average of 0.30 mSv. For the year 2006, during solar minimum, the doses varied between 0.30 and 0.84 mSv, and the population-weighted national average was 0.33 mSv. Averaged over solar activity, the Canadian population-weighted average annual effective dose due to cosmic ray exposure at ground level is estimated to be 0.31 mSv.
GRID-BASED EXPLORATION OF COSMOLOGICAL PARAMETER SPACE WITH SNAKE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mikkelsen, K.; Næss, S. K.; Eriksen, H. K., E-mail: kristin.mikkelsen@astro.uio.no
2013-11-10
We present a fully parallelized grid-based parameter estimation algorithm for investigating multidimensional likelihoods called Snake, and apply it to cosmological parameter estimation. The basic idea is to map out the likelihood grid-cell by grid-cell according to decreasing likelihood, and stop when a certain threshold has been reached. This approach improves vastly on the 'curse of dimensionality' problem plaguing standard grid-based parameter estimation simply by disregarding grid cells with negligible likelihood. The main advantages of this method compared to standard Metropolis-Hastings Markov Chain Monte Carlo methods include (1) trivial extraction of arbitrary conditional distributions; (2) direct access to Bayesian evidences; (3) better sampling of the tails of the distribution; and (4) nearly perfect parallelization scaling. The main disadvantage is, as in the case of brute-force grid-based evaluation, a dependency on the number of parameters, N_par. One of the main goals of the present paper is to determine how large N_par can be, while still maintaining reasonable computational efficiency; we find that N_par = 12 is well within the capabilities of the method. The performance of the code is tested by comparing cosmological parameters estimated using Snake and the WMAP-7 data with those obtained using CosmoMC, the current standard code in the field. We find fully consistent results, with similar computational expenses, but shorter wall time due to the perfect parallelization scheme.
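The grid-cell-by-grid-cell idea can be sketched in a few lines: keep a max-heap of frontier cells, always expand the most likely one, and stop once every remaining cell falls below a log-likelihood threshold relative to the running maximum. The toy Gaussian likelihood and threshold below are placeholders, not Snake's actual implementation.

```python
import heapq

def loglike(i, j):
    # Toy 2D Gaussian log-likelihood on a grid (stand-in for a cosmological one).
    x, y = (i - 50) / 10.0, (j - 50) / 10.0
    return -0.5 * (x * x + y * y)

def snake_explore(start, threshold):
    """Map grid cells in order of decreasing likelihood; stop when the best
    remaining frontier cell is more than |threshold| below the running maximum."""
    visited, frontier = {}, [(-loglike(*start), start)]
    best = loglike(*start)
    while frontier:
        negll, cell = heapq.heappop(frontier)   # most likely frontier cell first
        if cell in visited:
            continue
        if -negll < best + threshold:           # everything left is negligible
            break
        visited[cell] = -negll
        best = max(best, -negll)
        i, j = cell
        for nb in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if nb not in visited:
                heapq.heappush(frontier, (-loglike(*nb), nb))
    return visited

cells = snake_explore((50, 50), threshold=-10.0)
print(len(cells), "cells mapped")
```

Because only cells above the threshold are ever evaluated, the cost tracks the volume of the high-likelihood region rather than the full hypercube, which is the advertised escape from the curse of dimensionality.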
Cierniak, Robert; Lorent, Anna
2016-09-01
The main aim of this paper is to investigate the conditioning-related properties of our originally formulated statistical model-based iterative approach to the image reconstruction from projections problem, and in this way to demonstrate its superiority over approaches recently used by other authors. The reconstruction algorithm based on this conception uses maximum likelihood estimation with an objective adjusted to the probability distribution of measured signals obtained from an X-ray computed tomography system with parallel beam geometry. The analysis and experimental results presented here show that our analytical approach outperforms the referential algebraic methodology, which is explored widely in the literature and exploited in various commercial implementations.
ERIC Educational Resources Information Center
Molenaar, Peter C. M.; Nesselroade, John R.
1998-01-01
Pseudo-Maximum Likelihood (p-ML) and Asymptotically Distribution Free (ADF) estimation methods for estimating dynamic factor model parameters within a covariance structure framework were compared through a Monte Carlo simulation. Both methods appear to give consistent model parameter estimates, but only ADF gives standard errors and chi-square…
Statistical Bias in Maximum Likelihood Estimators of Item Parameters.
1982-04-01
ERIC Educational Resources Information Center
Beauducel, Andre; Herzberg, Philipp Yorck
2006-01-01
This simulation study compared maximum likelihood (ML) estimation with weighted least squares means and variance adjusted (WLSMV) estimation. The study was based on confirmatory factor analyses with 1, 2, 4, and 8 factors, based on 250, 500, 750, and 1,000 cases, and on 5, 10, 20, and 40 variables with 2, 3, 4, 5, and 6 categories. There was no…
Zeng, Chan; Newcomer, Sophia R; Glanz, Jason M; Shoup, Jo Ann; Daley, Matthew F; Hambidge, Simon J; Xu, Stanley
2013-12-15
The self-controlled case series (SCCS) method is often used to examine the temporal association between vaccination and adverse events using only data from patients who experienced such events. Conditional Poisson regression models are used to estimate incidence rate ratios, and these models perform well with large or medium-sized case samples. However, in some vaccine safety studies, the adverse events studied are rare and the maximum likelihood estimates may be biased. Several bias correction methods have been examined in case-control studies using conditional logistic regression, but none of these methods have been evaluated in studies using the SCCS design. In this study, we used simulations to evaluate two bias correction approaches, the Firth penalized maximum likelihood method and Cordeiro and McCullagh's bias reduction after maximum likelihood estimation, with small sample sizes in studies using the SCCS design. The simulations showed that the bias under the SCCS design with a small number of cases can be large and is also sensitive to a short risk period. The Firth correction method provides finite and less biased estimates than the maximum likelihood method and Cordeiro and McCullagh's method. However, limitations still exist when the risk period in the SCCS design is short relative to the entire observation period.
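For readers unfamiliar with the Firth correction, the sketch below shows its penalized log-likelihood, log L(beta) + 0.5 log det I(beta), in the simplest setting of ordinary logistic regression; the study itself applies the correction to conditional Poisson regression for the SCCS design, which is not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def firth_neg_loglik(beta, X, y):
    """Negative Firth-penalized log-likelihood for logistic regression:
    -(log L(beta) + 0.5 * log det I(beta)), with I = X' W X, W = diag(p(1-p))."""
    p = expit(X @ beta)
    w = p * (1.0 - p)
    loglik = np.sum(y * np.log(p + 1e-12) + (1.0 - y) * np.log(1.0 - p + 1e-12))
    _, logdet = np.linalg.slogdet(X.T @ (w[:, None] * X))  # Fisher information
    return -(loglik + 0.5 * logdet)

rng = np.random.default_rng(2)
X = np.column_stack([np.ones(30), rng.normal(size=30)])
y = (rng.random(30) < expit(1.5 * X[:, 1])).astype(float)

fit = minimize(firth_neg_loglik, np.zeros(2), args=(X, y), method="BFGS")
print(fit.x)  # finite even in small samples where the plain MLE can diverge
```

The Jeffreys-prior penalty shrinks the score toward zero by an O(1/n) amount, which is exactly the first-order bias term it is designed to cancel.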
Convergent evolution of marine mammals is associated with distinct substitutions in common genes
Zhou, Xuming; Seim, Inge; Gladyshev, Vadim N.
2015-01-01
Phenotypic convergence is thought to be driven by parallel substitutions coupled with natural selection at the sequence level. Multiple independent evolutionary transitions of mammals to an aquatic environment offer an opportunity to test this thesis. Here, whole genome alignment of coding sequences identified widespread parallel amino acid substitutions in marine mammals; however, the majority of these changes were not unique to these animals. Conversely, we report that candidate aquatic adaptation genes, identified by signatures of likelihood convergence and/or elevated ratio of nonsynonymous to synonymous nucleotide substitution rate, are characterized by very few parallel substitutions and exhibit distinct sequence changes in each group. Moreover, no significant positive correlation was found between likelihood convergence and positive selection in all three marine lineages. These results suggest that convergence in protein coding genes associated with aquatic lifestyle is mainly characterized by independent substitutions and relaxed negative selection. PMID:26549748
NASA Technical Reports Server (NTRS)
Cliver, E. W.; Ling, A. G.; Richardson, I. G.
2003-01-01
Using a recent classification of the solar wind at 1 AU into its principal components (slow solar wind, high-speed streams, and coronal mass ejections (CMEs)) for 1972-2000, we show that the monthly-averaged galactic cosmic ray intensity is anti-correlated with the percentage of time that the Earth is embedded in CME flows. We suggest that this correlation results primarily from a CME-related change in the tail of the distribution function of hourly-averaged values of the solar wind magnetic field (B) between solar minimum and solar maximum. The number of high-B (≳ 10 nT) values increases by a factor of approx. 3 from minimum to maximum (from 5% of all hours to 17%), with about two-thirds of this increase due to CMEs. On an hour-to-hour basis, average changes of cosmic ray intensity at Earth become negative for solar wind magnetic field values ≳ 10 nT.
New test of Lorentz symmetry using ultrahigh-energy cosmic rays
NASA Astrophysics Data System (ADS)
Anchordoqui, Luis A.; Soriano, Jorge F.
2018-02-01
We propose an innovative test of Lorentz symmetry by observing pairs of simultaneous parallel extensive air showers produced by the fragments of ultrahigh-energy cosmic ray nuclei which disintegrated in collisions with solar photons. We show that the search for a cross-correlation of showers in arrival time and direction becomes background free for an angular scale ≲ 3° and a time window O(10 s). We also show that if the solar photo-disintegration probability of helium is O(10^-5.5) then the hunt for spatiotemporal coincident showers could be within range of existing cosmic ray facilities, such as the Pierre Auger Observatory. We demonstrate that the actual observation of a few events can be used to constrain Lorentz-violating dispersion relations of the nucleon.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Storm, Emma; Weniger, Christoph; Calore, Francesca, E-mail: e.m.storm@uva.nl, E-mail: c.weniger@uva.nl, E-mail: francesca.calore@lapth.cnrs.fr
We present SkyFACT (Sky Factorization with Adaptive Constrained Templates), a new approach for studying, modeling and decomposing diffuse gamma-ray emission. Like most previous analyses, the approach relies on predictions from cosmic-ray propagation codes like GALPROP and DRAGON. However, in contrast to previous approaches, we account for the fact that models are not perfect and allow for a very large number (≳ 10^5) of nuisance parameters to parameterize these imperfections. We combine methods of image reconstruction and adaptive spatio-spectral template regression in one coherent hybrid approach. To this end, we use penalized Poisson likelihood regression, with regularization functions that are motivated by the maximum entropy method. We introduce methods to efficiently handle the high dimensionality of the convex optimization problem as well as the associated semi-sparse covariance matrix, using the L-BFGS-B algorithm and Cholesky factorization. We test the method both on synthetic data as well as on gamma-ray emission from the inner Galaxy, |ℓ| < 90° and |b| < 20°, as observed by the Fermi Large Area Telescope. We finally define a simple reference model that removes most of the residual emission from the inner Galaxy, based on conventional diffuse emission components as well as components for the Fermi bubbles, the Fermi Galactic center excess, and extended sources along the Galactic disk. Variants of this reference model can serve as basis for future studies of diffuse emission in and outside the Galactic disk.
NASA Astrophysics Data System (ADS)
Yang, Changjun; Zhao, Biqiang; Zhu, Jie; Yue, Xinan; Wan, Weixing
2017-10-01
In this study we propose the combination of topside in-situ ion density data from the Communication/Navigation Outage Forecast System (C/NOFS) along with the electron density profile measurement from Constellation Observing System for Meteorology, Ionosphere & Climate (COSMIC) satellites Radio Occultation (RO) for studying the spatial and temporal variations of the ionospheric upper transition height (hT) and the oxygen ion (O+) density scale height. The latitudinal, local time and seasonal distributions of upper transition height show more consistency between hT re-calculated from the O+ profile using an α-Chapman function with linearly variable scale height and that determined from direct in-situ ion composition measurements, than with a constant scale height and only the COSMIC data. The discrepancy in the values of hT between the C/NOFS measurement and that derived by the combination of COSMIC and C/NOFS satellite observations with variable scale height grows larger as the solar activity decreases, which suggests that the photochemistry and the electrodynamics of the equatorial ionosphere during the extreme solar minimum period produce abnormal structures in the vertical plasma distribution. The diurnal variation of scale heights (Hm) exhibits a minimum after sunrise and a maximum around noon near the geomagnetic equator. Further, the values of Hm exhibit a maximum in the summer hemisphere during daytime, whereas in the winter hemisphere the maximum is during night. These features of Hm consistently indicate the prominent role of the vertical electromagnetic (E × B) drift in the equatorial ionosphere.
NASA Astrophysics Data System (ADS)
Zhao, Biqiang
2017-04-01
In this study we propose the combination of topside in-situ ion density data from the Communication/Navigation Outage Forecast System (C/NOFS) along with the electron density profile measurement from Constellation Observing System for Meteorology, Ionosphere & Climate (COSMIC) satellites Radio Occultation (RO) for studying the spatial and temporal variations of the ionospheric upper transition height (hT) and the oxygen ion (O+) density scale height. The latitudinal, local time and seasonal distributions of upper transition height show more consistency between hT re-calculated from the O+ profile using an α-Chapman function with linearly variable scale height and that determined from direct in-situ ion composition measurements, than with a constant scale height and only the COSMIC data. The discrepancy in the values of hT between the C/NOFS measurement and that derived by the combination of COSMIC and C/NOFS satellite observations with variable scale height grows larger as the solar activity decreases, which suggests that the photochemistry and the electrodynamics of the equatorial ionosphere during the extreme solar minimum period produce abnormal structures in the vertical plasma distribution. The diurnal variation of scale heights (Hm) exhibits a minimum after sunrise and a maximum around noon near the geomagnetic equator. Further, the values of Hm exhibit a maximum in the summer hemisphere during daytime, whereas in the winter hemisphere the maximum is during night. These features of Hm consistently indicate the prominent role of the vertical electromagnetic (E × B) drift in the equatorial ionosphere.
Huang, Chiung-Yu; Qin, Jing
2013-01-01
The Canadian Study of Health and Aging (CSHA) employed a prevalent cohort design to study survival after onset of dementia, where patients with dementia were sampled and the onset time of dementia was determined retrospectively. The prevalent cohort sampling scheme favors individuals who survive longer. Thus, the observed survival times are subject to length bias. In recent years, there has been a rising interest in developing estimation procedures for prevalent cohort survival data that not only account for length bias but also actually exploit the incidence distribution of the disease to improve efficiency. This article considers semiparametric estimation of the Cox model for the time from dementia onset to death under a stationarity assumption with respect to the disease incidence. Under the stationarity condition, the semiparametric maximum likelihood estimation is expected to be fully efficient yet difficult to perform for statistical practitioners, as the likelihood depends on the baseline hazard function in a complicated way. Moreover, the asymptotic properties of the semiparametric maximum likelihood estimator are not well-studied. Motivated by the composite likelihood method (Besag 1974), we develop a composite partial likelihood method that retains the simplicity of the popular partial likelihood estimator and can be easily performed using standard statistical software. When applied to the CSHA data, the proposed method estimates a significant difference in survival between the vascular dementia group and the possible Alzheimer’s disease group, while the partial likelihood method for left-truncated and right-censored data yields a greater standard error and a 95% confidence interval covering 0, thus highlighting the practical value of employing a more efficient methodology. To check the assumption of stable disease for the CSHA data, we also present new graphical and numerical tests in the article. The R code used to obtain the maximum composite partial likelihood estimator for the CSHA data is available in the online Supplementary Material, posted on the journal web site. PMID:24000265
NASA Astrophysics Data System (ADS)
Schlickeiser, R.; Oppotsch, J.
2017-12-01
The analytical theory of diffusive acceleration of cosmic rays at parallel stationary shock waves of arbitrary speed with magnetostatic turbulence is developed from first principles. The theory is based on the diffusion approximation to the gyrotropic cosmic-ray particle phase-space distribution functions in the respective rest frames of the up- and downstream medium. We derive the correct cosmic-ray jump conditions for the cosmic-ray current and density, and match the up- and downstream distribution functions at the position of the shock. It is essential to account for the different particle momentum coordinates in the up- and downstream media. Analytical expressions for the momentum spectra of shock-accelerated cosmic rays are calculated. These are valid for arbitrary shock speeds including relativistic shocks. The correctly taken limit for nonrelativistic shock speeds leads to a universal broken power-law momentum spectrum of accelerated particles with velocities well above the injection velocity threshold, where the universal power-law spectral index q ≃ 2 - γ_1 - 4 is independent of the flow compression ratio r. For nonrelativistic shock speeds, we calculate for the first time the injection velocity threshold, settling the long-standing injection problem for nonrelativistic shock acceleration.
Chen, Rui; Hyrien, Ollivier
2011-01-01
This article deals with quasi- and pseudo-likelihood estimation in a class of continuous-time multi-type Markov branching processes observed at discrete points in time. “Conventional” and conditional estimation are discussed for both approaches. We compare their properties and identify situations where they lead to asymptotically equivalent estimators. Both approaches possess robustness properties, and coincide with maximum likelihood estimation in some cases. Quasi-likelihood functions involving only linear combinations of the data may be unable to estimate all model parameters. Remedial measures exist, including the resort either to non-linear functions of the data or to conditioning the moments on appropriate sigma-algebras. The method of pseudo-likelihood may also resolve this issue. We investigate the properties of these approaches in three examples: the pure birth process, the linear birth-and-death process, and a two-type process that generalizes the previous two examples. Simulation studies are conducted to evaluate performance in finite samples. PMID:21552356
A Solution to Separation and Multicollinearity in Multiple Logistic Regression
Shen, Jianzhao; Gao, Sujuan
2010-01-01
In dementia screening tests, item selection for shortening an existing screening test can be achieved using multiple logistic regression. However, maximum likelihood estimates for such logistic regression models often experience serious bias or even non-existence because of separation and multicollinearity problems resulting from a large number of highly correlated items. Firth (1993, Biometrika, 80(1), 27–38) proposed a penalized likelihood estimator for generalized linear models and it was shown to reduce bias and the non-existence problems. The ridge regression has been used in logistic regression to stabilize the estimates in cases of multicollinearity. However, neither approach solves both problems. In this paper, we propose a double penalized maximum likelihood estimator combining Firth’s penalized likelihood equation with a ridge parameter. We present a simulation study evaluating the empirical performance of the double penalized likelihood estimator in small to moderate sample sizes. We demonstrate the proposed approach using current screening data from a community-based dementia study. PMID:20376286
A Solution to Separation and Multicollinearity in Multiple Logistic Regression.
Shen, Jianzhao; Gao, Sujuan
2008-10-01
In dementia screening tests, item selection for shortening an existing screening test can be achieved using multiple logistic regression. However, maximum likelihood estimates for such logistic regression models often experience serious bias or even non-existence because of separation and multicollinearity problems resulting from a large number of highly correlated items. Firth (1993, Biometrika, 80(1), 27-38) proposed a penalized likelihood estimator for generalized linear models and it was shown to reduce bias and the non-existence problems. The ridge regression has been used in logistic regression to stabilize the estimates in cases of multicollinearity. However, neither approach solves both problems. In this paper, we propose a double penalized maximum likelihood estimator combining Firth's penalized likelihood equation with a ridge parameter. We present a simulation study evaluating the empirical performance of the double penalized likelihood estimator in small to moderate sample sizes. We demonstrate the proposed approach using current screening data from a community-based dementia study.
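A minimal sketch of the double penalty, combining a Firth-type Jeffreys term with a ridge term on the non-intercept coefficients; the exact form used by the authors may differ. The perfectly separated toy data illustrate the case where plain maximum likelihood estimates do not exist but the double penalized estimates stay finite.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def neg_double_penalized_loglik(beta, X, y, ridge):
    """-(log L + 0.5 log det I(beta) - 0.5 * ridge * ||beta[1:]||^2):
    Jeffreys (Firth) penalty against separation, ridge against collinearity."""
    p = expit(X @ beta)
    w = p * (1.0 - p)
    loglik = np.sum(y * np.log(p + 1e-12) + (1.0 - y) * np.log(1.0 - p + 1e-12))
    _, logdet = np.linalg.slogdet(X.T @ (w[:, None] * X))
    return -(loglik + 0.5 * logdet - 0.5 * ridge * beta[1:] @ beta[1:])

# Perfectly separated toy data: the unpenalized MLE diverges here.
X = np.column_stack([np.ones(8), np.arange(8.0)])
y = (np.arange(8) >= 4).astype(float)
fit = minimize(neg_double_penalized_loglik, np.zeros(2), args=(X, y, 1.0),
               method="BFGS")
print(fit.x)   # finite coefficient estimates
```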
Evidence for a mixed mass composition at the 'ankle' in the cosmic-ray spectrum
NASA Astrophysics Data System (ADS)
Aab, A.; Abreu, P.; Aglietta, M.; Ahn, E. J.; Al Samarai, I.; Albuquerque, I. F. M.; Allekotte, I.; Allison, P.; Almela, A.; Alvarez Castillo, J.; Alvarez-Muñiz, J.; Ambrosio, M.; Anastasi, G. A.; Anchordoqui, L.; Andrada, B.; Andringa, S.; Aramo, C.; Arqueros, F.; Arsene, N.; Asorey, H.; Assis, P.; Aublin, J.; Avila, G.; Badescu, A. M.; Balaceanu, A.; Baus, C.; Beatty, J. J.; Becker, K. H.; Bellido, J. A.; Berat, C.; Bertaina, M. E.; Biermann, P. L.; Billoir, P.; Biteau, J.; Blaess, S. G.; Blanco, A.; Blazek, J.; Bleve, C.; Boháčová, M.; Boncioli, D.; Bonifazi, C.; Borodai, N.; Botti, A. M.; Brack, J.; Brancus, I.; Bretz, T.; Bridgeman, A.; Briechle, F. L.; Buchholz, P.; Bueno, A.; Buitink, S.; Buscemi, M.; Caballero-Mora, K. S.; Caccianiga, B.; Caccianiga, L.; Cancio, A.; Canfora, F.; Caramete, L.; Caruso, R.; Castellina, A.; Cataldi, G.; Cazon, L.; Cester, R.; Chavez, A. G.; Chiavassa, A.; Chinellato, J. A.; Chudoba, J.; Clay, R. W.; Colalillo, R.; Coleman, A.; Collica, L.; Coluccia, M. R.; Conceição, R.; Contreras, F.; Cooper, M. J.; Coutu, S.; Covault, C. E.; Cronin, J.; Dallier, R.; D'Amico, S.; Daniel, B.; Dasso, S.; Daumiller, K.; Dawson, B. R.; de Almeida, R. M.; de Jong, S. J.; De Mauro, G.; de Mello Neto, J. R. T.; De Mitri, I.; de Oliveira, J.; de Souza, V.; Debatin, J.; del Peral, L.; Deligny, O.; Di Giulio, C.; Di Matteo, A.; Díaz Castro, M. L.; Diogo, F.; Dobrigkeit, C.; D'Olivo, J. C.; Dorofeev, A.; dos Anjos, R. C.; Dova, M. T.; Dundovic, A.; Ebr, J.; Engel, R.; Erdmann, M.; Erfani, M.; Escobar, C. O.; Espadanal, J.; Etchegoyen, A.; Falcke, H.; Fang, K.; Farrar, G.; Fauth, A. C.; Fazzini, N.; Fick, B.; Figueira, J. M.; Filevich, A.; Filipčič, A.; Fratu, O.; Freire, M. M.; Fujii, T.; Fuster, A.; García, B.; Garcia-Pinto, D.; Gaté, F.; Gemmeke, H.; Gherghel-Lascu, A.; Ghia, P. L.; Giaccari, U.; Giammarchi, M.; Giller, M.; Głas, D.; Glaser, C.; Glass, H.; Golup, G.; Gómez Berisso, M.; Gómez Vitale, P. F.; González, N.; Gookin, B.; Gordon, J.; Gorgi, A.; Gorham, P.; Gouffon, P.; Grillo, A. F.; Grubb, T. D.; Guarino, F.; Guedes, G. P.; Hampel, M. R.; Hansen, P.; Harari, D.; Harrison, T. A.; Harton, J. L.; Hasankiadeh, Q.; Haungs, A.; Hebbeker, T.; Heck, D.; Heimann, P.; Herve, A. E.; Hill, G. C.; Hojvat, C.; Holt, E.; Homola, P.; Hörandel, J. R.; Horvath, P.; Hrabovský, M.; Huege, T.; Hulsman, J.; Insolia, A.; Isar, P. G.; Jandt, I.; Jansen, S.; Johnsen, J. A.; Josebachuili, M.; Kääpä, A.; Kambeitz, O.; Kampert, K. H.; Kasper, P.; Katkov, I.; Keilhauer, B.; Kemp, E.; Kieckhafer, R. M.; Klages, H. O.; Kleifges, M.; Kleinfeller, J.; Krause, R.; Krohm, N.; Kuempel, D.; Kukec Mezek, G.; Kunka, N.; Kuotb Awad, A.; LaHurd, D.; Latronico, L.; Lauscher, M.; Lautridou, P.; Lebrun, P.; Legumina, R.; Leigui de Oliveira, M. A.; Letessier-Selvon, A.; Lhenry-Yvon, I.; Link, K.; Lopes, L.; López, R.; López Casado, A.; Luce, Q.; Lucero, A.; Malacari, M.; Mallamaci, M.; Mandat, D.; Mantsch, P.; Mariazzi, A. G.; Mariş, I. C.; Marsella, G.; Martello, D.; Martinez, H.; Martínez Bravo, O.; Masías Meza, J. J.; Mathes, H. J.; Mathys, S.; Matthews, J.; Matthews, J. A. J.; Matthiae, G.; Mayotte, E.; Mazur, P. O.; Medina, C.; Medina-Tanco, G.; Melo, D.; Menshikov, A.; Messina, S.; Micheletti, M. I.; Middendorf, L.; Minaya, I. A.; Miramonti, L.; Mitrica, B.; Mockler, D.; Molina-Bueno, L.; Mollerach, S.; Montanet, F.; Morello, C.; Mostafá, M.; Müller, G.; Muller, M. A.; Müller, S.; Naranjo, I.; Navas, S.; Nellen, L.; Neuser, J.; Nguyen, P. 
H.; Niculescu-Oglinzanu, M.; Niechciol, M.; Niemietz, L.; Niggemann, T.; Nitz, D.; Nosek, D.; Novotny, V.; Nožka, H.; Núñez, L. A.; Ochilo, L.; Oikonomou, F.; Olinto, A.; Pakk Selmi-Dei, D.; Palatka, M.; Pallotta, J.; Papenbreer, P.; Parente, G.; Parra, A.; Paul, T.; Pech, M.; Pedreira, F.; Pȩkala, J.; Pelayo, R.; Peña-Rodriguez, J.; Pereira, L. A. S.; Perrone, L.; Peters, C.; Petrera, S.; Phuntsok, J.; Piegaia, R.; Pierog, T.; Pieroni, P.; Pimenta, M.; Pirronello, V.; Platino, M.; Plum, M.; Porowski, C.; Prado, R. R.; Privitera, P.; Prouza, M.; Quel, E. J.; Querchfeld, S.; Quinn, S.; Ramos-Pollant, R.; Rautenberg, J.; Ravel, O.; Ravignani, D.; Reinert, D.; Revenu, B.; Ridky, J.; Risse, M.; Ristori, P.; Rizi, V.; Rodrigues de Carvalho, W.; Rodriguez Fernandez, G.; Rodriguez Rojo, J.; Rodríguez-Frías, M. D.; Rogozin, D.; Rosado, J.; Roth, M.; Roulet, E.; Rovero, A. C.; Saffi, S. J.; Saftoiu, A.; Salazar, H.; Saleh, A.; Salesa Greus, F.; Salina, G.; Sanabria Gomez, J. D.; Sánchez, F.; Sanchez-Lucas, P.; Santos, E. M.; Santos, E.; Sarazin, F.; Sarkar, B.; Sarmento, R.; Sarmiento-Cano, C.; Sato, R.; Scarso, C.; Schauer, M.; Scherini, V.; Schieler, H.; Schmidt, D.; Scholten, O.; Schovánek, P.; Schröder, F. G.; Schulz, A.; Schulz, J.; Schumacher, J.; Sciutto, S. J.; Segreto, A.; Settimo, M.; Shadkam, A.; Shellard, R. C.; Sigl, G.; Silli, G.; Sima, O.; Śmiałkowski, A.; Šmída, R.; Snow, G. R.; Sommers, P.; Sonntag, S.; Sorokin, J.; Squartini, R.; Stanca, D.; Stanič, S.; Stasielak, J.; Strafella, F.; Suarez, F.; Suarez Durán, M.; Sudholz, T.; Suomijärvi, T.; Supanitsky, A. D.; Sutherland, M. S.; Swain, J.; Szadkowski, Z.; Taborda, O. A.; Tapia, A.; Tepe, A.; Theodoro, V. M.; Timmermans, C.; Todero Peixoto, C. J.; Tomankova, L.; Tomé, B.; Tonachini, A.; Torralba Elipe, G.; Torres Machado, D.; Torri, M.; Travnicek, P.; Trini, M.; Ulrich, R.; Unger, M.; Urban, M.; Valbuena-Delgado, A.; Valdés Galicia, J. F.; Valiño, I.; Valore, L.; van Aar, G.; van Bodegom, P.; van den Berg, A. M.; van Vliet, A.; Varela, E.; Vargas Cárdenas, B.; Varner, G.; Vázquez, J. R.; Vázquez, R. A.; Veberič, D.; Verzi, V.; Vicha, J.; Villaseñor, L.; Vorobiov, S.; Wahlberg, H.; Wainberg, O.; Walz, D.; Watson, A. A.; Weber, M.; Weindl, A.; Wiencke, L.; Wilczyński, H.; Winchen, T.; Wittkowski, D.; Wundheiler, B.; Wykes, S.; Yang, L.; Yelos, D.; Younk, P.; Yushkov, A.; Zas, E.; Zavrtanik, D.; Zavrtanik, M.; Zepeda, A.; Zimmermann, B.; Ziolkowski, M.; Zong, Z.; Zuccarello, F.; Pierre Auger Collaboration
2016-11-01
We report a first measurement for ultrahigh energy cosmic rays of the correlation between the depth of shower maximum and the signal in the water Cherenkov stations of air-showers registered simultaneously by the fluorescence and the surface detectors of the Pierre Auger Observatory. Such a correlation measurement is a unique feature of a hybrid air-shower observatory with sensitivity to both the electromagnetic and muonic components. It allows an accurate determination of the spread of primary masses in the cosmic-ray flux. Up to now, constraints on the spread of primary masses have been dominated by systematic uncertainties. The present correlation measurement is not affected by systematics in the measurement of the depth of shower maximum or the signal in the water Cherenkov stations. The analysis relies on general characteristics of air showers and is thus robust also with respect to uncertainties in hadronic event generators. The observed correlation in the energy range around the 'ankle' at lg(E/eV) = 18.5-19.0 differs significantly from expectations for pure primary cosmic-ray compositions. A light composition made up of proton and helium only is equally inconsistent with observations. The data are explained well by a mixed composition including nuclei with mass A > 4. Scenarios such as the proton dip model, with almost pure compositions, are thus disfavored as the sole explanation of the ultrahigh-energy cosmic-ray flux at Earth.
Evidence for a mixed mass composition at the ‘ankle’ in the cosmic-ray spectrum
Aab, Alexander
2016-09-28
Here, we report a first measurement for ultra-high energy cosmic rays of the correlation between the depth of shower maximum and the signal in the water Cherenkov stations of air-showers registered simultaneously by the fluorescence and the surface detectors of the Pierre Auger Observatory. Such a correlation measurement is a unique feature of a hybrid air-shower observatory with sensitivity to both the electromagnetic and muonic components. It allows an accurate determination of the spread of primary masses in the cosmic-ray flux. Up to now, constraints on the spread of primary masses have been dominated by systematic uncertainties. The present correlation measurement is not affected by systematics in the measurement of the depth of shower maximum or the signal in the water Cherenkov stations. The analysis relies on general characteristics of air showers and is thus robust also with respect to uncertainties in hadronic event generators. The observed correlation in the energy range around the 'ankle' at lg(E/eV) = 18.5-19.0 differs significantly from expectations for pure primary cosmic-ray compositions. A light composition made up of proton and helium only is equally inconsistent with observations. The data are explained well by a mixed composition including nuclei with mass A > 4. Scenarios such as the proton dip model, with almost pure compositions, are thus disfavoured as the sole explanation of the ultrahigh-energy cosmic-ray flux at Earth.
A Multi-Variate Fit to the Chemical Composition of the Cosmic-Ray Spectrum
NASA Astrophysics Data System (ADS)
Eisch, Jonathan
Since the discovery of cosmic rays over a century ago, evidence of their origins has remained elusive. Deflected by galactic magnetic fields, the only direct evidence of their origin and propagation remains encoded in their energy distribution and chemical composition. Current models of galactic cosmic rays predict variations of the energy distribution of individual elements in an energy region around 3×10^15 eV known as the knee. This work presents a method to measure the energy distribution of individual elemental groups in the knee region and its application to a year of data from the IceCube detector. The method uses cosmic rays detected by both IceTop, the surface-array component, and the deep-ice component of IceCube during the 2009-2010 operation of the IC-59 detector. IceTop is used to measure the energy and the relative likelihood of the mass composition using the signal from the cosmic-ray induced extensive air shower reaching the surface. IceCube, 1.5 km below the surface, measures the energy of the high-energy bundle of muons created in the very first interactions after the cosmic ray enters the atmosphere. These event distributions are fit by a constrained model derived from detailed simulations of cosmic rays representing five chemical elements. The results of this analysis are evaluated in terms of the theoretical uncertainties in cosmic-ray interactions and seasonal variations in the atmosphere. The improvements in high-energy cosmic ray hadronic-interaction models informed by this analysis, combined with increased data from subsequent operation of the IceCube detector, could provide crucial limits on the origin of cosmic rays and their propagation through the galaxy. In the course of developing this method, a number of analysis and statistical techniques were developed to deal with the difficulties inherent in this type of measurement. These include a composition-sensitive air shower reconstruction technique, a method to model simulated event distributions with limited statistics, and a method to optimize and estimate the error on a regularized fit.
Cosmic rays at the ankle: Composition studies using the Pierre Auger Observatory
NASA Astrophysics Data System (ADS)
Younk, Patrick William
The ankle is a flattening of the cosmic ray energy spectrum at approximately 10^18.5 eV. Its origin is unknown. This thesis investigates the nature of cosmic rays with energy near 10^18.5 eV, and it evaluates two phenomenological models for the ankle feature. Data from the Pierre Auger Observatory is used. Two important calibration studies for the Pierre Auger Observatory are presented: (1) a measurement of the time offset between the surface detector and the fluorescence detector, and (2) a measurement of the fluorescence telescope alignment. The uncertainty on the time offset measurement is 20 ns and the uncertainty on the fluorescence telescope alignment is 0.14°; both uncertainties are within the design specifications of the observatory. Studies to determine the cosmic ray composition mixture near the ankle are presented. Measurements of the average depth of shower maximum suggest that the average particle mass is gradually decreasing between 10^17.8 and 10^18.4 eV and that the average particle mass is steady or slightly increasing between 10^18.5 and 10^19.0 eV. Measurements of the average depth of shower maximum also suggest that the fractional abundance of intermediate weight nuclei such as carbon steadily increases from 10^18 to 10^19 eV. Between 10^18.5 and 10^19.0 eV, the correlation between the depth of shower maximum and the ground level muon density is consistent with a significant fractional abundance of both protons and intermediate weight nuclei. Two popular phenomenological models for the ankle are compared with the above composition results. The first model is that the ankle marks the intersection between a soft galactic spectrum and a hard extragalactic spectrum. The second model is that the ankle is part of a dip in the cosmic ray spectrum (the pair production dip) caused by the attenuation of protons as they travel through intergalactic space. It is demonstrated that the experimental results favor the first model.
THE EFFECT OF A DYNAMIC INNER HELIOSHEATH THICKNESS ON COSMIC-RAY MODULATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manuel, R.; Ferreira, S. E. S.; Potgieter, M. S., E-mail: rexmanuel@live.com
2015-02-01
The time-dependent modulation of galactic cosmic rays in the heliosphere is studied over different polarity cycles by computing 2.5 GV proton intensities using a two-dimensional, time-dependent modulation model. By incorporating recent theoretical advances in the relevant transport parameters in the model, we showed in previous work that this approach gave realistic computed intensities over a solar cycle. New in this work is that a time dependence of the solar wind termination shock (TS) position is implemented in our model to study the effect of a dynamic inner heliosheath thickness (the region between the TS and heliopause) on the solar modulation of galactic cosmic rays. The study reveals that changes in the inner heliosheath thickness, arising from a time-dependent shock position, do affect cosmic-ray intensities everywhere in the heliosphere over a solar cycle, with the smallest effect in the innermost heliosphere. A time-dependent TS position causes a phase difference between the solar activity periods and the corresponding intensity periods. The maximum intensities in response to a solar minimum activity period are found to be dependent on the time-dependent TS profile. It is found that changing the width of the inner heliosheath with time over a solar cycle can shift the time of when the maximum or minimum cosmic-ray intensities occur at various distances throughout the heliosphere, but more significantly in the outer heliosphere. The time-dependent extent of the inner heliosheath, as affected by solar activity conditions, is thus an additional time-dependent factor to be considered in the long-term modulation of cosmic rays.
Lirio, R B; Dondériz, I C; Pérez Abalo, M C
1992-08-01
The methodology of Receiver Operating Characteristic curves based on the signal detection model is extended to evaluate the accuracy of two-stage diagnostic strategies. A computer program is developed for the maximum likelihood estimation of parameters that characterize the sensitivity and specificity of two-stage classifiers according to this extended methodology. Its use is briefly illustrated with data collected in a two-stage screening for auditory defects.
ERIC Educational Resources Information Center
Kelderman, Henk
In this paper, algorithms are described for obtaining the maximum likelihood estimates of the parameters in log-linear models. Modified versions of the iterative proportional fitting and Newton-Raphson algorithms are described that work on the minimal sufficient statistics rather than on the usual counts in the full contingency table. This is…
Maximum Likelihood Item Easiness Models for Test Theory Without an Answer Key
Batchelder, William H.
2014-01-01
Cultural consensus theory (CCT) is a data aggregation technique with many applications in the social and behavioral sciences. We describe the intuition and theory behind a set of CCT models for continuous type data using maximum likelihood inference methodology. We describe how bias parameters can be incorporated into these models. We introduce two extensions to the basic model in order to account for item rating easiness/difficulty. The first extension is a multiplicative model and the second is an additive model. We show how the multiplicative model is related to the Rasch model. We describe several maximum-likelihood estimation procedures for the models and discuss issues of model fit and identifiability. We describe how the CCT models could be used to give alternative consensus-based measures of reliability. We demonstrate the utility of both the basic and extended models on a set of essay rating data and give ideas for future research. PMID:29795812
NASA Technical Reports Server (NTRS)
Chittineni, C. B.
1979-01-01
The problem of estimating label imperfections and the use of the estimation in identifying mislabeled patterns is presented. Expressions for the maximum likelihood estimates of classification errors and a priori probabilities are derived from the classification of a set of labeled patterns. Expressions also are given for the asymptotic variances of probability of correct classification and proportions. Simple models are developed for imperfections in the labels and for classification errors and are used in the formulation of a maximum likelihood estimation scheme. Schemes are presented for the identification of mislabeled patterns in terms of threshold on the discriminant functions for both two-class and multiclass cases. Expressions are derived for the probability that the imperfect label identification scheme will result in a wrong decision and are used in computing thresholds. The results of practical applications of these techniques in the processing of remotely sensed multispectral data are presented.
Bayesian structural equation modeling in sport and exercise psychology.
Stenling, Andreas; Ivarsson, Andreas; Johnson, Urban; Lindwall, Magnus
2015-08-01
Bayesian statistics is on the rise in mainstream psychology, but applications in sport and exercise psychology research are scarce. In this article, the foundations of Bayesian analysis are introduced, and we will illustrate how to apply Bayesian structural equation modeling in a sport and exercise psychology setting. More specifically, we contrasted a confirmatory factor analysis on the Sport Motivation Scale II estimated with the most commonly used estimator, maximum likelihood, and a Bayesian approach with weakly informative priors for cross-loadings and correlated residuals. The results indicated that the model with Bayesian estimation and weakly informative priors provided a good fit to the data, whereas the model estimated with a maximum likelihood estimator did not produce a well-fitting model. The reasons for this discrepancy between maximum likelihood and Bayesian estimation are discussed as well as potential advantages and caveats with the Bayesian approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beer, M.
1980-12-01
The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates.
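Under the multivariate normal model, the maximum likelihood combination of correlated estimates of a common eigenvalue reduces to the generalized least squares (minimum variance) weighting, which the sketch below demonstrates with assumed toy numbers; the SAM-CE/VIM covariances are of course not reproduced here.

```python
import numpy as np

# ML/GLS combination of correlated estimates x of one eigenvalue k:
#   k_hat = (1' V^{-1} x) / (1' V^{-1} 1),  var(k_hat) = 1 / (1' V^{-1} 1).

x = np.array([1.002, 0.998, 1.005])          # individual eigenvalue estimates (toy)
V = np.array([[4.0, 1.5, 1.0],
              [1.5, 3.0, 0.8],
              [1.0, 0.8, 5.0]]) * 1e-6       # assumed covariance of the estimates

ones = np.ones(3)
Vinv_ones = np.linalg.solve(V, ones)
k_hat = (Vinv_ones @ x) / (Vinv_ones @ ones)
k_err = np.sqrt(1.0 / (Vinv_ones @ ones))
print(k_hat, k_err)                          # combined estimate and its std. dev.
```

When sample covariances replace the true V, the weights are only approximately optimal, which is the caveat the abstract's last sentence quantifies.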
A Maximum Likelihood Approach to Functional Mapping of Longitudinal Binary Traits
Wang, Chenguang; Li, Hongying; Wang, Zhong; Wang, Yaqun; Wang, Ningtao; Wang, Zuoheng; Wu, Rongling
2013-01-01
Despite their importance in biology and biomedicine, genetic mapping of binary traits that change over time has not been well explored. In this article, we develop a statistical model for mapping quantitative trait loci (QTLs) that govern longitudinal responses of binary traits. The model is constructed within the maximum likelihood framework by which the association between binary responses is modeled in terms of conditional log odds-ratios. With this parameterization, the maximum likelihood estimates (MLEs) of marginal mean parameters are robust to the misspecification of time dependence. We implement an iterative procedure to obtain the MLEs of QTL genotype-specific parameters that define longitudinal binary responses. The usefulness of the model was validated by analyzing a real example in rice. Simulation studies were performed to investigate the statistical properties of the model, showing that the model has power to identify and map specific QTLs responsible for the temporal pattern of binary traits. PMID:23183762
Profile-Likelihood Approach for Estimating Generalized Linear Mixed Models with Factor Structures
ERIC Educational Resources Information Center
Jeon, Minjeong; Rabe-Hesketh, Sophia
2012-01-01
In this article, the authors suggest a profile-likelihood approach for estimating complex models by maximum likelihood (ML) using standard software and minimal programming. The method works whenever setting some of the parameters of the model to known constants turns the model into a standard model. An important class of models that can be…
DOE Office of Scientific and Technical Information (OSTI.GOV)
T'Jampens, Stephane; /Orsay
2006-09-18
This thesis presents the full angular, time-dependent analysis of the vector-vector channel B_d^0 → J/ψ K*^0(→ K_S^0 π^0). After a review of CP violation in the B meson system, the phenomenology of the charmonium-K*(892) channels is presented. The method for the measurement of the transversity amplitudes of B → J/ψ K*(892), based on a pseudo-likelihood method, is then described. The results from 81.9 fb^-1 of data collected by the BABAR detector at the Υ(4S) resonance peak are |A_0|^2 = 0.565 ± 0.011 ± 0.004, |A_∥|^2 = 0.206 ± 0.016 ± 0.007, |A_⊥|^2 = 0.228 ± 0.016 ± 0.007, δ_∥ = -2.766 ± 0.105 ± 0.040 and δ_⊥ = 2.935 ± 0.067 ± 0.040. Note that (δ_∥, δ_⊥) → (-δ_∥, π - δ_⊥) is also a solution. The strong phases δ_∥ and δ_⊥ are at ≳ 3σ from ±π, signaling the presence of final-state interactions and the breakdown of the factorization hypothesis. The forward-backward analysis of the Kπ mass spectrum revealed the presence of a coherent S-wave interfering with the K*(892). It is the first evidence of this wave in the Kπ system coming from a B meson. The particularity of the B_d^0 → J/ψ K*^0(→ K_S^0 π^0) channel is to have not only a time-dependent but also an angular distribution, which allows the measurement of both sin 2β and cos 2β. The results from an unbinned maximum likelihood fit are sin 2β = -0.10 ± 0.57 ± 0.14 and cos 2β = 3.32 +0.76/-0.96 ± 0.27, with the transversity amplitudes fixed to the values given above. The other solution for the strong phases flips the sign of cos 2β. Theoretical considerations based on s-quark helicity conservation favor the choice of strong phases given above, leading to a positive sign for cos 2β. The sign of cos 2β is the one predicted by the Standard Model.
An Investigation into the Nature of High Altitude Cosmic Radiation in the Stratosphere
ERIC Educational Resources Information Center
Bancroft, Samuel; Bancroft, Ben; Greenwood, Jake
2014-01-01
An experiment was carried out to investigate the changes in ionizing cosmic radiation as a function of altitude. This was carried out using a Geiger-Müller tube on-board a high altitude balloon, which rose to an altitude of 31 685 m. The gathered data show that the Geiger-Müller tube count readings increased to a maximum at an altitude of about 24…
Cosmic radiation dose measurements from the RaD-X flight campaign
NASA Astrophysics Data System (ADS)
Mertens, Christopher J.; Gronoff, Guillaume P.; Norman, Ryan B.; Hayes, Bryan M.; Lusby, Terry C.; Straume, Tore; Tobiska, W. Kent; Hands, Alex; Ryden, Keith; Benton, Eric; Wiley, Scott; Gersey, Brad; Wilkins, Richard; Xu, Xiaojing
2016-10-01
The NASA Radiation Dosimetry Experiment (RaD-X) stratospheric balloon flight mission obtained measurements for improving the understanding of cosmic radiation transport in the atmosphere and human exposure to this ionizing radiation field in the aircraft environment. The value of dosimetric measurements from the balloon platform is that they can be used to characterize cosmic ray primaries, the ultimate source of aviation radiation exposure. In addition, radiation detectors were flown to assess their potential application to long-term, continuous monitoring of the aircraft radiation environment. The RaD-X balloon was successfully launched from Fort Sumner, New Mexico (34.5°N, 104.2°W) on 25 September 2015. Over 18 h of flight data were obtained from each of the four different science instruments at altitudes above 20 km. The RaD-X balloon flight was supplemented by contemporaneous aircraft measurements. Flight-averaged dosimetric quantities are reported at seven altitudes to provide benchmark measurements for improving aviation radiation models. The altitude range of the flight data extends from commercial aircraft altitudes to above the Pfotzer maximum where the dosimetric quantities are influenced by cosmic ray primaries. The RaD-X balloon flight observed an absence of the Pfotzer maximum in the measurements of dose equivalent rate.
Cosmic Radiation Dose Measurements from the RaD-X Flight Campaign
NASA Technical Reports Server (NTRS)
Mertens, Christopher J.; Gronoff, Guillaume P.; Norman, Ryan B.; Hayes, Bryan M.; Lusby, Terry C.; Straume, Tore; Tobiska, W. Kent; Hands, Alex; Ryden, Keith; Benton, Eric;
2016-01-01
The NASA Radiation Dosimetry Experiment (RaD-X) stratospheric balloon flight mission obtained measurements for improving the understanding of cosmic radiation transport in the atmosphere and human exposure to this ionizing radiation field in the aircraft environment. The value of dosimetric measurements from the balloon platform is that they can be used to characterize cosmic ray primaries, the ultimate source of aviation radiation exposure. In addition, radiation detectors were flown to assess their potential application to long-term, continuous monitoring of the aircraft radiation environment. The RaD-X balloon was successfully launched from Fort Sumner, New Mexico (34.5 degrees North, 104.2 degrees West) on 25 September 2015. Over 18 hours of flight data were obtained from each of the four different science instruments at altitudes above 20 kilometers. The RaD-X balloon flight was supplemented by contemporaneous aircraft measurements. Flight-averaged dosimetric quantities are reported at seven altitudes to provide benchmark measurements for improving aviation radiation models. The altitude range of the flight data extends from commercial aircraft altitudes to above the Pfotzer maximum where the dosimetric quantities are influenced by cosmic ray primaries. The RaD-X balloon flight observed an absence of the Pfotzer maximum in the measurements of dose equivalent rate.
NASA Astrophysics Data System (ADS)
Dorman, L. I.; Pustil'Nik, L. A.; Yom Din, G.
2003-04-01
The database of Professor Rogers (1887), which includes wheat prices in England in the Middle Ages (1249-1703), was used to search for possible manifestations of solar activity and cosmic ray intensity variations. The main object of our statistical analysis is the investigation of bursts of prices. Our study shows that bursts and troughs of wheat prices take place at extreme states (maximums or minimums) of solar activity cycles. We present a conceptual model of possible modes for sensitivity of wheat prices to weather conditions, caused by cosmic ray intensity solar cycle variations, and compare the expected price fluctuations with wheat price variations recorded in Medieval England. We compared statistical properties of the intervals between price bursts with statistical properties of the intervals between extremes (minimums) of solar cycles during the years 1700-2000. The medians of both samples have values of 11.00 and 10.7 years; standard deviations are 1.44 and 1.53 years for prices and for solar activity, respectively. The hypothesis that the frequency distributions are the same for both samples has a significance level >95%. In the next step we analyzed direct links between wheat prices and cosmic ray cycle variations in the 17th century, for which both wheat prices and cosmic ray intensity (derived from Be-10 isotope data) are available. We show that for all seven solar activity minimums (cosmic ray intensity maximums) the observed prices were higher than prices for the seven intervals of maximal solar activity (100% sign correlation). This result, combined with the conclusion of similarity of statistical properties of the price and solar activity extremes, can be considered as direct evidence of a causal connection between wheat price bursts and solar activity/cosmic ray intensity extremes.
NASA Astrophysics Data System (ADS)
Koprowski, M. P.; Dunlop, J. S.; Michałowski, M. J.; Coppin, K. E. K.; Geach, J. E.; McLure, R. J.; Scott, D.; van der Werf, P. P.
2017-11-01
We present a new measurement of the evolving galaxy far-IR luminosity function (LF) extending out to redshifts z ≃ 5, with resulting implications for the level of dust-obscured star formation density in the young Universe. To achieve this, we have exploited recent advances in sub-mm/mm imaging with SCUBA-2 on the James Clerk Maxwell Telescope and the Atacama Large Millimeter/Submillimeter Array, which together provide unconfused imaging with sufficient dynamic range to provide meaningful coverage of the luminosity-redshift plane out to z > 4. Our results support previous indications that the faint-end slope of the far-IR LF is sufficiently flat that comoving luminosity density is dominated by bright objects (≃L*). However, we find that the number density/luminosity of such sources at high redshifts has been severely overestimated by studies that have attempted to push the highly confused Herschel SPIRE surveys beyond z ≃ 2. Consequently, we confirm recent reports that cosmic star formation density is dominated by UV-visible star formation at z > 4. Using both direct (1/Vmax) and maximum likelihood determinations of the LF, we find that its high-redshift evolution is well characterized by continued positive luminosity evolution coupled with negative density evolution (with increasing redshift). This explains why bright sub-mm sources continue to be found at z > 5, even though their integrated contribution to cosmic star formation density at such early times is very small. The evolution of the far-IR galaxy LF thus appears similar in form to that already established for active galactic nuclei, possibly reflecting a similar dependence on the growth of galaxy mass.
NASA Astrophysics Data System (ADS)
Uhlemann, C.; Codis, S.; Hahn, O.; Pichon, C.; Bernardeau, F.
2017-08-01
The analytical formalism to obtain the probability distribution functions (PDFs) of spherically averaged cosmic densities and velocity divergences in the mildly non-linear regime is presented. A large-deviation principle is applied to these cosmic fields, assuming their most likely dynamics in spheres is set by the spherical collapse model. We validate our analytical results using state-of-the-art dark matter simulations with a phase-space-resolved velocity field, finding 2 per cent level agreement for a wide range of velocity divergences and densities in the mildly non-linear regime (˜10 Mpc h-1 at redshift zero), usually inaccessible to perturbation theory. From the joint PDF of densities and velocity divergences measured in two concentric spheres, we extract with the same accuracy velocity profiles and the conditional velocity PDF subject to a given over/underdensity, which are of interest for understanding the non-linear evolution of velocity flows. Both PDFs are used to build a simple but accurate maximum likelihood estimator for the redshift evolution of the variance of both the density and velocity divergence fields, which has smaller relative errors than the sample variances when non-linearities appear. Given the dependence of the velocity divergence on the growth rate, there is a significant gain in using the full knowledge of both PDFs to derive constraints on the equation of state of dark energy. Thanks to the insensitivity of the velocity divergence to bias, its PDF can be used to obtain unbiased constraints on the growth of structures (σ8, f), or it can be combined with the galaxy density PDF to extract bias parameters.
Influence of Parallel Dark Matter Sectors on Big Bang Nucleosynthesis
NASA Astrophysics Data System (ADS)
Challa, Venkata Sai Sreeharsha
Big Bang Nucleosynthesis (BBN) is a phenomenological theory that describes the synthesis of light nuclei during the first few seconds of cosmic time in the primordial universe. The twelve nuclear reactions in the first few seconds of cosmic history are constrained by factors such as the baryon-to-photon ratio, the number of neutrino families, and present-day element abundances. The belief that the expansion of the universe must be slowed down by gravity was overturned by the recent observation of an accelerated expansion of the universe. The Friedmann equations, which describe the cosmic dynamics, need to be revised to account for the existence of dark matter, another recent astronomical observation. The effects of multiple parallel universes of dark matter (dark sectors) on the accelerated expansion of the universe are studied. Collectively, these additional effects lead to a new cosmological model. We developed a numerical BBN code to address the effects of such dark sectors on the abundances of all the light elements, and studied the effect of the degrees of freedom of dark matter in the early universe on the primordial abundances of light elements. The predicted abundances of light elements are compared with observed constraints to obtain bounds on the number of dark sectors, NDM. Comparison of the obtained results with observations of the BBN epoch shows that the number of dark matter sectors is only loosely constrained, and that the dark matter sectors are colder than the ordinary matter sectors. We also verified that the existence of parallel dark matter sectors with colder temperatures does not affect the constraints set by observations on the number of neutrino families, Nnu.
Love, Jeffrey J.; Rigler, E. Joshua; Pulkkinen, Antti; Riley, Pete
2015-01-01
An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to −Dst storm-time maxima for the years 1957-2012; bootstrap analysis is used to establish confidence limits on the forecasts. Both methods provide fits that are reasonably consistent with the data, and both provide fits superior to those obtainable with a power-law function. In general, the maximum-likelihood method provides forecasts having tighter confidence intervals than those provided by weighted least-squares. From extrapolation of the maximum-likelihood fits: a magnetic storm with intensity exceeding that of the 1859 Carrington event, −Dst ≥ 850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42, 2.41] times per century; a 100-yr magnetic storm is identified as having −Dst ≥ 880 nT (greater than Carrington), with a wide 95% confidence interval of [490, 1187] nT.
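The fitting strategy is straightforward to reproduce in outline. The sketch below uses synthetic −Dst values (so the numbers will not match the paper's): a maximum-likelihood log-normal fit followed by extrapolation to the rate of Carrington-class storms.

```python
# Hedged sketch of the approach: ML log-normal fit to storm maxima, then
# extrapolated exceedance rate. The -Dst values are synthetic stand-ins.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
dst_maxima = rng.lognormal(mean=5.0, sigma=0.6, size=56)   # nT, synthetic

shape, loc, scale = stats.lognorm.fit(dst_maxima, floc=0)  # MLE, pure log-normal
p_exceed = stats.lognorm.sf(850.0, shape, loc=loc, scale=scale)

storms_per_year = len(dst_maxima) / 56.0    # one storm maximum per year here
rate_per_century = 100.0 * storms_per_year * p_exceed
print(f"P(-Dst >= 850 nT) = {p_exceed:.2e}")
print(f"Carrington-class storms per century: {rate_per_century:.2f}")
```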
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stroman, Thomas; Pohl, Martin; Niemiec, Jacek
2012-02-10
There is an observational correlation between astrophysical shocks and nonthermal particle distributions extending to high energies. As a first step toward investigating the possible feedback of these particles on the shock at the microscopic level, we perform particle-in-cell (PIC) simulations of a simplified environment consisting of uniform, interpenetrating plasmas, both with and without an additional population of cosmic rays. We vary the relative density of the counterstreaming plasmas, the strength of a homogeneous parallel magnetic field, and the energy density in cosmic rays. We compare the early development of the unstable spectrum for selected configurations without cosmic rays to the growth rates predicted from linear theory, for assurance that the system is well represented by the PIC technique. Within the parameter space explored, we do not detect an unambiguous signature of any cosmic-ray-induced effects on the microscopic instabilities that govern the formation of a shock. We demonstrate that an overly coarse distribution of energetic particles can artificially alter the statistical noise that produces the perturbative seeds of instabilities, and that such effects can be mitigated by increasing the density of computational particles.
Dynamics of pairwise motions in the Cosmic Web
NASA Astrophysics Data System (ADS)
Hellwing, Wojciech A.
2016-10-01
We present results of an analysis of dark matter (DM) pairwise velocity statistics in different Cosmic Web environments. We use the DM velocity and density fields from the Millennium 2 simulation together with the NEXUS+ algorithm to segment the simulation volume into voxels, each uniquely identifying one of four possible environments: nodes, filaments, walls or cosmic voids. We show that the PDFs of the mean infall velocities v12, their spatial dependence, and the perpendicular and parallel velocity dispersions bear a significant signature of the large-scale structure environment in which DM particle pairs are embedded. The pairwise flows are notably colder and have smaller mean magnitude in walls and voids than in the much denser environments of filaments and nodes. We discuss our results, noting that they are consistent with simple theoretical predictions for pairwise motions induced by the gravitational instability mechanism. Our results indicate that the Cosmic Web elements are coherent dynamical entities rather than just temporary geometrical associations. In addition, it should be possible to observationally test various Cosmic Web finding algorithms by segmenting available peculiar velocity data and studying the resulting pairwise velocity statistics.
On the regularity of the covariance matrix of a discretized scalar field on the sphere
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bilbao-Ahedo, J.D.; Barreiro, R.B.; Herranz, D.
2017-02-01
We present a comprehensive study of the regularity of the covariance matrix of a discretized field on the sphere. In a particular situation, the rank of the matrix depends on the number of pixels, the number of spherical harmonics, the symmetries of the pixelization scheme and the presence of a mask. Taking into account the above mentioned components, we provide analytical expressions that constrain the rank of the matrix. They are obtained by expanding the determinant of the covariance matrix as a sum of determinants of matrices made up of spherical harmonics. We investigate these constraints for five different pixelizations that have been used in the context of Cosmic Microwave Background (CMB) data analysis: Cube, Icosahedron, Igloo, GLESP and HEALPix, finding that, at least in the considered cases, the HEALPix pixelization tends to provide a covariance matrix with a rank closer to the maximum expected theoretical value than the other pixelizations. The effect of the propagation of numerical errors in the regularity of the covariance matrix is also studied for different computational precisions, as well as the effect of adding a certain level of noise in order to regularize the matrix. In addition, we investigate the application of the previous results to a particular example that requires the inversion of the covariance matrix: the estimation of the CMB temperature power spectrum through the Quadratic Maximum Likelihood algorithm. Finally, some general considerations in order to achieve a regular covariance matrix are also presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ong, S.G.
1964-01-01
At high altitude (3,130 m) tuberculous mice exposed to cosmic radiation under 10 cm of lead showed a significantly greater mean survival time and a significantly greater number of survivors than tuberculous mice exposed to direct cosmic radiation. Tuberculous mice exposed to cosmic radiation at high altitude under 10 cm of lead also showed a significantly greater mean survival time than tuberculous mice kept at sea level and exposed to direct cosmic radiation or to cosmic radiation under 1, 2, and 10 cm of lead. The correlation analysis shows that a decrease in lung lesions is associated with an increase in survival time. The decrease in lung lesions is associated with an enlargement of the spleen. At high altitude the females showed a significantly greater number of survivors than the males; at sea level no significant difference was observed. On average the females showed a significantly greater number of survivors. The beneficial effect of daylight with ultraviolet light on tuberculous mice was manifested in a lower maximum of mortality and in a significant decrease of lung and spleen lesions.
Apparent cosmic acceleration from Type Ia supernovae
NASA Astrophysics Data System (ADS)
Dam, Lawrence H.; Heinesen, Asta; Wiltshire, David L.
2017-11-01
Parameters that quantify the acceleration of cosmic expansion are conventionally determined within the standard Friedmann-Lemaître-Robertson-Walker (FLRW) model, which fixes spatial curvature to be homogeneous. Generic averages of Einstein's equations in inhomogeneous cosmology lead to models with non-rigidly evolving average spatial curvature, and different parametrizations of apparent cosmic acceleration. The timescape cosmology is a viable example of such a model without dark energy. Using the largest available supernova data set, the JLA catalogue, we find that the timescape model fits the luminosity distance-redshift data with a likelihood that is statistically indistinguishable from the standard spatially flat Λ cold dark matter cosmology by Bayesian comparison. In the timescape case cosmic acceleration is non-zero but has a marginal amplitude, with best-fitting apparent deceleration parameter, q_{0}=-0.043^{+0.004}_{-0.000}. Systematic issues regarding standardization of supernova light curves are analysed. Cuts of data at the statistical homogeneity scale affect light-curve parameter fits independent of cosmology. A cosmological model dependence of empirical changes to the mean colour parameter is also found. Irrespective of which model ultimately fits better, we argue that as a competitive model with a non-FLRW expansion history, the timescape model may prove a useful diagnostic tool for disentangling selection effects and astrophysical systematics from the underlying expansion history.
Silencing, positive selection and parallel evolution: busy history of primate cytochromes C.
Pierron, Denis; Opazo, Juan C; Heiske, Margit; Papper, Zack; Uddin, Monica; Chand, Gopi; Wildman, Derek E; Romero, Roberto; Goodman, Morris; Grossman, Lawrence I
2011-01-01
Cytochrome c (cyt c) participates in two crucial cellular processes, energy production and apoptosis, and unsurprisingly is a highly conserved protein. However, previous studies have reported for the primate lineage (i) loss of the paralogous testis isoform, (ii) an acceleration and then a deceleration of the amino acid replacement rate of the cyt c somatic isoform, and (iii) atypical biochemical behavior of human cyt c. To gain insight into the cause of these major evolutionary events, we have retraced the history of cyt c loci among primates. For testis cyt c, all primate sequences examined carry the same nonsense mutation, which suggests that silencing occurred before the primates diversified. For somatic cyt c, maximum parsimony, maximum likelihood, and Bayesian phylogenetic analyses yielded the same tree topology. The evolutionary analyses show that a fast accumulation of non-synonymous mutations (suggesting positive selection) occurred specifically on the anthropoid lineage root and then continued in parallel on the early catarrhini and platyrrhini stems. Analysis of evolutionary changes using the 3D structure suggests they are focused on the respiratory chain rather than on apoptosis or other cyt c functions. In agreement with previous biochemical studies, our results suggest that silencing of the cyt c testis isoform could be linked with the decrease of primate reproduction rate. Finally, the evolution of cyt c in the two sister anthropoid groups leads us to propose that somatic cyt c evolution may be related both to COX evolution and to the convergent brain and body mass enlargement in these two anthropoid clades.
Limits on deeply penetrating particles in the 10^17 eV cosmic ray flux
NASA Technical Reports Server (NTRS)
Baltrusaitis, R. M.; Cassiday, G. L.; Cooper, R.; Elbert, J. W.; Gerhardy, J. W.; Loh, P. R.; Mizumoto, Y.; Sokolsky, P.; Sommers, P.; Steck, D.
1985-01-01
Deeply penetrating particles in the 10^17 eV cosmic ray flux were investigated. No such events were found in 8.2×10^6 s of running time. Limits were set on the following: quark matter in the primary cosmic ray flux; long-lived, weakly interacting particles produced in p-air collisions; and the astrophysical neutrino flux. In particular, the neutrino flux limit at 10^17 eV implies that z, the redshift of maximum activity, is 10 in the model of Hill and Schramm.
Distortion of the cosmic background radiation by superconducting strings
NASA Technical Reports Server (NTRS)
Ostriker, J. P.; Thompson, C.
1987-01-01
Superconducting cosmic strings can be significant energy sources, keeping the universe ionized past the commonly assumed epoch of recombination. As a result, the spectrum of the cosmic background radiation is distorted in the presence of heated primordial gas via the Sunyaev-Zel'dovich effect. This distortion can be relatively large: the Compton y parameter attains a maximum in the range 0.001-0.005, with these values depending on the mass scale of the string. A significant contribution to y comes from loops decaying at high redshift, when the universe is optically thick to Thomson scattering. Moreover, the isotropic spectral distortion is large compared to fluctuations at all angular scales.
Maximum likelihood convolutional decoding (MCD) performance due to system losses
NASA Technical Reports Server (NTRS)
Webster, L.
1976-01-01
A model for predicting the computational performance of a maximum likelihood convolutional decoder (MCD) operating in a noisy carrier reference environment is described. This model is used to develop a subroutine that will be utilized by the Telemetry Analysis Program to compute the MCD bit error rate. When this computational model is averaged over noisy reference phase errors using a high-rate interpolation scheme, the results are found to agree quite favorably with experimental measurements.
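The decoder modeled above performs maximum likelihood (Viterbi) decoding of a convolutional code. As a toy illustration of that principle only, not of the MCD hardware or its noisy-reference model, the sketch below decodes a rate-1/2, constraint-length-3 code by finding the minimum-Hamming-distance trellis path.

```python
# Hard-decision Viterbi decoding of a rate-1/2, K=3 convolutional code
# (generators 7 and 5 octal): an illustrative sketch of ML decoding.
import numpy as np

G = [0b111, 0b101]  # generator polynomials

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state                 # newest bit plus 2-bit state
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1
    return out

def viterbi(received):
    metric = [0, np.inf, np.inf, np.inf]       # start in the all-zero state
    paths = [[], [], [], []]
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metric = [np.inf] * 4
        new_paths = [None] * 4
        for s in range(4):
            if metric[s] == np.inf:
                continue                       # state not yet reachable
            for b in (0, 1):
                reg = (b << 2) | s
                nxt = reg >> 1
                expected = [bin(reg & g).count("1") & 1 for g in G]
                m = metric[s] + sum(x != y for x, y in zip(r, expected))
                if m < new_metric[nxt]:        # keep the survivor path
                    new_metric[nxt], new_paths[nxt] = m, paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[int(np.argmin(metric))]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
coded = encode(msg)
coded[3] ^= 1                                  # inject a single channel error
print(viterbi(coded) == msg)                   # True: the ML path corrects it
```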
Maximum Likelihood Shift Estimation Using High Resolution Polarimetric SAR Clutter Model
NASA Astrophysics Data System (ADS)
Harant, Olivier; Bombrun, Lionel; Vasile, Gabriel; Ferro-Famil, Laurent; Gay, Michel
2011-03-01
This paper deals with a Maximum Likelihood (ML) shift estimation method in the context of High Resolution (HR) Polarimetric SAR (PolSAR) clutter. Texture modeling is exposed and the generalized ML texture tracking method is extended to the merging of various sensors. Some results on displacement estimation on the Argentiere glacier in the Mont Blanc massif using dual-pol TerraSAR-X (TSX) and quad-pol RADARSAT-2 (RS2) sensors are finally discussed.
Maximum likelihood estimates, from censored data, for mixed-Weibull distributions
NASA Astrophysics Data System (ADS)
Jiang, Siyuan; Kececioglu, Dimitri
1992-06-01
A new algorithm for estimating the parameters of mixed-Weibull distributions from censored data is presented. The algorithm follows the principle of maximum likelihood estimate (MLE) through the expectation and maximization (EM) algorithm, and it is derived for both postmortem and nonpostmortem time-to-failure data. It is concluded that the concept of the EM algorithm is easy to understand and apply (only elementary statistics and calculus are required). The log-likelihood function cannot decrease after an EM sequence; this important feature was observed in all of the numerical calculations. The MLEs of the nonpostmortem data were obtained successfully for mixed-Weibull distributions with up to 14 parameters in a 5-subpopulation, mixed-Weibull distribution. Numerical examples indicate that some of the log-likelihood functions of the mixed-Weibull distributions have multiple local maxima; therefore, the algorithm should start at several initial guesses of the parameter set.
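A compact sketch of the EM principle described above follows, assuming complete (uncensored) failure times and a two-subpopulation mixture; the paper's algorithm additionally handles censored and postmortem data. Each M-step numerically maximizes a responsibility-weighted Weibull log-likelihood, so the log-likelihood cannot decrease between iterations.

```python
# EM for a two-component Weibull mixture: a minimal sketch under the
# assumption of complete, uncensored failure times.
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(2)
t = np.concatenate([stats.weibull_min.rvs(1.5, scale=100, size=300, random_state=rng),
                    stats.weibull_min.rvs(4.0, scale=400, size=200, random_state=rng)])

def weighted_mle(t, w, x0):
    # maximize the responsibility-weighted Weibull log-likelihood
    nll = lambda p: np.inf if min(p) <= 0 else \
        -np.sum(w * stats.weibull_min.logpdf(t, p[0], scale=p[1]))
    return minimize(nll, x0, method="Nelder-Mead").x

params = np.array([[1.0, 150.0], [3.0, 300.0]])  # (shape, scale) initial guesses
mix = np.array([0.5, 0.5])
for _ in range(50):                              # EM iterations
    pdfs = np.array([mix[k] * stats.weibull_min.pdf(t, params[k][0], scale=params[k][1])
                     for k in range(2)])
    resp = pdfs / pdfs.sum(axis=0)               # E-step: responsibilities
    mix = resp.mean(axis=1)                      # M-step: mixing proportions
    params = np.array([weighted_mle(t, resp[k], params[k]) for k in range(2)])

print("mixing proportions:", np.round(mix, 2))
print("shape/scale per subpopulation:", np.round(params, 1))
```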
Li, Min; Tian, Ying; Zhao, Ying; Bu, Wenjun
2012-01-01
Heteroptera, or true bugs, are the largest, morphologically diverse and economically important group of insects with incomplete metamorphosis. However, the phylogenetic relationships within Heteroptera are still in dispute and most of the previous studies were based on morphological characters or with single gene (partial or whole 18S rDNA). Besides, so far, divergence time estimates for Heteroptera totally rely on the fossil record, while no studies have been performed on molecular divergence rates. Here, for the first time, we used maximum parsimony (MP), maximum likelihood (ML) and Bayesian inference (BI) with multiple genes (18S rDNA, 28S rDNA, 16S rDNA and COI) to estimate phylogenetic relationships among the infraorders, and meanwhile, the Penalized Likelihood (r8s) and Bayesian (BEAST) molecular dating methods were employed to estimate divergence time of higher taxa of this suborder. Major results of the present study included: Nepomorpha was placed as the most basal clade in all six trees (MP trees, ML trees and Bayesian trees of nuclear gene data and four-gene combined data, respectively) with full support values. The sister-group relationship of Cimicomorpha and Pentatomomorpha was also strongly supported. Nepomorpha originated in early Triassic and the other six infraorders originated in a very short period of time in middle Triassic. Cimicomorpha and Pentatomomorpha underwent a radiation at family level in Cretaceous, paralleling the proliferation of the flowering plants. Our results indicated that the higher-group radiations within hemimetabolous Heteroptera were simultaneously with those of holometabolous Coleoptera and Diptera which took place in the Triassic. While the aquatic habitat was colonized by Nepomorpha already in the Triassic, the Gerromorpha independently adapted to the semi-aquatic habitat in the Early Jurassic.
Self-energy and self-force in the space-time of a thick cosmic string
NASA Astrophysics Data System (ADS)
Khusnutdinov, N. R.; Bezerra, V. B.
2001-10-01
We calculate the self-energy and self-force for an electrically charged particle at rest in the background of the Gott-Hiscock cosmic string space-time. We find a general expression for the self-energy, which is expressed in terms of the S matrix of the scattering problem. The self-energy falls off monotonically outward from the string's center, with its maximum at the origin of the string. The self-force is repulsive for an arbitrary position of the particle; it tends to zero at the string's center and far from the string, and it reaches its maximum value at the string's surface. Plots of the numerical calculations of the self-energy and self-force are shown.
NASA Technical Reports Server (NTRS)
Van Buren, Dave
1986-01-01
Equivalent width data from Copernicus and IUE appear to have an exponential, rather than a Gaussian distribution of errors. This is probably because there is one dominant source of error: the assignment of the background continuum shape. The maximum likelihood method of parameter estimation is presented for the case of exponential statistics, in enough generality for application to many problems. The method is applied to global fitting of Si II, Fe II, and Mn II oscillator strengths and interstellar gas parameters along many lines of sight. The new values agree in general with previous determinations but are usually much more tightly constrained. Finally, it is shown that care must be taken in deriving acceptable regions of parameter space because the probability contours are not generally ellipses whose axes are parallel to the coordinate axes.
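The practical consequence of exponential error statistics is that maximum likelihood fitting minimizes the sum of absolute residuals rather than the sum of squares. The sketch below illustrates this on a synthetic straight-line problem, not the oscillator-strength fits of the paper.

```python
# Sketch: under two-sided exponential (Laplace) errors, maximum likelihood
# fitting minimizes the sum of absolute residuals (L1), not squares (L2).
# Synthetic straight-line data stand in for the equivalent-width fits.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.laplace(scale=0.5, size=x.size)

l1 = lambda p: np.sum(np.abs(y - p[0] * x - p[1]))   # -log L, Laplace errors
l2 = lambda p: np.sum((y - p[0] * x - p[1]) ** 2)    # -log L, Gaussian errors

fit_l1 = minimize(l1, [1.0, 0.0], method="Nelder-Mead").x
fit_l2 = minimize(l2, [1.0, 0.0], method="Nelder-Mead").x
print("Laplace-ML (L1) slope, intercept:", np.round(fit_l1, 3))
print("Gaussian-ML (L2) slope, intercept:", np.round(fit_l2, 3))
```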
Model-independent partial wave analysis using a massively-parallel fitting framework
NASA Astrophysics Data System (ADS)
Sun, L.; Aoude, R.; dos Reis, A. C.; Sokoloff, M.
2017-10-01
The functionality of GooFit, a GPU-friendly framework for doing maximum-likelihood fits, has been extended to extract model-independent S-wave amplitudes in three-body decays such as D+ → h+h+h-. A full amplitude analysis is done where the magnitudes and phases of the S-wave amplitudes are anchored at a finite number of m²(h+h-) control points, and a cubic spline is used to interpolate between these points. The amplitudes for P-wave and D-wave intermediate states are modeled as spin-dependent Breit-Wigner resonances. GooFit uses the Thrust library, with a CUDA backend for NVIDIA GPUs and an OpenMP backend for threads with conventional CPUs. Performance on a variety of platforms is compared. Executing on systems with GPUs is typically a few hundred times faster than executing the same algorithm on a single CPU.
Simple Penalties on Maximum-Likelihood Estimates of Genetic Parameters to Reduce Sampling Variation
Meyer, Karin
2016-01-01
Multivariate estimates of genetic parameters are subject to substantial sampling variation, especially for smaller data sets and more than a few traits. A simple modification of standard, maximum-likelihood procedures for multivariate analyses to estimate genetic covariances is described, which can improve estimates by substantially reducing their sampling variances. This is achieved by maximizing the likelihood subject to a penalty. Borrowing from Bayesian principles, we propose a mild, default penalty—derived assuming a Beta distribution of scale-free functions of the covariance components to be estimated—rather than laboriously attempting to determine the stringency of penalization from the data. An extensive simulation study is presented, demonstrating that such penalties can yield very worthwhile reductions in loss, i.e., the difference from population values, for a wide range of scenarios and without distorting estimates of phenotypic covariances. Moreover, mild default penalties tend not to increase loss in difficult cases and, on average, achieve reductions in loss of similar magnitude to computationally demanding schemes to optimize the degree of penalization. Pertinent details required for the adaptation of standard algorithms to locate the maximum of the likelihood function are outlined. PMID:27317681
Maximum Likelihood Estimations and EM Algorithms with Length-biased Data
Qin, Jing; Ning, Jing; Liu, Hao; Shen, Yu
2012-01-01
Length-biased sampling has been well recognized in economics, industrial reliability, etiology applications, and epidemiological, genetic and cancer screening studies. Length-biased right-censored data have a unique data structure different from traditional survival data. The nonparametric and semiparametric estimation and inference methods for traditional survival data are not directly applicable to length-biased right-censored data. We propose new expectation-maximization algorithms for estimation based on full likelihoods involving infinite-dimensional parameters under three settings for length-biased data: estimating the nonparametric distribution function, estimating the nonparametric hazard function under an increasing failure rate constraint, and jointly estimating the baseline hazard function and the covariate coefficients under the Cox proportional hazards model. Extensive empirical simulation studies show that the maximum likelihood estimators perform well with moderate sample sizes and lead to more efficient estimators compared to the estimating equation approaches. The proposed estimates are also more robust to various right-censoring mechanisms. We prove the strong consistency properties of the estimators, and establish the asymptotic normality of the semiparametric maximum likelihood estimators under the Cox model using modern empirical process theory. We apply the proposed methods to a prevalent cohort medical study. Supplemental materials are available online. PMID:22323840
Using the Extended Parallel Process Model to Examine Teachers' Likelihood of Intervening in Bullying
ERIC Educational Resources Information Center
Duong, Jeffrey; Bradshaw, Catherine P.
2013-01-01
Background: Teachers play a critical role in protecting students from harm in schools, but little is known about their attitudes toward addressing problems like bullying. Previous studies have rarely used theoretical frameworks, making it difficult to advance this area of research. Using the Extended Parallel Process Model (EPPM), we examined the…
Models and analysis for multivariate failure time data
NASA Astrophysics Data System (ADS)
Shih, Joanna Huang
The goal of this research is to develop and investigate models and analytic methods for multivariate failure time data. We compare models in terms of direct modeling of the margins, flexibility of dependency structure, local vs. global measures of association, and ease of implementation. In particular, we study copula models, and models produced by right neutral cumulative hazard functions and right neutral hazard functions. We examine the changes of association over time for families of bivariate distributions induced from these models by displaying their density contour plots, conditional density plots, the correlation curves of Doksum et al., and the local cross ratios of Oakes. Bivariate distributions with the same margins can exhibit quite different dependency structures. In addition to modeling, we study estimation procedures. For copula models, we investigate three estimation procedures. The first is full maximum likelihood. The second is two-stage maximum likelihood: at stage 1, we estimate the parameters in the margins by maximizing the marginal likelihood; at stage 2, we estimate the dependency structure with the margins fixed at their estimates. The third is two-stage partially parametric maximum likelihood, similar to the second procedure except that the margins are estimated by the Kaplan-Meier estimate. We derive asymptotic properties for these three estimation procedures and compare their efficiency by Monte Carlo simulations and direct computations. For models produced by right neutral cumulative hazards and right neutral hazards, we derive the likelihood and investigate the properties of the maximum likelihood estimates. Finally, we develop goodness-of-fit tests for the dependency structure in the copula models. We derive a test statistic and its asymptotic properties based on the test of homogeneity of Zelterman and Chen (1988), and a graphical diagnostic procedure based on the empirical Bayes approach. We study the performance of these two methods using actual and computer-generated data.
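The two-stage idea can be sketched concretely. The example below makes simplifying assumptions not in the thesis setting (a Clayton copula, exponential margins, and no censoring): the margins are fitted by ML first, then the copula dependence parameter is fitted with the margins held fixed.

```python
# Two-stage ("margins first, dependence second") sketch under simplifying
# assumptions: Clayton copula, exponential margins, no censoring.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(4)
n, theta = 2000, 2.0                      # theta: true Clayton dependence
u = rng.uniform(size=n)
w = rng.uniform(size=n)
v = (u ** -theta * (w ** (-theta / (1 + theta)) - 1) + 1) ** (-1 / theta)
x, y = -np.log(1 - u) / 0.1, -np.log(1 - v) / 0.05   # exponential margins

# Stage 1: ML estimates of the marginal exponential rates
lam_x, lam_y = 1 / x.mean(), 1 / y.mean()
uh, vh = 1 - np.exp(-lam_x * x), 1 - np.exp(-lam_y * y)

# Stage 2: maximize the Clayton copula log-density with margins held fixed
def neg_loglik(th):
    s = uh ** -th + vh ** -th - 1
    return -np.sum(np.log(1 + th) - (1 + th) * (np.log(uh) + np.log(vh))
                   - (2 + 1 / th) * np.log(s))

theta_hat = minimize_scalar(neg_loglik, bounds=(0.05, 10), method="bounded").x
print(f"theta_hat = {theta_hat:.2f} (true {theta})")
```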
NASA Astrophysics Data System (ADS)
Simon, P.; Semboloni, E.; van Waerbeke, L.; Hoekstra, H.; Erben, T.; Fu, L.; Harnois-Déraps, J.; Heymans, C.; Hildebrandt, H.; Kilbinger, M.; Kitching, T. D.; Miller, L.; Schrabback, T.
2015-05-01
We study the correlations of the shear signal between triplets of sources in the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS) to probe cosmological parameters via the matter bispectrum. In contrast to previous studies, we adopt a non-Gaussian model of the data likelihood which is supported by our simulations of the survey. We find that for state-of-the-art surveys, similar to CFHTLenS, a Gaussian likelihood analysis is a reasonable approximation, albeit small differences in the parameter constraints are already visible. For future surveys we expect that a Gaussian model becomes inaccurate. Our algorithm for a refined non-Gaussian analysis and data compression is then of great utility, especially because it is not much more elaborate if simulated data are available. Applying this algorithm to the third-order correlations of shear alone in a blind analysis, we find good agreement with the standard cosmological model: Σ_8 = σ_8(Ω_m/0.27)^{0.64} = 0.79^{+0.08}_{-0.11} for a flat Λ cold dark matter cosmology with h = 0.7 ± 0.04 (68 per cent credible interval). Nevertheless our models provide only moderately good fits, as indicated by χ²/dof = 2.9, including a 20 per cent rms uncertainty in the predicted signal amplitude. The models cannot explain a signal drop on scales around 15 arcmin, which may be caused by systematics. It is unclear whether the discrepancy can be fully explained by residual point spread function systematics, of which we find evidence at least on scales of a few arcmin. Therefore we need a better understanding of higher order correlations of cosmic shear and their systematics to confidently apply them as cosmological probes.
Eddington's demon: inferring galaxy mass functions and other distributions from uncertain data
NASA Astrophysics Data System (ADS)
Obreschkow, D.; Murray, S. G.; Robotham, A. S. G.; Westmeier, T.
2018-03-01
We present a general modified maximum likelihood (MML) method for inferring generative distribution functions from uncertain and biased data. The MML estimator is identical to, but easier and many orders of magnitude faster to compute than the solution of the exact Bayesian hierarchical modelling of all measurement errors. As a key application, this method can accurately recover the mass function (MF) of galaxies, while simultaneously dealing with observational uncertainties (Eddington bias), complex selection functions and unknown cosmic large-scale structure. The MML method is free of binning and natively accounts for small number statistics and non-detections. Its fast implementation in the R-package dftools is equally applicable to other objects, such as haloes, groups, and clusters, as well as observables other than mass. The formalism readily extends to multidimensional distribution functions, e.g. a Choloniewski function for the galaxy mass-angular momentum distribution, also handled by dftools. The code provides uncertainties and covariances for the fitted model parameters and approximate Bayesian evidences. We use numerous mock surveys to illustrate and test the MML method, as well as to emphasize the necessity of accounting for observational uncertainties in MFs of modern galaxy surveys.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bao, C.; Hanany, S.; Baccigalupi, C.
We extend a general maximum likelihood foreground estimation for cosmic microwave background (CMB) polarization data to include estimation of instrumental systematic effects. We focus on two particular effects: frequency band measurement uncertainty and instrumentally induced frequency-dependent polarization rotation. We assess the bias induced on the estimation of the B-mode polarization signal by these two systematic effects in the presence of instrumental noise and uncertainties in the polarization and spectral index of Galactic dust. Degeneracies between uncertainties in the band and polarization angle calibration measurements and in the dust spectral index and polarization increase the uncertainty in the extracted CMB B-mode power, and may give rise to a biased estimate. We provide a quantitative assessment of the potential bias and increased uncertainty in an example experimental configuration. For example, we find that with 10% polarized dust, a tensor to scalar ratio of r = 0.05, and the instrumental configuration of the E and B experiment balloon payload, the estimated CMB B-mode power spectrum is recovered without bias when the frequency band measurement has 5% uncertainty or less, and the polarization angle calibration has an uncertainty of up to 4°.
X-ray Observations of Cosmic Ray Acceleration
NASA Technical Reports Server (NTRS)
Petre, Robert
2012-01-01
Since the discovery of cosmic rays, detection of their sources has remained elusive. A major breakthrough has come through the identification of synchrotron X-rays from the shocks of supernova remnants through imaging and spectroscopic observations by the most recent generation of X-ray observatories. This radiation is most likely produced by electrons accelerated to relativistic energies, and thus has offered the first, albeit indirect, observational evidence that diffusive shock acceleration in supernova remnants produces cosmic rays up to TeV energies, possibly as high as the "knee" in the cosmic ray spectrum. X-ray observations have provided information about the maximum energy to which these shocks accelerate electrons, as well as indirect evidence of proton acceleration. Shock morphologies measured in X-rays have indicated that a substantial fraction of the shock energy can be diverted into particle acceleration. This presentation will summarize what we have learned about cosmic ray acceleration from X-ray observations of supernova remnants over the past two decades.
NASA Astrophysics Data System (ADS)
Sarkar, Ritabrata; Chakrabarti, Sandip K.; Pal, Partha Sarathi; Bhowmick, Debashis; Bhattacharya, Arnab
2017-09-01
Cosmic ray flux in our planetary system is primarily modulated by solar activity. The radiation effects of cosmic rays on the Earth depend strongly on latitude, owing to the variation of the geomagnetic field strength. To study these effects we carried out a series of measurements of the radiation characteristics of the atmosphere due to cosmic rays from various places (geomagnetic latitude: ∼14.50°N) in West Bengal, India, located near the Tropic of Cancer, over several years (2012-2016), in particular covering the solar maximum of the 24th solar cycle. We present low-energy (15-140 keV) secondary radiation measurements extending from the ground to near space (∼40 km) using a scintillator detector on board rubber weather balloons. We also focus on the cosmic ray intensity at the Regener-Pfotzer maximum and find a strong anti-correlation between this intensity and solar activity, even at low geomagnetic latitudes.
NASA Astrophysics Data System (ADS)
Agarwal Mishra, Rekha; Mishra, Rajesh Kumar
2016-07-01
Cosmic rays propagating to and inside the heliosphere encounter an outward-moving solar wind with cyclic magnetic field fluctuations and turbulence, causing convection and diffusion in the heliosphere. Cosmic ray counts from ground-based neutron monitors at different cut-off rigidities show intensity changes that are anti-correlated with sunspot numbers. The particles also lose energy as they propagate towards the Earth and experience various types of modulation tied to different solar activity indices. In this work, we study the first three harmonics of the cosmic ray intensity on geomagnetically quiet days over the period 1965-2014 for the Beijing, Moscow and Tokyo neutron monitoring stations, located at different cut-off rigidities. On quiet days, the amplitude of the first harmonic remains higher at low cut-off rigidity than at high cut-off rigidity. The diurnal amplitude decreases significantly during solar activity minimum years. The diurnal time of maximum shifts significantly to an earlier time relative to the co-rotational direction for the different cut-off rigidities. The time of maximum for the first harmonic shifts towards later hours, and for the second harmonic towards earlier hours, at the low cut-off rigidity station compared with the high cut-off rigidity station on quiet days. The amplitude of the second/third harmonics shows a good positive correlation with solar wind velocity, while the other quantities (amplitude and phase) show no significant correlation on quiet days. The amplitude and direction of the anisotropy on quiet days show no significant dependence on high-speed solar wind streams for these neutron monitoring stations with different cut-off rigidity thresholds. Keywords: cosmic ray, cut-off rigidity, quiet days, harmonics, amplitude, phase.
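The harmonic quantities analyzed above (the amplitude and time of maximum of the diurnal variation) can be extracted from hourly counts with a discrete Fourier transform, as in the sketch below; the counts are synthetic placeholders, not neutron monitor data.

```python
# Sketch: first-harmonic amplitude and local time of maximum from 24 hourly
# values. The synthetic signal peaks at hour 15 with amplitude 0.4.
import numpy as np

rng = np.random.default_rng(5)
hours = np.arange(24)
counts = 100 + 0.4 * np.cos(2 * np.pi * (hours - 15) / 24) + rng.normal(0, 0.05, 24)

c1 = np.fft.rfft(counts - counts.mean())[1]      # first-harmonic coefficient
amplitude = 2 * np.abs(c1) / 24
t_max = (-np.angle(c1)) * 24 / (2 * np.pi) % 24  # hour of the diurnal maximum
print(f"amplitude = {amplitude:.3f}, time of maximum = {t_max:.1f} h")
```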
Vector Antenna and Maximum Likelihood Imaging for Radio Astronomy
2016-03-05
Knapp, Mary; Robey, Frank; Volz, Ryan; Lind, Frank; Fenn, Alan; Morris, Alex; Silver, Mark; Klein, Sarah
Radio astronomy using frequencies less than ~100 MHz provides a window into non-thermal processes in objects ranging from planets … observational astronomy. Ground-based observatories including LOFAR [1], LWA [2], [3], MWA [4], and the proposed SKA-Low [5], [6] are improving access to…
A maximum pseudo-profile likelihood estimator for the Cox model under length-biased sampling
Huang, Chiung-Yu; Qin, Jing; Follmann, Dean A.
2012-01-01
This paper considers semiparametric estimation of the Cox proportional hazards model for right-censored and length-biased data arising from prevalent sampling. To exploit the special structure of length-biased sampling, we propose a maximum pseudo-profile likelihood estimator, which can handle time-dependent covariates and is consistent under covariate-dependent censoring. Simulation studies show that the proposed estimator is more efficient than its competitors. A data analysis illustrates the methods and theory. PMID:23843659
The effect of lossy image compression on image classification
NASA Technical Reports Server (NTRS)
Paola, Justin D.; Schowengerdt, Robert A.
1995-01-01
We have classified four different images, under various levels of JPEG compression, using the following classification algorithms: minimum-distance, maximum-likelihood, and neural network. The training site accuracy and percent difference from the original classification were tabulated for each image compression level, with maximum-likelihood showing the poorest results. In general, as compression ratio increased, the classification retained its overall appearance, but much of the pixel-to-pixel detail was eliminated. We also examined the effect of compression on spatial pattern detection using a neural network.
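For reference, the maximum-likelihood classifier used in such comparisons models each class as a multivariate Gaussian fitted to training pixels and assigns each pixel to the class of highest likelihood. The sketch below uses invented band statistics, not the images of the study.

```python
# Minimal sketch of the standard maximum-likelihood classifier: one Gaussian
# per class, fitted to training pixels; each pixel takes the most likely class.
import numpy as np
from scipy.stats import multivariate_normal

def train(training_sets):
    """training_sets: list of (n_i, n_bands) arrays, one per class."""
    return [(X.mean(axis=0), np.cov(X, rowvar=False)) for X in training_sets]

def classify(pixels, classes):
    """pixels: (n, n_bands); returns the most likely class index per pixel."""
    logp = np.column_stack([multivariate_normal.logpdf(pixels, m, S)
                            for m, S in classes])
    return logp.argmax(axis=1)

rng = np.random.default_rng(6)
water = rng.normal([30, 20, 10], 3, size=(100, 3))   # synthetic training pixels
veg = rng.normal([40, 60, 35], 5, size=(100, 3))
classes = train([water, veg])
print(classify(np.array([[31.0, 21.0, 11.0], [39.0, 58.0, 36.0]]), classes))
```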
Tentative Identification of Interstellar Dust in the Magnetic Wall of the Heliosphere
NASA Astrophysics Data System (ADS)
Frisch, Priscilla C.
2005-10-01
Observations of the weak polarization of light from nearby stars, reported by Tinbergen, are consistent with polarization by small (radius <0.14 μm) interstellar dust grains entrained in the magnetic wall of the heliosphere. The region of maximum polarization is toward ecliptic coordinates (λ, β) ~ (295°, 0°), corresponding to (l, b) = (20°, -21°). The direction of maximum polarization is offset along the ecliptic longitude by ~35° from the nose of the heliosphere and extends to low ecliptic latitudes. An offset is also seen between the region with the best-aligned dust grains, λ ~ 281°-330°, and the upwind direction of the undeflected large grains, λ ~ 259°, β ~ +8°, which are observed by Ulysses and Galileo to be flowing into the heliosphere. In the aligned-grain region, the strength of polarization anticorrelates with ecliptic latitude, indicating that the magnetic wall is predominantly at negative ecliptic latitudes. An extension of the magnetic wall to β < 0°, formed by the interstellar magnetic field BIS draped over the heliosphere, is consistent with predictions by Linde (1998). A consistent interpretation follows if the maximum-polarization region traces the heliosphere magnetic wall in a direction approximately perpendicular to BIS, while the region of best-aligned dust samples the region where BIS drapes smoothly over the heliosphere with maximum compression. These data are consistent with BIS being tilted by 60° with respect to the ecliptic plane and parallel to the Galactic plane. Interstellar dust grains captured in the heliosheath may also introduce a weak, but important, large-scale contaminant for the cosmic microwave background signal, with a symmetry consistent with the relative tilts of BIS and the ecliptic.
NASA Astrophysics Data System (ADS)
Sierra-Porta, D.
2018-07-01
In the present paper a systematic study is carried out to validate the similarity or co-variability between the daily terrestrial cosmic-ray intensity and three parameters of solar activity: the sunspot number, the flare index observed in the solar corona, and the Ap index for regular magnetic field variations caused by solar radiation changes. The study covers a period including three solar cycles, starting with cycle 21 (year 1976) and ending with cycle 23 (year 2008). A cross-correlation analysis was used to establish patterns and dependences among the variables. The study focused on the time lags between these variables and found maximum negative correlations of CC1 ≈ 0.85, CC2 ≈ 0.75 and CC3 ≈ 0.63, with estimated deviations of 181, 156 and 2 days between the maxima/minima of the peaks, for the cosmic ray intensity regressed against sunspot number, flare index and Ap index, respectively.
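The lag analysis can be sketched with a standard discrete cross-correlation, as below; the two series are synthetic stand-ins constructed with a known delay and anti-correlation, and the sign of the reported lag depends on the correlation convention.

```python
# Sketch: locate the lag of the most negative cross-correlation between two
# standardized daily series. Synthetic data replace the real indices.
import numpy as np

rng = np.random.default_rng(7)
n, lag_true = 3000, 181
base = np.convolve(rng.normal(size=n + lag_true), np.ones(200) / 200, "same")
sunspots = base[lag_true:]                    # proxy solar-activity series
cosmic = -base[:n] + rng.normal(0, 0.01, n)   # anti-correlated, delayed response

a = (sunspots - sunspots.mean()) / sunspots.std()
b = (cosmic - cosmic.mean()) / cosmic.std()
cc = np.correlate(a, b, mode="full") / n
lags = np.arange(-n + 1, n)
best = np.argmin(cc)                          # most negative correlation
print(f"CC = {cc[best]:.2f} at |lag| = {abs(lags[best])} days")
```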
THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures
Theobald, Douglas L.; Wuttke, Deborah S.
2008-01-01
THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. PMID:16777907
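For orientation, the core superposition step that THESEUS generalizes is the least-squares (Kabsch) rotation; the sketch below adds per-atom weights as a crude stand-in for likelihood-based down-weighting of variable regions. THESEUS's full ML treatment, which also models inter-atom correlations, is not reproduced here.

```python
# Weighted least-squares (Kabsch) superposition via SVD: a sketch of the
# baseline step, not THESEUS's ML estimator.
import numpy as np

def weighted_superpose(X, Y, w):
    """Rotate/translate Y onto X; X, Y are (n_atoms, 3), w is (n_atoms,)."""
    w = w / w.sum()
    xc, yc = (w[:, None] * X).sum(0), (w[:, None] * Y).sum(0)
    A, B = X - xc, Y - yc
    U, _, Vt = np.linalg.svd((w[:, None] * A).T @ B)
    d = np.sign(np.linalg.det(U @ Vt))      # guard against improper rotations
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    return (R @ B.T).T + xc

rng = np.random.default_rng(8)
X = rng.normal(size=(50, 3))                # reference structure
th = 0.7
Rz = np.array([[np.cos(th), -np.sin(th), 0],
               [np.sin(th), np.cos(th), 0],
               [0, 0, 1]])
Y = X @ Rz.T + [1.0, -2.0, 0.5] + rng.normal(0, 0.01, (50, 3))
Yfit = weighted_superpose(X, Y, np.ones(50))
print(f"RMSD after superposition: {np.sqrt(((Yfit - X) ** 2).sum(1).mean()):.3f}")
```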
Maximum Likelihood Analysis in the PEN Experiment
NASA Astrophysics Data System (ADS)
Lehman, Martin
2013-10-01
The experimental determination of the π+ → e+ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world-average experimental precision of 3.3×10^-3 to 5×10^-4 using a stopped-beam approach. During runs in 2008-10, PEN acquired over 2×10^7 πe2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure-CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for five processes (π+ → e+ν, π+ → μ+ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc.). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
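The structure of such a fit is easy to sketch: each event contributes a mixture of per-process PDFs evaluated at its observables, and the process fractions are found by maximizing the summed log-likelihood. The toy example below uses one Gaussian "signal" and one uniform "background" process with invented parameters, not PEN's five processes or its Monte Carlo PDFs.

```python
# Per-event maximum likelihood fit of a signal fraction: a conceptual sketch
# with synthetic energies, not PEN data or PEN PDFs.
import numpy as np
from scipy import stats
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(9)
e_sig = rng.normal(69.8, 2.0, size=200)        # "pi -> e nu" peak (MeV), synthetic
e_bkg = rng.uniform(10, 53, size=15800)        # "pi -> mu -> e" continuum, synthetic
energies = np.concatenate([e_sig, e_bkg])

pdf_sig = lambda e: stats.norm.pdf(e, 69.8, 2.0)
pdf_bkg = lambda e: stats.uniform.pdf(e, loc=10, scale=43)

def nll(f_sig):
    # each event's likelihood is a mixture of the two process PDFs
    return -np.sum(np.log(f_sig * pdf_sig(energies) + (1 - f_sig) * pdf_bkg(energies)))

f_hat = minimize_scalar(nll, bounds=(1e-4, 0.5), method="bounded").x
print(f"fitted signal fraction: {f_hat:.4f} (true {200 / 16000:.4f})")
```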
The Extended-Image Tracking Technique Based on the Maximum Likelihood Estimation
NASA Technical Reports Server (NTRS)
Tsou, Haiping; Yan, Tsun-Yee
2000-01-01
This paper describes an extended-image tracking technique based on maximum likelihood estimation. The target image is assumed to have a known profile covering more than one element of a focal plane detector array. It is assumed that the relative position between the imager and the target changes with time and that each pixel of the received target image is disturbed by independent additive white Gaussian noise. When a rotation-invariant movement between imager and target is considered, the maximum-likelihood-based image tracking technique described in this paper is a closed-loop structure capable of iteratively updating the movement estimate by calculating loop feedback signals from a weighted correlation between the currently received target image and the previously estimated reference image in the transform domain. The movement estimate is then used to direct the imager to closely follow the moving target. This image tracking technique has many potential applications, including free-space optical communications and astronomy, where accurate and stabilized optical pointing is essential.
Reyes-Valdés, M H; Stelly, D M
1995-01-01
Frequencies of meiotic configurations in cytogenetic stocks are dependent on chiasma frequencies in segments defined by centromeres, breakpoints, and telomeres. The expectation maximization algorithm is proposed as a general method to perform maximum likelihood estimations of the chiasma frequencies in the intervals between such locations. The estimates can be translated via mapping functions into genetic maps of cytogenetic landmarks. One set of observational data was analyzed to exemplify application of these methods, results of which were largely concordant with other comparable data. The method was also tested by Monte Carlo simulation of frequencies of meiotic configurations from a monotelodisomic translocation heterozygote, assuming six different sample sizes. The estimate averages were always close to the values given initially to the parameters. The maximum likelihood estimation procedures can be extended readily to other kinds of cytogenetic stocks and allow the pooling of diverse cytogenetic data to collectively estimate lengths of segments, arms, and chromosomes. PMID:7568226
Comparisons of neural networks to standard techniques for image classification and correlation
NASA Technical Reports Server (NTRS)
Paola, Justin D.; Schowengerdt, Robert A.
1994-01-01
Neural network techniques for multispectral image classification and spatial pattern detection are compared to the standard techniques of maximum-likelihood classification and spatial correlation. The neural network produced a more accurate classification than maximum-likelihood of a Landsat scene of Tucson, Arizona. Some of the errors in the maximum-likelihood classification are illustrated using decision region and class probability density plots. As expected, the main drawback to the neural network method is the long time required for the training stage. The network was trained using several different hidden layer sizes to optimize both the classification accuracy and training speed, and it was found that one node per class was optimal. The performance improved when 3x3 local windows of image data were entered into the net. This modification introduces texture into the classification without explicit calculation of a texture measure. Larger windows were successfully used for the detection of spatial features in Landsat and Magellan synthetic aperture radar imagery.
Schminkey, Donna L; von Oertzen, Timo; Bullock, Linda
2016-08-01
With increasing access to population-based data and electronic health records for secondary analysis, missing data are common. In the social and behavioral sciences, missing data frequently are handled with multiple imputation methods or full information maximum likelihood (FIML) techniques, but healthcare researchers have not embraced these methodologies to the same extent and more often use either traditional imputation techniques or complete case analysis, which can compromise power and introduce unintended bias. This article is a review of options for handling missing data, concluding with a case study demonstrating the utility of multilevel structural equation modeling using full information maximum likelihood (MSEM with FIML) to handle large amounts of missing data. MSEM with FIML is a parsimonious and hypothesis-driven strategy to cope with large amounts of missing data without compromising power or introducing bias. This technique is relevant for nurse researchers faced with ever-increasing amounts of electronic data and decreasing research budgets. © 2016 Wiley Periodicals, Inc.
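The casewise-likelihood idea behind FIML can be sketched numerically for a bivariate normal model: every case contributes the likelihood of only its observed variables, so nothing is dropped or imputed. This is a bare-bones illustration under an assumed bivariate normal model, not MSEM; the structural and multilevel layers are omitted.

```python
# FIML sketch: casewise log-likelihood over observed entries only, with the
# covariance parameterized through a Cholesky factor to stay positive definite.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

rng = np.random.default_rng(12)
data = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=500)
data[rng.uniform(size=500) < 0.3, 1] = np.nan   # 30% missing on variable 2

def neg_loglik(p):
    mu = p[:2]
    L = np.array([[np.exp(p[2]), 0.0], [p[3], np.exp(p[4])]])
    cov = L @ L.T
    both = ~np.isnan(data[:, 1])                # cases with both variables
    ll = multivariate_normal.logpdf(data[both], mu, cov).sum()
    ll += multivariate_normal.logpdf(data[~both, 0], mu[0], cov[0, 0]).sum()
    return -ll

p = minimize(neg_loglik, np.zeros(5), method="Nelder-Mead",
             options={"maxiter": 5000}).x
L = np.array([[np.exp(p[2]), 0.0], [p[3], np.exp(p[4])]])
cov = L @ L.T
print("FIML means:", np.round(p[:2], 2))
print("FIML correlation:", round(cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1]), 2))
```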
Methods for estimating drought streamflow probabilities for Virginia streams
Austin, Samuel H.
2014-01-01
Maximum likelihood logistic regression model equations used to estimate drought flow probabilities for Virginia streams are presented for 259 hydrologic basins in Virginia. Winter streamflows were used to estimate the likelihood of streamflows during the subsequent drought-prone summer months. The maximum likelihood logistic regression models identify probable streamflows from 5 to 8 months in advance. More than 5 million daily streamflow values collected over the period of record (January 1, 1900 through May 16, 2012) were compiled and analyzed over a minimum 10-year (maximum 112-year) period of record. The analysis yielded 46,704 equations with statistically significant fit statistics and parameter ranges, published in two tables in this report. These model equations produce summer month (July, August, and September) drought flow threshold probabilities as a function of streamflows during the previous winter months (November, December, January, and February). Example calculations are provided, demonstrating how to use the equations to estimate probable streamflows as much as 8 months in advance.
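A single equation of this family can be sketched as an ordinary maximum-likelihood logistic regression of a summer drought indicator on winter flow, as below; the flows are synthetic, not Virginia gage data, and the real report fits separate equations per basin, month, and threshold.

```python
# Sketch (synthetic data, hypothetical threshold) of a maximum likelihood
# logistic regression predicting a summer drought-flow indicator from mean
# winter flow; statsmodels performs the MLE.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)
winter_flow = rng.lognormal(3.0, 0.5, size=80)             # mean Nov-Feb flow
p_drought = 1 / (1 + np.exp(-(2.0 - 0.08 * winter_flow)))  # assumed true model
summer_drought = (rng.uniform(size=80) < p_drought).astype(float)

X = sm.add_constant(winter_flow)
model = sm.Logit(summer_drought, X).fit(disp=0)
print(model.params)                               # intercept and slope (MLEs)
print(model.predict(np.array([[1.0, 15.0]])))     # P(drought | winter flow = 15)
```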
DECONV-TOOL: An IDL based deconvolution software package
NASA Technical Reports Server (NTRS)
Varosi, F.; Landsman, W. B.
1992-01-01
There are a variety of algorithms for deconvolution of blurred images, each having its own criteria or statistic to be optimized in order to estimate the original image data. Using the Interactive Data Language (IDL), we have implemented the Maximum Likelihood, Maximum Entropy, Maximum Residual Likelihood, and sigma-CLEAN algorithms in a unified environment called DeConv_Tool. Most of the algorithms have as their goal the optimization of statistics such as standard deviation and mean of residuals. Shannon entropy, log-likelihood, and chi-square of the residual auto-correlation are computed by DeConv_Tool for the purpose of determining the performance and convergence of any particular method and comparisons between methods. DeConv_Tool allows interactive monitoring of the statistics and the deconvolved image during computation. The final results, and optionally, the intermediate results, are stored in a structure convenient for comparison between methods and review of the deconvolution computation. The routines comprising DeConv_Tool are available via anonymous FTP through the IDL Astronomy User's Library.
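The abstract does not give DeConv_Tool's update rules; for the Poisson maximum-likelihood criterion, the classic Richardson-Lucy iteration is the standard scheme, sketched here in Python as an assumed stand-in for the IDL routine.

import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=50):
    """Poisson maximum-likelihood deconvolution via Richardson-Lucy updates.
    Assumes psf is normalized (sums to 1)."""
    image = image.astype(float)
    estimate = np.full_like(image, image.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode='same')
        ratio = image / np.maximum(blurred, 1e-12)   # data / model
        estimate *= fftconvolve(ratio, psf_mirror, mode='same')
    return estimate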
F-8C adaptive flight control laws
NASA Technical Reports Server (NTRS)
Hartmann, G. L.; Harvey, C. A.; Stein, G.; Carlson, D. N.; Hendrick, R. C.
1977-01-01
Three candidate digital adaptive control laws were designed for NASA's F-8C digital fly-by-wire aircraft. Each design used the same control laws but adjusted the gains with a different adaptive algorithm. The three adaptive concepts were: high-gain limit cycle, Liapunov-stable model tracking, and maximum likelihood estimation. Sensors were restricted to conventional inertial instruments (rate gyros and accelerometers) without use of air-data measurements. Performance, growth potential, and computer requirements were used as criteria for selecting the most promising of these candidates for further refinement. The maximum likelihood concept was selected primarily because it offers the greatest potential for identifying several aircraft parameters and hence for improved control performance in future aircraft applications. In terms of identification and gain adjustment accuracy, the MLE design is slightly superior to the other two, but this has no significant effect on the control performance achievable with the F-8C aircraft. The maximum likelihood design is recommended for flight test, and several refinements to that design are proposed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Washeleski, Robert L.; Meyer, Edmond J. IV; King, Lyon B.
2013-10-15
Laser Thomson scattering (LTS) is an established plasma diagnostic technique that has seen recent application to low density plasmas. It is difficult to perform LTS measurements when the scattered signal is weak as a result of low electron number density, poor optical access to the plasma, or both. Photon counting methods are often implemented in order to perform measurements in these low signal conditions. However, photon counting measurements performed with photo-multiplier tubes are time consuming and multi-photon arrivals are incorrectly recorded. In order to overcome these shortcomings a new data analysis method based on maximum likelihood estimation was developed. The key feature of this new data processing method is the inclusion of non-arrival events in determining the scattered Thomson signal. Maximum likelihood estimation and its application to Thomson scattering at low signal levels is presented and application of the new processing method to LTS measurements performed in the plume of a 2-kW Hall-effect thruster is discussed.
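The core of the method, treating non-arrivals as data, can be illustrated with a toy estimator: if a photon-counting channel only distinguishes "no photon" from "one or more photons" per laser shot, the Poisson rate is recovered from the non-arrival fraction. The actual analysis fits the full Thomson spectrum; this Python sketch only shows the bias-correction idea.

import numpy as np

def mle_rate(n_shots, n_arrivals):
    """ML estimate of the mean photon number per shot when the detector only
    reports arrival vs. non-arrival: P(no photon) = exp(-lam) under Poisson
    statistics, so lam_hat = -ln(1 - k/n). The naive k/n underestimates lam
    because multi-photon arrivals are recorded as single events."""
    k = np.asarray(n_arrivals, dtype=float)
    n = np.asarray(n_shots, dtype=float)
    return -np.log(1.0 - k / n)        # requires k < n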
The cosmic matrix in the 50th anniversary of relativistic astrophysics
NASA Astrophysics Data System (ADS)
Ruffini, R.; Aimuratov, Y.; Becerra, L.; Bianco, C. L.; Karlica, M.; Kovacevic, M.; Melon Fuksman, J. D.; Moradi, R.; Muccino, M.; Penacchioni, A. V.; Pisani, G. B.; Primorac, D.; Rueda, J. A.; Shakeri, S.; Vereshchagin, G. V.; Wang, Y.; Xue, S.-S.
Our concept of induced gravitational collapse (the IGC paradigm), in which a supernova occurs in the presence of a companion neutron star, has unlocked the understanding of seven different families of gamma-ray bursts (GRBs), indicating a path for the formation of black holes in the universe. An authentic laboratory of relativistic astrophysics has been unveiled in which new paradigms have been introduced in order to advance knowledge of the most energetic, distant and complex systems in our universe. A novel cosmic matrix paradigm has been introduced at a relativistic cosmic level, which parallels the concept of an S-matrix introduced by Feynman, Wheeler and Heisenberg in the quantum world of microphysics. Here the “in” states are represented by a neutron star and a supernova, while the “out” states, generated within less than a second, are a new neutron star and a black hole. This novel field of research needs very powerful technological observations in all wavelengths ranging from radio through optical, X-ray and gamma ray radiation all the way up to ultra-high-energy cosmic rays.
Crotty, Patrick; García-Bellido, Juan; Lesgourgues, Julien; Riazuelo, Alain
2003-10-24
We obtain very stringent bounds on the possible cold dark matter, baryon, and neutrino isocurvature contributions to the primordial fluctuations in the Universe, using recent cosmic microwave background and large scale structure data. Neglecting the possible effects of spatial curvature, tensor perturbations, and reionization, we perform a Bayesian likelihood analysis with nine free parameters, and find that the amplitude of the isocurvature component cannot be larger than about 31% for the cold dark matter mode, 91% for the baryon mode, 76% for the neutrino density mode, and 60% for the neutrino velocity mode, at 2sigma, for uncorrelated models. For correlated adiabatic and isocurvature components, the fraction could be slightly larger. However, the cross-correlation coefficient is strongly constrained, and maximally correlated/anticorrelated models are disfavored. This puts strong bounds on the curvaton model.
NASA Technical Reports Server (NTRS)
Lei, Ning; Chiang, Kwo-Fu; Oudrari, Hassan; Xiong, Xiaoxiong
2011-01-01
Optical sensors aboard Earth orbiting satellites such as the next generation Visible/Infrared Imager/Radiometer Suite (VIIRS) assume that the sensor's radiometric response in the Reflective Solar Bands (RSB) is described by a quadratic polynomial, in relating the aperture spectral radiance to the sensor Digital Number (DN) readout. For VIIRS Flight Unit 1, the coefficients are to be determined before launch by an attenuation method, although the linear coefficient will be further determined on-orbit through observing the Solar Diffuser. In determining the quadratic polynomial coefficients by the attenuation method, a Maximum Likelihood approach is applied in carrying out the least-squares procedure. Crucial to the Maximum Likelihood least-squares procedure is the computation of the weight. The weight not only has a contribution from the noise of the sensor's digital count, with an important contribution from digitization error, but also is affected heavily by the mathematical expression used to predict the value of the dependent variable, because both the independent and the dependent variables contain random noise. In addition, model errors have a major impact on the uncertainties of the coefficients. The Maximum Likelihood approach demonstrates the inadequacy of the attenuation method model with a quadratic polynomial for the retrieved spectral radiance. We show that using the inadequate model dramatically increases the uncertainties of the coefficients. We compute the coefficient values and their uncertainties, considering both measurement and model errors.
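A Python sketch of the weighting idea described above: because the DN readout is itself noisy, its variance is folded through the local slope of the fitted quadratic, so the weights must be re-derived as the coefficients update (an effective-variance scheme; function and variable names are illustrative, not the VIIRS pipeline).

import numpy as np

def fit_quadratic_effective_variance(dn, rad, var_dn, var_rad, n_iter=5):
    """Weighted LS fit rad ~ c0 + c1*dn + c2*dn**2 where the DN noise is
    propagated through the model slope (errors in both variables)."""
    c = np.polyfit(dn, rad, 2)[::-1]          # start from an unweighted fit
    A = np.vstack([np.ones_like(dn), dn, dn**2]).T
    for _ in range(n_iter):
        slope = c[1] + 2 * c[2] * dn          # d(rad)/d(DN) of current model
        w = 1.0 / (var_rad + slope**2 * var_dn)
        sw = np.sqrt(w)
        c, *_ = np.linalg.lstsq(sw[:, None] * A, sw * rad, rcond=None)
    return c                                   # (c0, c1, c2)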
Cosmic ray modulation and radiation dose of aircrews during the solar cycle 24/25
NASA Astrophysics Data System (ADS)
Miyake, Shoko; Kataoka, Ryuho; Sato, Tatsuhiko
2017-04-01
Weak solar activity and high cosmic ray flux during the coming solar cycle are qualitatively anticipated by the recent observations that show a decline in solar activity levels. We predict the cosmic ray modulation and the resultant radiation exposure at flight altitude by using a time-dependent, three-dimensional model of the cosmic ray modulation. Our galactic cosmic ray (GCR) model is based on the variations of the solar wind speed, the strength of the heliospheric magnetic field, and the tilt angle of the heliospheric current sheet. We reproduce the 22-year variation of the cosmic ray modulation from 1980 to 2015, taking into account the gradient-curvature drift motion of GCRs. The energy spectra of GCR protons obtained by our model show good agreement with the observations by the Balloon-borne Experiment with a Superconducting magnetic rigidity Spectrometer (BESS) and the Payload for Antimatter Matter Exploration and Light-nuclei Astrophysics (PAMELA), except for a discrepancy at the solar maximum. The five-year annual radiation dose around the solar minimum of solar cycle 24/25 will be approximately 19% higher than that in the last cycle. This is caused by the charge-sign dependence of the cosmic ray modulation, such as the flattop profiles in a positive polarity.
Reconstructing matter profiles of spherically compensated cosmic regions in ΛCDM cosmology
NASA Astrophysics Data System (ADS)
de Fromont, Paul; Alimi, Jean-Michel
2018-02-01
The absence of a physically motivated model for large-scale profiles of cosmic voids limits our ability to extract valuable cosmological information from their study. In this paper, we address this problem by introducing the spherically compensated cosmic regions, named CoSpheres. Such cosmic regions are identified around local extrema in the density field and admit a unique compensation radius R1 where the internal spherical mass is exactly compensated. Their origin is studied by extending the standard peak model and implementing the compensation condition. Since the compensation radius evolves as the Universe itself, R1(t) ∝ a(t), CoSpheres behave as bubble Universes with fixed comoving volume. Using the spherical collapse model, we reconstruct their profiles with a very high accuracy until z = 0 in N-body simulations. CoSpheres are symmetrically defined and reconstructed for both central maximum (seeding haloes and galaxies) and minimum (identified with cosmic voids). We show that the full non-linear dynamics can be solved analytically around this particular compensation radius, providing useful predictions for cosmology. This formalism highlights original correlations between local extremum and their large-scale cosmic environment. The statistical properties of these spherically compensated cosmic regions and the possibilities to constrain efficiently both cosmology and gravity will be investigated in companion papers.
Qi, Delin; Chao, Yan; Guo, Songchang; Zhao, Lanying; Li, Taiping; Wei, Fulei; Zhao, Xinquan
2012-01-01
Schizothoracine fishes distributed in the water system of the Qinghai-Tibetan plateau (QTP) and adjacent areas are characterized by being highly adaptive to the cold and hypoxic environment of the plateau, as well as by a high degree of diversity in trophic morphology due to resource polymorphisms. Although convergent and parallel evolution are prevalent in the organisms of the QTP, it remains unknown whether similar evolutionary patterns have occurred in the schizothoracine fishes. Here, we constructed for the first time a tentative molecular phylogeny of the schizothoracine fishes based on the complete sequences of the cytochrome b gene. We employed this molecular phylogenetic framework to examine the evolution of trophic morphologies. We used Pagel's maximum likelihood method to estimate the evolutionary associations of trophic morphologies and food resource use. Our results showed that the molecular and published morphological phylogenies of Schizothoracinae are partially incongruent with respect to some intergeneric relationships. The phylogenetic results revealed that four character states of five trophic morphologies and of food resource use evolved at least twice during the diversification of the subfamily. State transitions are the result of evolutionary patterns including either convergence or parallelism or both. Furthermore, our analyses indicate that some characters of trophic morphologies in the Schizothoracinae have undergone correlated evolution, which are somewhat correlated with different food resource uses. Collectively, our results reveal new examples of convergent and parallel evolution in the organisms of the QTP. The adaptation to different trophic niches through the modification of trophic morphologies and feeding behaviour as found in the schizothoracine fishes may account for the formation and maintenance of the high degree of diversity and radiations in fish communities endemic to QTP. PMID:22470515
NASA Astrophysics Data System (ADS)
Planck Collaboration; Aghanim, N.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Ballardini, M.; Banday, A. J.; Barreiro, R. B.; Bartolo, N.; Basak, S.; Battye, R.; Benabed, K.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Carron, J.; Challinor, A.; Chiang, H. C.; Colombo, L. P. L.; Combet, C.; Comis, B.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.-M.; Di Valentino, E.; Dickinson, C.; Diego, J. M.; Doré, O.; Douspis, M.; Ducout, A.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Falgarone, E.; Fantaye, Y.; Finelli, F.; Forastieri, F.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Frolov, A.; Galeotta, S.; Galli, S.; Ganga, K.; Génova-Santos, R. T.; Gerbino, M.; Ghosh, T.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Helou, G.; Henrot-Versillé, S.; Herranz, D.; Hivon, E.; Huang, Z.; Ilić, S.; Jaffe, A. H.; Jones, W. C.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Knox, L.; Krachmalnicoff, N.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lamarre, J.-M.; Langer, M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Le Jeune, M.; Leahy, J. P.; Levrier, F.; Liguori, M.; Lilje, P. B.; López-Caniego, M.; Ma, Y.-Z.; Macías-Pérez, J. F.; Maggio, G.; Mangilli, A.; Maris, M.; Martin, P. G.; Martínez-González, E.; Matarrese, S.; Mauri, N.; McEwen, J. D.; Meinhold, P. R.; Melchiorri, A.; Mennella, A.; Migliaccio, M.; Miville-Deschênes, M.-A.; Molinari, D.; Moneti, A.; Montier, L.; Morgante, G.; Moss, A.; Mottet, S.; Naselsky, P.; Natoli, P.; Oxborrow, C. A.; Pagano, L.; Paoletti, D.; Partridge, B.; Patanchon, G.; Patrizii, L.; Perdereau, O.; Perotto, L.; Pettorino, V.; Piacentini, F.; Plaszczynski, S.; Polastri, L.; Polenta, G.; Puget, J.-L.; Rachen, J. P.; Racine, B.; Reinecke, M.; Remazeilles, M.; Renzi, A.; Rocha, G.; Rossetti, M.; Roudier, G.; Rubiño-Martín, J. A.; Ruiz-Granados, B.; Salvati, L.; Sandri, M.; Savelainen, M.; Scott, D.; Sirri, G.; Sunyaev, R.; Suur-Uski, A.-S.; Tauber, J. A.; Tenti, M.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Trombetti, T.; Valiviita, J.; Van Tent, F.; Vibert, L.; Vielva, P.; Villa, F.; Vittorio, N.; Wandelt, B. D.; Watson, R.; Wehus, I. K.; White, M.; Zacchei, A.; Zonca, A.
2016-12-01
This paper describes the identification, modelling, and removal of previously unexplained systematic effects in the polarization data of the Planck High Frequency Instrument (HFI) on large angular scales, including new mapmaking and calibration procedures, new and more complete end-to-end simulations, and a set of robust internal consistency checks on the resulting maps. These maps, at 100, 143, 217, and 353 GHz, are early versions of those that will be released in final form later in 2016. The improvements allow us to determine the cosmic reionization optical depth τ using, for the first time, the low-multipole EE data from HFI, reducing significantly the central value and uncertainty, and hence the upper limit. Two different likelihood procedures are used to constrain τ from two estimators of the CMB E- and B-mode angular power spectra at 100 and 143 GHz, after debiasing the spectra from a small remaining systematic contamination. These all give fully consistent results. A further consistency test is performed using cross-correlations derived from the Low Frequency Instrument maps of the Planck 2015 data release and the new HFI data. For this purpose, end-to-end analyses of systematic effects from the two instruments are used to demonstrate the near independence of their dominant systematic error residuals. The tightest result comes from the HFI-based τ posterior distribution using the maximum likelihood power spectrum estimator from EE data only, giving a value 0.055 ± 0.009. In a companion paper these results are discussed in the context of the best-fit Planck ΛCDM cosmological model and recent models of reionization.
Inferring Phylogenetic Networks Using PhyloNet.
Wen, Dingqiao; Yu, Yun; Zhu, Jiafan; Nakhleh, Luay
2018-07-01
PhyloNet was released in 2008 as a software package for representing and analyzing phylogenetic networks. At the time of its release, the main functionalities in PhyloNet consisted of measures for comparing network topologies and a single heuristic for reconciling gene trees with a species tree. Since then, PhyloNet has grown significantly. The software package now includes a wide array of methods for inferring phylogenetic networks from data sets of unlinked loci while accounting for both reticulation (e.g., hybridization) and incomplete lineage sorting. In particular, PhyloNet now allows for maximum parsimony, maximum likelihood, and Bayesian inference of phylogenetic networks from gene tree estimates. Furthermore, Bayesian inference directly from sequence data (sequence alignments or biallelic markers) is implemented. Maximum parsimony is based on an extension of the "minimizing deep coalescences" criterion to phylogenetic networks, whereas maximum likelihood and Bayesian inference are based on the multispecies network coalescent. All methods allow for multiple individuals per species. As computing the likelihood of a phylogenetic network is computationally hard, PhyloNet allows for evaluation and inference of networks using a pseudolikelihood measure. PhyloNet summarizes the results of the various analyses and generates phylogenetic networks in the extended Newick format that is readily viewable by existing visualization software.
NASA Technical Reports Server (NTRS)
Pyle, K. R.; Simpson, J. A.
1985-01-01
Near solar maximum, a series of large radial solar wind shocks in June and July 1982 provided a unique opportunity to study the solar modulation of galactic cosmic rays with an array of spacecraft widely separated both in heliocentric radius and longitude. By eliminating hysteresis effects it is possible to begin to separate radial and azimuthal effects in the outer heliosphere. On the large scale, changes in modulation (both the increasing and recovery phases) propagate outward at close to the solar wind velocity, except for the near-term effects of solar wind shocks, which may propagate at a significantly higher velocity. In the outer heliosphere, azimuthal effects are small in comparison with radial effects for large-scale modulation at solar maximum.
Comparison of codes assessing galactic cosmic radiation exposure of aircraft crew.
Bottollier-Depois, J F; Beck, P; Bennett, B; Bennett, L; Bütikofer, R; Clairand, I; Desorgher, L; Dyer, C; Felsberger, E; Flückiger, E; Hands, A; Kindl, P; Latocha, M; Lewis, B; Leuthold, G; Maczka, T; Mares, V; McCall, M J; O'Brien, K; Rollet, S; Rühm, W; Wissmann, F
2009-10-01
The assessment of the exposure to cosmic radiation onboard aircraft is one of the preoccupations of bodies responsible for radiation protection. Cosmic particle flux is significantly higher onboard aircraft than at ground level and its intensity depends on the solar activity. The dose is usually estimated using codes validated against experimental data. In this paper, a comparison of various codes, some of which are used routinely, is presented to assess the dose received by aircraft crew due to galactic cosmic radiation. Results are provided for periods close to solar maximum and minimum and for selected flights covering major commercial routes in the world. The overall agreement between the codes, particularly for those routinely used for aircraft crew dosimetry, was better than ±20% from the median in all but two cases. The agreement within the codes is considered to be fully satisfactory for radiation protection purposes.
Regression estimators for generic health-related quality of life and quality-adjusted life years.
Basu, Anirban; Manca, Andrea
2012-01-01
To develop regression models for outcomes with truncated supports, such as health-related quality of life (HRQoL) data, and to account for features typical of such data, such as a skewed distribution, spikes at 1 or 0, and heteroskedasticity. Regression estimators based on features of the Beta distribution. First, both a single equation and a 2-part model are presented, along with estimation algorithms based on maximum-likelihood, quasi-likelihood, and Bayesian Markov-chain Monte Carlo methods. A novel Bayesian quasi-likelihood estimator is proposed. Second, a simulation exercise is presented to assess the performance of the proposed estimators against ordinary least squares (OLS) regression for a variety of HRQoL distributions that are encountered in practice. Finally, the performance of the proposed estimators is assessed by using them to quantify the treatment effect on QALYs in the EVALUATE hysterectomy trial. Overall model fit is studied using several goodness-of-fit tests such as Pearson's correlation test, link and reset tests, and a modified Hosmer-Lemeshow test. The simulation results indicate that the proposed methods are more robust in estimating covariate effects than OLS, especially when the effects are large or the HRQoL distribution has a large spike at 1. Quasi-likelihood techniques are more robust than maximum likelihood estimators. When applied to the EVALUATE trial, all but the maximum likelihood estimators produce unbiased estimates of the treatment effect. One- and two-part Beta regression models provide flexible approaches to regress outcomes with truncated supports, such as HRQoL, on covariates, after accounting for many idiosyncratic features of the outcomes distribution. This work will provide applied researchers with a practical set of tools to model outcomes in cost-effectiveness analysis.
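A minimal Python sketch of the single-equation Beta regression component (logit mean link, log precision), assuming responses strictly inside (0, 1); the paper's 2-part extension adds a separate model for the spike at 1.

import numpy as np
from scipy.special import gammaln, expit
from scipy.optimize import minimize

def beta_reg_nll(params, X, y):
    """Negative log-likelihood of a Beta regression with a logit mean link.
    params = (regression coefficients, log precision)."""
    beta, log_phi = params[:-1], params[-1]
    mu = expit(X @ beta)               # mean in (0, 1)
    phi = np.exp(log_phi)              # precision > 0
    a, b = mu * phi, (1 - mu) * phi    # Beta(a, b) with a + b = phi
    return -np.sum(gammaln(phi) - gammaln(a) - gammaln(b)
                   + (a - 1) * np.log(y) + (b - 1) * np.log(1 - y))

# fit: minimize(beta_reg_nll, x0, args=(X, y)) with y strictly in (0, 1)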
Dong, Yi; Mihalas, Stefan; Russell, Alexander; Etienne-Cummings, Ralph; Niebur, Ernst
2012-01-01
When a neuronal spike train is observed, what can we say about the properties of the neuron that generated it? A natural way to answer this question is to make an assumption about the type of neuron, select an appropriate model for this type, and then to choose the model parameters as those that are most likely to generate the observed spike train. This is the maximum likelihood method. If the neuron obeys simple integrate and fire dynamics, Paninski, Pillow, and Simoncelli (2004) showed that its negative log-likelihood function is convex and that its unique global minimum can thus be found by gradient descent techniques. The global minimum property requires independence of spike time intervals. Lack of history dependence is, however, an important constraint that is not fulfilled in many biological neurons which are known to generate a rich repertoire of spiking behaviors that are incompatible with history independence. Therefore, we expanded the integrate and fire model by including one additional variable, a variable threshold (Mihalas & Niebur, 2009) allowing for history-dependent firing patterns. This neuronal model produces a large number of spiking behaviors while still being linear. Linearity is important as it maintains the distribution of the random variables and still allows for maximum likelihood methods to be used. In this study we show that, although convexity of the negative log-likelihood is not guaranteed for this model, the minimum of the negative log-likelihood function yields a good estimate for the model parameters, in particular if the noise level is treated as a free parameter. Furthermore, we show that a nonlinear function minimization method (r-algorithm with space dilation) frequently reaches the global minimum. PMID:21851282
NASA Technical Reports Server (NTRS)
Puget, J. L.; Stecker, F. W.
1974-01-01
Data from SAS-2 on the galactic gamma ray line flux as a function of longitude is examined. It is shown that the gamma ray emissivity varies with galactocentric distance and is about an order of magnitude higher than the local value in a toroidal region between 4 and 5 kpc from the galactic center. This enhancement is accounted for in part by first-order Fermi acceleration, compression, and trapping of cosmic rays consistent with present ideas of galactic dynamics and galactic structure theory. Calculations indicate that cosmic rays in the 4 to 5 kpc region are trapped and accelerated over a mean time of the order of a few million years or about 2 to 4 times the assumed trapping time in the solar region of the galaxy on the assumption that only an increased cosmic ray flux is responsible for the observed emission. Cosmic ray nucleons, cosmic ray electrons, and ionized hydrogen gas were found to have a strikingly similar distribution in the galaxy according to both the observational data and the theoretical model discussed.
Accurate Structural Correlations from Maximum Likelihood Superpositions
Theobald, Douglas L; Wuttke, Deborah S
2008-01-01
The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method (“PCA plots”) for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology. PMID:18282091
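A simplified Python sketch of the pipeline's final step: PCA of an inter-atomic correlation matrix computed from a superposed ensemble. Note the paper estimates this matrix by maximum likelihood jointly with the superposition; the plain sample estimate below is only a stand-in for that step.

import numpy as np

def structural_pca(ensemble):
    """ensemble: (n_models, n_atoms, 3) superposed coordinates.
    Returns eigenvalues/eigenvectors of the positional correlation matrix,
    dominant modes first."""
    X = ensemble.reshape(ensemble.shape[0], -1)    # flatten to (models, 3N)
    X = X - X.mean(axis=0)
    cov = X.T @ X / (X.shape[0] - 1)
    d = np.sqrt(np.diag(cov)) + 1e-12              # guard rigid coordinates
    corr = cov / np.outer(d, d)                    # correlation matrix
    evals, evecs = np.linalg.eigh(corr)
    return evals[::-1], evecs[:, ::-1]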
Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan
2017-04-06
An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.
Wu, Jiayi; Ma, Yong-Bei; Congdon, Charles; Brett, Bevin; Chen, Shuobing; Xu, Yaofang; Ouyang, Qi; Mao, Youdong
2017-01-01
Structural heterogeneity in single-particle cryo-electron microscopy (cryo-EM) data represents a major challenge for high-resolution structure determination. Unsupervised classification may serve as the first step in the assessment of structural heterogeneity. However, traditional algorithms for unsupervised classification, such as K-means clustering and maximum likelihood optimization, may classify images into wrong classes with decreasing signal-to-noise-ratio (SNR) in the image data, yet demand increased computational costs. Overcoming these limitations requires further development of clustering algorithms for high-performance cryo-EM data processing. Here we introduce an unsupervised single-particle clustering algorithm derived from a statistical manifold learning framework called generative topographic mapping (GTM). We show that unsupervised GTM clustering improves classification accuracy by about 40% in the absence of input references for data with lower SNRs. Applications to several experimental datasets suggest that our algorithm can detect subtle structural differences among classes via a hierarchical clustering strategy. After code optimization over a high-performance computing (HPC) environment, our software implementation was able to generate thousands of reference-free class averages within hours in a massively parallel fashion, which allows a significant improvement on ab initio 3D reconstruction and assists in the computational purification of homogeneous datasets for high-resolution visualization.
Dynamic rupture modeling of thrust faults with parallel surface traces.
NASA Astrophysics Data System (ADS)
Peshette, P.; Lozos, J.; Yule, D.
2017-12-01
Fold and thrust belts (such as those found in the Himalaya or California Transverse Ranges) consist of many neighboring thrust faults in a variety of geometries. Active thrusts within these belts individually contribute to regional seismic hazard, but further investigation is needed regarding the possibility of multi-fault rupture in a single event. Past analyses of historic thrust surface traces suggest that rupture within a single event can jump up to 12 km. There is also observational precedent for long distance triggering between subparallel thrusts (e.g. the 1997 Harnai, Pakistan events, separated by 50 km). However, previous modeling studies find a maximum jumping rupture distance between thrust faults of merely 200 m. Here, we present a new dynamic rupture modeling parameter study that attempts to reconcile these differences and determine which geometrical and stress conditions promote jumping rupture. We use a community verified 3D finite element method to model rupture on pairs of thrust faults with parallel surface traces. We vary stress drop and fault strength to determine which conditions produce jumping rupture at different dip angles and different separations between surface traces. This parameter study may help to understand the likelihood of jumping rupture in real-world thrust systems, and may thereby improve earthquake hazard assessment.
NASA Astrophysics Data System (ADS)
Núñez, M.; Robie, T.; Vlachos, D. G.
2017-10-01
Kinetic Monte Carlo (KMC) simulation provides insights into catalytic reactions unobtainable with either experiments or mean-field microkinetic models. Sensitivity analysis of KMC models assesses the robustness of the predictions to parametric perturbations and identifies rate determining steps in a chemical reaction network. Stiffness in the chemical reaction network, a ubiquitous feature, demands lengthy run times for KMC models and renders efficient sensitivity analysis based on the likelihood ratio method unusable. We address the challenge of efficiently conducting KMC simulations and performing accurate sensitivity analysis in systems with unknown time scales by employing two acceleration techniques: rate constant rescaling and parallel processing. We develop statistical criteria that ensure sufficient sampling of non-equilibrium steady state conditions. Our approach provides the twofold benefit of accelerating the simulation itself and enabling likelihood ratio sensitivity analysis, which provides further speedup relative to finite difference sensitivity analysis. As a result, the likelihood ratio method can be applied to real chemistry. We apply our methodology to the water-gas shift reaction on Pt(111).
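The likelihood ratio estimator itself is compact; here is a toy Python version for a single Poisson channel with rate k, where the score of a trajectory on [0, T] is N/k - T and the sensitivity of E[N] is estimated as a plain average (exact answer: T). The paper applies the same identity to full KMC trajectories of the water-gas shift network; this sketch is not their code.

import numpy as np

rng = np.random.default_rng(0)
k, T, M = 2.0, 1.0, 200_000        # rate, time horizon, trajectories

# For exponential waiting times, a trajectory with N events on [0, T] has
# log-likelihood N*ln(k) - k*T, so its score is d(logL)/dk = N/k - T.
N = rng.poisson(k * T, size=M)
score = N / k - T
dEdk_lr = np.mean(N * score)       # LR estimate of d E[N]/dk (exact value: T)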
Maximum-Likelihood Methods for Processing Signals From Gamma-Ray Detectors
Barrett, Harrison H.; Hunter, William C. J.; Miller, Brian William; Moore, Stephen K.; Chen, Yichun; Furenlid, Lars R.
2009-01-01
In any gamma-ray detector, each event produces electrical signals on one or more circuit elements. From these signals, we may wish to determine the presence of an interaction; whether multiple interactions occurred; the spatial coordinates in two or three dimensions of at least the primary interaction; or the total energy deposited in that interaction. We may also want to compute listmode probabilities for tomographic reconstruction. Maximum-likelihood methods provide a rigorous and in some senses optimal approach to extracting this information, and the associated Fisher information matrix provides a way of quantifying and optimizing the information conveyed by the detector. This paper will review the principles of likelihood methods as applied to gamma-ray detectors and illustrate their power with recent results from the Center for Gamma-ray Imaging. PMID:20107527
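A minimal sketch of maximum-likelihood position estimation for a gamma-ray event: given a calibrated mean detector response function (MDRF), the Poisson log-likelihood of the observed PMT signals is evaluated over a grid of candidate interaction positions (names and shapes are illustrative, not the Center's code).

import numpy as np

def ml_position(counts, mdrf):
    """counts: observed PMT signals, shape (n_pmt,).
    mdrf: mean detector response, shape (n_x, n_y, n_pmt), giving the
    expected signal in each PMT for each candidate grid position.
    Exhaustive grid search of the Poisson log-likelihood (constants dropped)."""
    ll = (counts * np.log(np.maximum(mdrf, 1e-12)) - mdrf).sum(axis=-1)
    ix, iy = np.unravel_index(np.argmax(ll), ll.shape)
    return ix, iy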
Nucleon-Nucleon Total Cross Section
NASA Technical Reports Server (NTRS)
Norbury, John W.
2008-01-01
The total proton-proton and neutron-proton cross sections currently used in the transport code HZETRN show significant disagreement with experiment in the GeV and EeV energy ranges. The GeV range is near the region of maximum cosmic ray intensity. It is therefore important to correct these cross sections, so that predictions of space radiation environments will be accurate. Parameterizations of nucleon-nucleon total cross sections are developed which are accurate over the entire energy range of the cosmic ray spectrum.
An Alternative Explanation of the Varying Boron-to-carbon Ratio in Galactic Cosmic Rays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eichler, David
2017-06-10
It is suggested that the decline with energy of the boron-to-carbon abundance ratio in Galactic cosmic rays is due, in part, to a correlation between the maximum energy attainable by shock acceleration in a given region of the Galactic disk and the grammage traversed before escape. In this case the energy dependence of the escape rate from the Galaxy may be less than previously thought and the spectrum of antiprotons becomes easier to understand.
Shen, Yi; Dai, Wei; Richards, Virginia M
2015-03-01
A MATLAB toolbox for the efficient estimation of the threshold, slope, and lapse rate of the psychometric function is described. The toolbox enables the efficient implementation of the updated maximum-likelihood (UML) procedure. The toolbox uses an object-oriented architecture for organizing the experimental variables and computational algorithms, which provides experimenters with flexibility in experimental design and data management. Descriptions of the UML procedure and the UML Toolbox are provided, followed by toolbox use examples. Finally, guidelines and recommendations of parameter configurations are given.
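A stripped-down Python analogue of the UML idea (the toolbox itself is MATLAB): keep a log-likelihood surface over a grid of candidate thresholds and slopes, update it after every trial, and read off the running ML estimate. Guess and lapse rates are fixed here for brevity, whereas the toolbox can also estimate the lapse rate.

import numpy as np

alphas = np.linspace(-10, 10, 61)          # candidate thresholds
betas = np.logspace(-1, 1, 21)             # candidate slopes
A, B = np.meshgrid(alphas, betas, indexing='ij')
logL = np.zeros_like(A)                    # flat prior over the grid

def update(x, resp, gamma=0.5, lam=0.02):
    """Accumulate the log-likelihood after one trial at stimulus level x."""
    global logL
    p = gamma + (1 - gamma - lam) / (1 + np.exp(-B * (x - A)))
    logL += np.log(p if resp else 1 - p)

def current_estimate():
    i, j = np.unravel_index(np.argmax(logL), logL.shape)
    return alphas[i], betas[j]             # running ML threshold and slope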
A maximum likelihood convolutional decoder model vs experimental data comparison
NASA Technical Reports Server (NTRS)
Chen, R. Y.
1979-01-01
This article describes the comparison of a maximum likelihood convolutional decoder (MCD) prediction model with the actual performance of the MCD at the Madrid Deep Space Station. The MCD prediction model is used to develop a subroutine that has been utilized by the Telemetry Analysis Program (TAP) to compute the MCD bit error rate for a given signal-to-noise ratio. The results indicate that the TAP predictions agree quite well with the experimental measurements. An optimal modulation index can also be found through TAP.
Salje, Ekhard K H; Planes, Antoni; Vives, Eduard
2017-10-01
Crackling noise can be initiated by competing or coexisting mechanisms. These mechanisms can combine to generate an approximate scale invariant distribution that contains two or more contributions. The overall distribution function can be analyzed, to a good approximation, using maximum-likelihood methods and assuming that it follows a power law although with nonuniversal exponents depending on a varying lower cutoff. We propose that such distributions are rather common and originate from a simple superposition of crackling noise distributions or exponential damping.
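The maximum-likelihood fit mentioned above has a closed form for a continuous power law with lower cutoff xmin (the Hill/Clauset estimator); scanning xmin and recording alpha(xmin) exposes exactly the cutoff-dependent, nonuniversal exponents the authors describe. A short Python sketch:

import numpy as np

def powerlaw_mle(x, xmin):
    """Continuous power-law ML exponent for samples x >= xmin:
    alpha_hat = 1 + n / sum(ln(x / xmin)), with standard error
    (alpha_hat - 1) / sqrt(n)."""
    x = np.asarray(x, dtype=float)
    x = x[x >= xmin]
    alpha = 1.0 + x.size / np.log(x / xmin).sum()
    sigma = (alpha - 1.0) / np.sqrt(x.size)
    return alpha, sigma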
Likelihood-based modification of experimental crystal structure electron density maps
Terwilliger, Thomas C. [Santa Fe, NM]
2005-04-16
A maximum-likelihood method improves an electron density map of an experimental crystal structure. A likelihood of a set of structure factors {F_h} is formed for the experimental crystal structure as (1) the likelihood of having obtained an observed set of structure factors {F_h^OBS} if the structure factor set {F_h} was correct, and (2) the likelihood that an electron density map resulting from {F_h} is consistent with selected prior knowledge about the experimental crystal structure. The set of structure factors {F_h} is then adjusted to maximize its likelihood for the experimental crystal structure. An improved electron density map is constructed with the maximized structure factors.
Indications of proton-dominated cosmic-ray composition above 1.6 EeV.
Abbasi, R U; Abu-Zayyad, T; Al-Seady, M; Allen, M; Amman, J F; Anderson, R J; Archbold, G; Belov, K; Belz, J W; Bergman, D R; Blake, S A; Brusova, O A; Burt, G W; Cannon, C; Cao, Z; Deng, W; Fedorova, Y; Finley, C B; Gray, R C; Hanlon, W F; Hoffman, C M; Holzscheiter, M H; Ivanov, D; Hughes, G; Hüntemeyer, P; Jones, B F; Jui, C C H; Kim, K; Kirn, M A; Loh, E C; Liu, J; Lundquist, J P; Maestas, M M; Manago, N; Marek, L J; Martens, K; Matthews, J A J; Matthews, J N; Moore, S A; O'Neill, A; Painter, C A; Perera, L; Reil, K; Riehle, R; Roberts, M; Rodriguez, D; Sasaki, N; Schnetzer, S R; Scott, L M; Sinnis, G; Smith, J D; Sokolsky, P; Song, C; Springer, R W; Stokes, B T; Stratton, S; Thomas, S B; Thomas, J R; Thomson, G B; Tupa, D; Zech, A; Zhang, X
2010-04-23
We report studies of ultrahigh-energy cosmic-ray composition via analysis of depth of air shower maximum (X(max)), for air shower events collected by the High-Resolution Fly's Eye (HiRes) observatory. The HiRes data are consistent with a constant elongation rate d
Particle Acceleration at a Twin CME at 1 AU
NASA Astrophysics Data System (ADS)
Parker, L. N.; Li, G.
2017-12-01
We present results from both the Particle Acceleration and Transport in the Heliosphere (PATH) and Particle Acceleration at Multiple Shocks (PAMS) models for a twin CME scenario. The PATH model follows a CME using a numerical MHD module and solves the Parker transport equation at the shock yielding the accelerated particle spectrum, while PAMS solves the steady-state cosmic ray transport equation at an individual shock analytically to yield the diffusive shock acceleration (DSA) spectrum. We address the injection of an upstream particle distribution into the acceleration process for a two shock system at 1 AU. Only those particles that exceed a theoretically motivated prescribed injection energy, Einj, and up to a maximum injection energy (Emax) appropriate for quasi-parallel and quasi-perpendicular shocks (Zank et al., 2000, 2006; Dosch and Shalchi, 2010) are injected. Results from PAMS are then compared to observations at 1 AU from the Advanced Composition Explorer (ACE) spacecraft. In addition, we test the concept of electron acceleration at low injection energies for a single and multiple shock system using the same method as in Neergaard Parker and Zank, 2012 and Neergaard Parker et al., 2014.
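For orientation, the steady-state test-particle DSA spectrum at a single shock, the building block that multiple-shock models generalize, is the standard power law f(p) ∝ p^(−q) with q = 3r/(r − 1), where r is the shock compression ratio; a strong shock (r = 4) gives q = 4, equivalent to N(E) ∝ E^(−2) for relativistic particles. This is a textbook result quoted here for context, not a result of the paper.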
Cao, Y; Adachi, J; Yano, T; Hasegawa, M
1994-07-01
Graur et al.'s (1991) hypothesis that the guinea pig-like rodents have an evolutionary origin within mammals that is separate from that of other rodents (the rodent-polyphyly hypothesis) was reexamined by the maximum-likelihood method for protein phylogeny, as well as by the maximum-parsimony and neighbor-joining methods. The overall evidence does not support Graur et al.'s hypothesis, which radically contradicts the traditional view of rodent monophyly. This work demonstrates that we must be careful in choosing a proper method for phylogenetic inference and that an argument based on a small data set (with respect to the length of the sequence and especially the number of species) may be unstable.
Simulating cosmic ray physics on a moving mesh
NASA Astrophysics Data System (ADS)
Pfrommer, C.; Pakmor, R.; Schaal, K.; Simpson, C. M.; Springel, V.
2017-03-01
We discuss new methods to integrate the cosmic ray (CR) evolution equations coupled to magnetohydrodynamics on an unstructured moving mesh, as realized in the massively parallel AREPO code for cosmological simulations. We account for diffusive shock acceleration of CRs at resolved shocks and at supernova remnants in the interstellar medium (ISM) and follow the advective CR transport within the magnetized plasma, as well as anisotropic diffusive transport of CRs along the local magnetic field. CR losses are included in terms of Coulomb and hadronic interactions with the thermal plasma. We demonstrate the accuracy of our formalism for CR acceleration at shocks through simulations of plane-parallel shock tubes that are compared to newly derived exact solutions of the Riemann shock-tube problem with CR acceleration. We find that the increased compressibility of the post-shock plasma due to the produced CRs decreases the shock speed. However, CR acceleration at spherically expanding blast waves does not significantly break the self-similarity of the Sedov-Taylor solution; the resulting modifications can be approximated by a suitably adjusted, but constant adiabatic index. In first applications of the new CR formalism to simulations of isolated galaxies and cosmic structure formation, we find that CRs add an important pressure component to the ISM that increases the vertical scaleheight of disc galaxies and thus reduces the star formation rate. Strong external structure formation shocks inject CRs into the gas, but the relative pressure of this component decreases towards halo centres as adiabatic compression favours the thermal over the CR pressure.
Monte Carlo simulations of particle acceleration at oblique shocks
NASA Technical Reports Server (NTRS)
Baring, Matthew G.; Ellison, Donald C.; Jones, Frank C.
1994-01-01
The Fermi shock acceleration mechanism may be responsible for the production of high-energy cosmic rays in a wide variety of environments. Modeling of this phenomenon has largely focused on plane-parallel shocks, and one of the most promising techniques for its study is the Monte Carlo simulation of particle transport in shocked fluid flows. One of the principal problems in shock acceleration theory is the mechanism and efficiency of injection of particles from the thermal gas into the accelerated population. The Monte Carlo technique is ideally suited to addressing the injection problem directly, and previous applications of it to the quasi-parallel Earth bow shock led to very successful modeling of proton and heavy ion spectra, as well as other observed quantities. Recently this technique has been extended to oblique shock geometries, in which the upstream magnetic field makes a significant angle Θ_B1 to the shock normal. Spectral results from test particle Monte Carlo simulations of cosmic-ray acceleration at oblique, nonrelativistic shocks are presented. The results show that low Mach number shocks have injection efficiencies that are relatively insensitive to (though not independent of) the shock obliquity, but that there is a dramatic drop in efficiency for shocks of Mach number 30 or more as the obliquity increases above 15 deg. Cosmic-ray distributions just upstream of the shock reveal prominent bumps at energies below the thermal peak; these disappear far upstream but might be observable features close to astrophysical shocks.
Cosmic-ray exposure history at Taurus-Littrow
NASA Technical Reports Server (NTRS)
Drozd, R. J.; Hohenberg, C. M.; Morgan, C. J.; Podosek, F. A.; Wroge, M. L.
1977-01-01
Recent surface history at Taurus-Littrow is dominated by emplacement of the Central Cluster and Bright Mantle morphological units, both believed to have resulted from arrival of ejecta from a large primary crater, probably Tycho. This paper reports new noble gas data for eight Apollo 17 rocks. Kr-81-Kr cosmic ray exposure ages for these rocks affirm the observation of a pronounced grouping of ages, reinforcing the photogeologic evidence for the site-wide nature of the Central Cluster event. The consequences of post-cratering shielding changes are considered and it is concluded that the differences can reasonably be attributed to these changes, particularly because of the greater likelihood of rollover and impact fragmentation of the relatively smaller rocks from which most age data have been obtained. These considerations also lead to a more refined age estimate of 109 ± 4 m.y. for the Central Cluster, the Bright Mantle, and Tycho.
NASA Astrophysics Data System (ADS)
Perlovsky, Leonid I.; Webb, Virgil H.; Bradley, Scott R.; Hansen, Christopher A.
1998-07-01
An advanced detection and tracking system is being developed for the U.S. Navy's Relocatable Over-the-Horizon Radar (ROTHR) to provide improved tracking performance against small aircraft typically used in drug-smuggling activities. The development is based on the Maximum Likelihood Adaptive Neural System (MLANS), a model-based neural network that combines advantages of neural network and model-based algorithmic approaches. The objective of the MLANS tracker development effort is to address user requirements for increased detection and tracking capability in clutter and improved track position, heading, and speed accuracy. The MLANS tracker is expected to outperform other approaches to detection and tracking for the following reasons. It incorporates adaptive internal models of target return signals, target tracks and maneuvers, and clutter signals, which leads to concurrent clutter suppression, detection, and tracking (track-before-detect). It is not combinatorial and thus does not require any thresholding or peak picking and can track in low signal-to-noise conditions. It incorporates superresolution spectrum estimation techniques exceeding the performance of conventional maximum likelihood and maximum entropy methods. The unique spectrum estimation method is based on the Einsteinian interpretation of the ROTHR received energy spectrum as a probability density of signal frequency. The MLANS neural architecture and learning mechanism are founded on spectrum models and maximization of the "Einsteinian" likelihood, allowing knowledge of the physical behavior of both targets and clutter to be injected into the tracker algorithms. The paper describes the addressed requirements and expected improvements, theoretical foundations, engineering methodology, and results of the development effort to date.
Performance of internal covariance estimators for cosmic shear correlation functions
Friedrich, O.; Seitz, S.; Eifler, T. F.; ...
2015-12-31
Data re-sampling methods such as the delete-one jackknife are a common tool for estimating the covariance of large scale structure probes. In this paper we investigate the concepts of internal covariance estimation in the context of cosmic shear two-point statistics. We demonstrate how to use log-normal simulations of the convergence field and the corresponding shear field to carry out realistic tests of internal covariance estimators and find that most estimators such as jackknife or sub-sample covariance can reach a satisfactory compromise between bias and variance of the estimated covariance. In a forecast for the complete, 5-year DES survey we show that internally estimated covariance matrices can provide a large fraction of the true uncertainties on cosmological parameters in a 2D cosmic shear analysis. The volume inside contours of constant likelihood in the Ω_m-σ_8 plane as measured with internally estimated covariance matrices is on average ≳85% of the volume derived from the true covariance matrix. The uncertainty on the parameter combination Σ_8 ~ σ_8 Ω_m^0.5 derived from internally estimated covariances is ~90% of the true uncertainty.
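The delete-one jackknife studied here is a few lines in Python; given one measured data vector per spatial sub-region, the covariance follows from the spread of the leave-one-out means (a generic sketch, not the DES pipeline):

import numpy as np

def jackknife_covariance(vectors):
    """Delete-one jackknife covariance estimate.
    vectors: (n_regions, p) array, one data vector per jackknife region."""
    n = vectors.shape[0]
    loo = np.array([np.delete(vectors, i, axis=0).mean(axis=0)
                    for i in range(n)])        # leave-one-out means
    d = loo - loo.mean(axis=0)
    return (n - 1.0) / n * (d.T @ d)           # (p, p) covariance matrix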
Electron Attenuation Measurement using Cosmic Ray Muons at the MicroBooNE LArTPC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meddage, Varuna
2017-10-01
The MicroBooNE experiment at Fermilab uses liquid argon time projection chamber (LArTPC) technology to study neutrino interactions in argon. A fundamental requirement for LArTPCs is to achieve and maintain a low level of electronegative contaminants in the liquid to minimize the capture of drifting ionization electrons. The attenuation time for the drifting electrons should be long compared to the maximum drift time, so that the signals from particle tracks that generate ionization electrons with long drift paths can be detected efficiently. In this talk we present the MicroBooNE measurement of electron attenuation using cosmic ray muons. The result yields a minimum electron 1/e lifetime of 18 ms under typical operating conditions, which is long compared to the maximum drift time of 2.3 ms.
Bütikofer, R; Flückiger, E O; Desorgher, L; Moser, M R
2008-03-01
In January 2005, toward the end of solar activity cycle 23, the Sun was very active. Between 15 and 20 January 2005, the solar active region NOAA AR 10720 produced five powerful solar flares. In association with this major solar activity, several pronounced variations in the ground-level cosmic ray intensity were observed. The fifth of these flares (X7.1) produced energetic solar cosmic rays that caused a giant increase in the count rates of the ground-based cosmic ray detectors (neutron monitors). At southern polar neutron monitor stations the increase of the count rate reached several thousand percent. From the recordings of the worldwide network of neutron monitors, we determined the characteristics of the solar particle flux near Earth. In the initial phase of the event, the solar cosmic ray flux near Earth was extremely anisotropic. The energy spectrum of the solar cosmic rays was fairly soft during the main and the decay phase. We also investigated the flux of different secondary particle species in the atmosphere and the radiation dosage at flight altitude. Our analysis shows a maximum increase of the effective dose rate due to solar cosmic rays, by almost three orders of magnitude, in the south polar region around 70 degrees S and 130 degrees E at flight altitude.
The effect of extreme ionization rates during the initial collapse of a molecular cloud core
NASA Astrophysics Data System (ADS)
Wurster, James; Bate, Matthew R.; Price, Daniel J.
2018-05-01
What cosmic ray ionization rate is required such that a non-ideal magnetohydrodynamics (MHD) simulation of a collapsing molecular cloud will follow the same evolutionary path as an ideal MHD simulation or as a purely hydrodynamics simulation? To investigate this question, we perform three-dimensional smoothed particle non-ideal MHD simulations of the gravitational collapse of rotating, one solar mass, magnetized molecular cloud cores, which include Ohmic resistivity, ambipolar diffusion, and the Hall effect. We assume a uniform grain size of a_g = 0.1 μm, and our free parameter is the cosmic ray ionization rate, ζ_cr. We evolve our models, where possible, until they have produced a first hydrostatic core. Models with ζ_cr ≳ 10⁻¹³ s⁻¹ are indistinguishable from ideal MHD models, and the evolution of the model with ζ_cr = 10⁻¹⁴ s⁻¹ matches the evolution of the ideal MHD model within 1 per cent when considering maximum density, magnetic energy, and maximum magnetic field strength as a function of time; these results are independent of a_g. Models with very low ionization rates (ζ_cr ≲ 10⁻²⁴ s⁻¹) are required to approach hydrodynamical collapse, and even lower ionization rates may be required for larger a_g. Thus, it is possible to reproduce ideal MHD and purely hydrodynamical collapses using non-ideal MHD given an appropriate cosmic ray ionization rate. However, realistic cosmic ray ionization rates approach neither limit; thus, non-ideal MHD cannot be neglected in star formation simulations.
Testing students' e-learning via Facebook through Bayesian structural equation modeling.
Salarzadeh Jenatabadi, Hashem; Moghavvemi, Sedigheh; Wan Mohamed Radzi, Che Wan Jasimah Bt; Babashamsi, Parastoo; Arashi, Mohammad
2017-01-01
Learning is an intentional activity, with several factors affecting students' intention to use new learning technology. Researchers have investigated technology acceptance in different contexts by developing various theories/models and testing them by a number of means. Although most theories/models developed have been examined through regression or structural equation modeling, Bayesian analysis offers more accurate data analysis results. To address this gap, the unified theory of acceptance and technology use in the context of e-learning via Facebook are re-examined in this study using Bayesian analysis. The data (S1 Data) were collected from 170 students enrolled in a business statistics course at University of Malaya, Malaysia, and tested with the maximum likelihood and Bayesian approaches. The difference between the two methods' results indicates that performance expectancy and hedonic motivation are the strongest factors influencing the intention to use e-learning via Facebook. The Bayesian estimation model exhibited better data fit than the maximum likelihood estimator model. The results of the Bayesian and maximum likelihood estimator approaches are compared and the reasons for the result discrepancy are deliberated.
Maximum-likelihood fitting of data dominated by Poisson statistical uncertainties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoneking, M.R.; Den Hartog, D.J.
1996-06-01
The fitting of data by χ²-minimization is valid only when the uncertainties in the data are normally distributed. When analyzing spectroscopic or particle counting data at very low signal level (e.g., a Thomson scattering diagnostic), the uncertainties are distributed with a Poisson distribution. The authors have developed a maximum-likelihood method for fitting data that correctly treats the Poisson statistical character of the uncertainties. This method maximizes the total probability that the observed data are drawn from the assumed fit function, using the Poisson probability function to determine the probability for each data point. The algorithm also returns uncertainty estimates for the fit parameters. They compare this method with a χ²-minimization routine applied to both simulated and real data. Differences in the returned fits are greater at low signal level (less than ≈20 counts per measurement). The maximum-likelihood method is found to be more accurate and robust, returning a narrower distribution of values for the fit parameters with fewer outliers.
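The core of the method, fitting by maximizing the Poisson likelihood rather than minimizing χ², can be sketched in a few lines of Python. This is a generic illustration with an assumed model and simulated counts, not the authors' code; the constant Σ ln(y!) term is dropped from the objective.

```python
import numpy as np
from scipy.optimize import minimize

def poisson_nll(params, x, counts, model):
    """Poisson negative log-likelihood; the ln(y!) term is constant
    in the parameters and is omitted."""
    f = model(x, *params)
    if np.any(f <= 0):
        return np.inf
    return np.sum(f - counts * np.log(f))

def gaussian_plus_bg(x, amp, mu, sigma, bg):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + bg

# Simulated low-signal spectrum, a few counts per channel.
rng = np.random.default_rng(1)
x = np.linspace(-5.0, 5.0, 60)
counts = rng.poisson(gaussian_plus_bg(x, 8.0, 0.0, 1.0, 1.0))

res = minimize(poisson_nll, x0=(5.0, 0.5, 1.5, 0.5),
               args=(x, counts, gaussian_plus_bg), method="Nelder-Mead")
print(res.x)  # Poisson-ML estimates of (amp, mu, sigma, bg)
```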
Land cover mapping after the tsunami event over Nanggroe Aceh Darussalam (NAD) province, Indonesia
NASA Astrophysics Data System (ADS)
Lim, H. S.; MatJafri, M. Z.; Abdullah, K.; Alias, A. N.; Mohd. Saleh, N.; Wong, C. J.; Surbakti, M. S.
2008-03-01
Remote sensing offers an important means of detecting and analyzing temporal changes occurring in our landscape. This research used remote sensing to quantify land use/land cover changes in the Nanggroe Aceh Darussalam (NAD) province, Indonesia on a regional scale. The objective of this paper is to assess the changes produced from the analysis of Landsat TM data. A Landsat TM image was used to develop a land cover classification map for 27 March 2005. Four supervised classification techniques (Maximum Likelihood, Minimum Distance-to-Mean, Parallelepiped and Parallelepiped with Maximum Likelihood Classifier Tiebreaker) were applied to the satellite image. Training sites and accuracy assessment were needed for the supervised classification techniques. The training sites were established using polygons based on the colour image. High detection accuracy (>80%) and overall Kappa (>0.80) were achieved by the Parallelepiped with Maximum Likelihood Classifier Tiebreaker classifier in this study. This preliminary study has produced a promising result, indicating that land cover mapping can be carried out using remote sensing classification of satellite digital imagery.
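For reference, the classical Maximum Likelihood classifier used here assigns each pixel to the class whose fitted multivariate Gaussian gives the highest likelihood. Below is a minimal, generic Python sketch of that rule; the class structure, band values, and training data are made up for illustration.

```python
import numpy as np

class GaussianMLClassifier:
    """Per-class multivariate Gaussian maximum-likelihood classifier,
    the standard 'Maximum Likelihood' rule for multispectral pixels."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.stats_ = {}
        for c in self.classes_:
            Xc = X[y == c]
            mu = Xc.mean(axis=0)
            cov = np.cov(Xc, rowvar=False)
            self.stats_[c] = (mu, np.linalg.inv(cov),
                              np.log(np.linalg.det(cov)))
        return self

    def predict(self, X):
        scores = []
        for c in self.classes_:
            mu, icov, logdet = self.stats_[c]
            d = X - mu
            # Gaussian log-likelihood up to an additive constant.
            scores.append(-0.5 * (np.einsum('ij,jk,ik->i', d, icov, d)
                                  + logdet))
        return self.classes_[np.argmax(scores, axis=0)]

# Toy usage with two synthetic 3-band classes.
rng = np.random.default_rng(12)
X1 = rng.normal([2.0, 2.0, 2.0], 0.5, size=(100, 3))  # e.g. vegetation
X2 = rng.normal([5.0, 4.0, 3.0], 0.7, size=(100, 3))  # e.g. bare soil
X = np.vstack([X1, X2])
y = np.array([0] * 100 + [1] * 100)
clf = GaussianMLClassifier().fit(X, y)
print((clf.predict(X) == y).mean())  # training accuracy
```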
Collinear Latent Variables in Multilevel Confirmatory Factor Analysis
van de Schoot, Rens; Hox, Joop
2014-01-01
Because variables may be correlated in the social and behavioral sciences, multicollinearity might be problematic. This study investigates the effect of collinearity manipulated at the within and between levels of a two-level confirmatory factor analysis, by means of Monte Carlo simulation. Furthermore, the influence on the convergence rate of the size of the intraclass correlation coefficient (ICC) and of the estimation method (maximum likelihood estimation with robust chi-squares and standard errors, or Bayesian estimation) is investigated. The other variables of interest were the rate of inadmissible solutions and the relative parameter and standard error bias at the between level. The results showed that inadmissible solutions were obtained when there was between-level collinearity and the estimation method was maximum likelihood. In the within-level multicollinearity condition, all of the solutions were admissible but the bias values were higher compared with the between-level collinearity condition. Bayesian estimation appeared to be robust in obtaining admissible parameters, but the relative bias was higher than for maximum likelihood estimation. Finally, as expected, high ICC produced less biased results compared to medium ICC conditions. PMID:29795827
DUMBO - A cosmic-ray astrophysics facility in Canada
NASA Astrophysics Data System (ADS)
Hanna, D.
1986-04-01
A deep-underground muon-bundle observatory (DUMBO) is proposed for construction at 700 m depth near Sudbury, Ontario, Canada. The DUMBO design calls for two parallel 3.6 x 21.6-m stacks of multiwire proportional chambers in adjacent mine tunnels (synthesizing a larger-area detector) and a 121-station surface EAS array with variable density to accommodate shower energies in the 100-TeV and 10-PeV ranges. The aims of DUMBO include determining the nuclear composition of cosmic rays, ultrahigh-energy gamma-ray astronomy, and characterizing the point sources of muons observed in recent proton-decay experiments; the physics of these processes and the detector capabilities they imply are discussed. Graphs, diagrams, and drawings are provided.
Fuzzy multinomial logistic regression analysis: A multi-objective programming approach
NASA Astrophysics Data System (ADS)
Abdalla, Hesham A.; El-Sayed, Amany A.; Hamed, Ramadan
2017-05-01
Parameter estimation for multinomial logistic regression is usually based on maximizing the likelihood function. For large, well-balanced datasets, maximum likelihood (ML) estimation is a satisfactory approach. Unfortunately, ML can fail completely, or at least produce poor results in terms of estimated probabilities and confidence intervals of parameters, especially for small datasets. In this study, a new approach based on fuzzy concepts is proposed to estimate the parameters of multinomial logistic regression. The study assumes that the parameters of multinomial logistic regression are fuzzy. Based on the extension principle stated by Zadeh and Bárdossy's proposition, a multi-objective programming approach is suggested to estimate these fuzzy parameters. A simulation study is used to evaluate the performance of the new approach versus the maximum likelihood (ML) approach. Results show that the new proposed model outperforms ML in cases of small datasets.
NASA Astrophysics Data System (ADS)
Love, J. J.; Rigler, E. J.; Pulkkinen, A. A.; Riley, P.
2015-12-01
An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to -Dst storm-time maxima for the years 1957-2012; bootstrap analysis is used to establish confidence limits on the forecasts. Both methods provide fits that are reasonably consistent with the data; both also provide fits that are superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts having tighter confidence intervals than those provided by weighted least-squares. From extrapolation of the maximum-likelihood fits: a magnetic storm with intensity exceeding that of the 1859 Carrington event, -Dst > 850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42, 2.41] times per century; a 100-yr magnetic storm is identified as having -Dst > 880 nT (greater than Carrington), with a wide 95% confidence interval of [490, 1187] nT. This work is partially motivated by United States National Science and Technology Council and Committee on Space Research and International Living with a Star priorities and strategic plans for the assessment and mitigation of space-weather hazards.
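A minimal sketch of the log-normal maximum-likelihood fit and the resulting occurrence-rate extrapolation (without the bootstrap confidence analysis) might look as follows in Python; the synthetic storm list and the 56-year observing window are assumptions for illustration only.

```python
import numpy as np
from scipy.stats import norm

def lognormal_mle(x):
    """Closed-form ML estimates (mu, sigma) for a log-normal sample."""
    logs = np.log(x)
    return logs.mean(), logs.std(ddof=0)

def exceedances_per_century(x, threshold, years):
    """Expected number of storms per century exceeding `threshold`,
    under the fitted log-normal occurrence model."""
    mu, sigma = lognormal_mle(x)
    tail = norm.sf((np.log(threshold) - mu) / sigma)  # P(-Dst > threshold)
    return 100.0 * len(x) / years * tail

# Toy -Dst storm maxima (nT) observed over an assumed 56-year interval.
rng = np.random.default_rng(2)
storms = np.exp(rng.normal(5.0, 0.6, size=300))
print(exceedances_per_century(storms, 850.0, 56.0))
```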
Cosmic space and Pauli exclusion principle in a system of M0-branes
NASA Astrophysics Data System (ADS)
Capozziello, Salvatore; Saridakis, Emmanuel N.; Bamba, Kazuharu; Sepehri, Alireza; Rahaman, Farook; Ali, Ahmed Farag; Pincak, Richard; Pradhan, Anirudh
An emergence of cosmic space has been suggested by Padmanabhan [Emergence and expansion of cosmic space as due to the quest for holographic equipartition, arXiv:hep-th/1206.4916], who proposed that the expansion of the universe originates from a difference between the number of degrees of freedom on a holographic surface and the number in the emerged bulk. A natural question that arises is how this proposal would explain the production of fermions and the emergence of the Pauli exclusion principle during the evolution of the universe. We try to address this issue in a system of M0-branes. In this model, there is a high symmetry and the system is composed of M0-branes to which only scalar fields are attached, representing scalar modes of the graviton. When M0-branes join each other and form M1-branes, this symmetry is broken and gauge fields are formed. These M1-branes interact with anti-M1-branes, and the force between them breaks a further symmetry, so that the lower and upper parts of these branes are no longer the same. Under these conditions, the gauge fields localized on the M1-branes and the scalars attached to them symmetrically decay to fermions with upper and lower spins, which attach anti-symmetrically to the upper and lower parts of the M1-branes. The curvature produced by the coupling of identical spins has the opposite sign to the curvature produced by non-identical spins, which leads to an attractive force between anti-parallel spins and a repelling force between parallel spins, and hence to an emergence of the Pauli exclusion principle. As M1-branes approach each other, the difference between the curvatures of parallel spins and those of anti-parallel spins increases, which leads to an inequality between the number of degrees of freedom on the surface and the number in the emerged bulk, and hence to the occurrence of cosmic expansion. As M1-branes approach each other, the square of the energy of the system becomes negative and tachyonic states arise. To remove these states, the M1-branes compactify, the sign of gravity changes, and anti-gravity emerges, which leads to the branes moving away from each other. By joining M1-branes, M3-branes are produced, which are similar to an initial system that oscillates between compacting and opening branches. Our universe is placed on one of these M3-branes, and by changing the difference between the amount of coupling between identical and non-identical spins, it contracts or expands.
Ultra heavy cosmic ray experiment (A0178)
NASA Technical Reports Server (NTRS)
Thompson, A.; Osullivan, D.; Bosch, J.; Keegan, R.; Wenzel, K. P.; Jansen, F.; Domingo, C.
1992-01-01
The Ultra Heavy Cosmic Ray Experiment (UHCRE) is based on a modular array of 192 side-viewing solid state nuclear track detector stacks. These stacks were mounted in sets of four in 48 pressure vessels using 16 peripheral LDEF trays. The geometry factor for high energy cosmic ray nuclei, allowing for Earth shadowing, was 30 sq m sr, giving a total exposure factor of 170 sq m sr y at an orbital inclination of 28.4 degs. Scanning results indicate that about 3000 cosmic ray nuclei in the charge region with Z greater than 65 were collected. This sample is more than ten times the current world data set in the field (taken to be the data set from the HEAO-3 mission plus that from the Ariel-6 mission) and is sufficient to provide the world's first statistically significant sample of actinide cosmic rays. Results are presented including a sample of ultra heavy cosmic ray nuclei, analysis of pre-flight and post-flight calibration events and details of track response in the context of detector temperature history. The integrated effect of all temperature- and age-related latent track variations causes a maximum charge shift of ±0.8e for uranium and ±0.6e for the platinum-lead group. Astrophysical implications of the UHCRE charge spectrum are discussed.
NASA Technical Reports Server (NTRS)
Clark, R. T.; Mccallister, R. D.
1982-01-01
The particular coding option identified as providing the best level of coding gain performance in an LSI-efficient implementation was the optimal constraint length five, rate one-half convolutional code. To determine the specific set of design parameters which optimally matches this decoder to the LSI constraints, a breadboard MCD (maximum-likelihood convolutional decoder) was fabricated and used to generate detailed performance trade-off data. The extensive performance testing data gathered during this design tradeoff study are summarized, and the functional and physical MCD chip characteristics are presented.
Gyro-based Maximum-Likelihood Thruster Fault Detection and Identification
NASA Technical Reports Server (NTRS)
Wilson, Edward; Lages, Chris; Mah, Robert; Clancy, Daniel (Technical Monitor)
2002-01-01
When building smaller, less expensive spacecraft, there is a need for intelligent fault tolerance vs. increased hardware redundancy. If fault tolerance can be achieved using existing navigation sensors, cost and vehicle complexity can be reduced. A maximum likelihood-based approach to thruster fault detection and identification (FDI) for spacecraft is developed here and applied in simulation to the X-38 space vehicle. The system uses only gyro signals to detect and identify hard, abrupt, single and multiple jet on- and off-failures. Faults are detected within one second and identified within one to five seconds.
Maximum likelihood estimation for life distributions with competing failure modes
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1979-01-01
Systems which are placed on test at time zero, function for a period, and die at some random time were studied. Failure may be due to one of several causes or modes. The parameters of the life distribution may depend upon the levels of various stress variables to which the item is subject. Maximum likelihood estimation methods are discussed. Specific methods are reported for the smallest extreme-value distributions of life. Monte Carlo results indicate the methods to be promising. Under appropriate conditions, the location parameters are nearly unbiased, the scale parameter is slightly biased, and the asymptotic covariances are rapidly approached.
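A sketch of the kind of likelihood involved: for a given failure mode, lifetimes ended by competing modes enter as right-censored observations, and the smallest-extreme-value parameters are found by maximizing the censored log-likelihood. The Python below is a generic illustration with simulated data, not the report's code.

```python
import numpy as np
from scipy.optimize import minimize

def sev_neg_log_like(params, t, failed):
    """Censored log-likelihood for the smallest extreme-value distribution.

    failed[i] = 1 if unit i failed by the mode being fitted; 0 if it
    failed by a competing mode (treated as right-censored at that time).
    """
    loc, log_scale = params
    scale = np.exp(log_scale)              # keeps the scale positive
    z = (t - loc) / scale
    # Failures contribute log f(t); censored units contribute log S(t).
    ll = np.where(failed > 0, z - np.exp(z) - np.log(scale), -np.exp(z))
    return -ll.sum()

rng = np.random.default_rng(3)
u = rng.uniform(size=200)
t_mode1 = 100.0 + 10.0 * np.log(-np.log(1.0 - u))   # mode-1 lifetimes
t_other = rng.uniform(60.0, 120.0, size=200)        # competing-mode failures
t_obs = np.minimum(t_mode1, t_other)
failed = (t_mode1 <= t_other).astype(float)

res = minimize(sev_neg_log_like, x0=(90.0, np.log(5.0)),
               args=(t_obs, failed), method="Nelder-Mead")
print(res.x[0], np.exp(res.x[1]))  # location and scale estimates
```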
Gyre and gimble: a maximum-likelihood replacement for Patterson correlation refinement.
McCoy, Airlie J; Oeffner, Robert D; Millán, Claudia; Sammito, Massimo; Usón, Isabel; Read, Randy J
2018-04-01
Descriptions are given of the maximum-likelihood gyre method implemented in Phaser for optimizing the orientation and relative position of rigid-body fragments of a model after the orientation of the model has been identified, but before the model has been positioned in the unit cell, and also the related gimble method for the refinement of rigid-body fragments of the model after positioning. Gyre refinement helps to lower the root-mean-square atomic displacements between model and target molecular-replacement solutions for the test case of antibody Fab(26-10) and improves structure solution with ARCIMBOLDO_SHREDDER.
Richards, V. M.; Dai, W.
2014-01-01
A MATLAB toolbox for the efficient estimation of the threshold, slope, and lapse rate of the psychometric function is described. The toolbox enables the efficient implementation of the updated maximum-likelihood (UML) procedure. The toolbox uses an object-oriented architecture for organizing the experimental variables and computational algorithms, which provides experimenters with flexibility in experimental design and data management. Descriptions of the UML procedure and the UML Toolbox are provided, followed by toolbox use examples. Finally, guidelines and recommendations of parameter configurations are given. PMID:24671826
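The essence of the UML procedure, maintaining a likelihood surface over (threshold, slope, lapse) and updating it after every trial, can be sketched as below. This simplified Python illustration places each trial at the running maximum-likelihood threshold rather than at the toolbox's sweet points, and all names, grids, and settings are assumptions rather than the toolbox's defaults.

```python
import numpy as np

def psychometric(x, alpha, beta, lam, gamma=0.5):
    """Logistic psychometric function with guess rate gamma and lapse lam."""
    return gamma + (1 - gamma - lam) / (1 + np.exp(-beta * (x - alpha)))

# Parameter grid and a flat initial log-likelihood surface.
alphas = np.linspace(-10, 10, 61)        # threshold
betas = np.logspace(-1, 1, 21)           # slope
lams = np.array([0.0, 0.02, 0.04])       # lapse rate
A, B, L = np.meshgrid(alphas, betas, lams, indexing="ij")
loglik = np.zeros_like(A)

rng = np.random.default_rng(4)
truth = dict(alpha=2.0, beta=1.5, lam=0.02)   # simulated listener
x = 0.0                                       # first stimulus
for trial in range(200):
    r = rng.uniform() < psychometric(x, **truth)   # simulated response
    p = psychometric(x, A, B, L)
    loglik += np.log(p if r else 1 - p)            # Bayesian-style update
    i, j, k = np.unravel_index(np.argmax(loglik), loglik.shape)
    x = A[i, j, k]                                 # track the ML threshold

print("ML estimates:", A[i, j, k], B[i, j, k], L[i, j, k])
```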
Khairuzzaman, Md; Zhang, Chao; Igarashi, Koji; Katoh, Kazuhiro; Kikuchi, Kazuro
2010-03-01
We describe a successful introduction of maximum-likelihood-sequence estimation (MLSE) into digital coherent receivers together with finite-impulse response (FIR) filters in order to equalize both linear and nonlinear fiber impairments. The MLSE equalizer based on the Viterbi algorithm is implemented in the offline digital signal processing (DSP) core. We transmit 20-Gbit/s quadrature phase-shift keying (QPSK) signals through a 200-km-long standard single-mode fiber. The bit-error rate performance shows that the MLSE equalizer outperforms the conventional adaptive FIR filter, especially when nonlinear impairments are predominant.
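To make the MLSE idea concrete, here is a toy Viterbi equalizer for BPSK through a known two-tap channel: the trellis state is the previous symbol and the branch metric is the squared Euclidean distance to the noiseless channel output. This is a didactic sketch, not the receiver described above (which operates on QPSK with a much richer impairment model).

```python
import numpy as np

def mlse_viterbi(r, h):
    """Viterbi MLSE for BPSK through a 2-tap FIR channel h = [h0, h1]."""
    symbols = np.array([1.0, -1.0])
    n = len(r)
    cost = np.zeros(2)                     # path metric per state
    back = np.zeros((n, 2), dtype=int)     # backpointers
    for t in range(n):
        new = np.full(2, np.inf)
        for s_new, a in enumerate(symbols):       # current symbol
            for s_old, b in enumerate(symbols):   # previous symbol
                m = cost[s_old] + (r[t] - (h[0] * a + h[1] * b)) ** 2
                if m < new[s_new]:
                    new[s_new] = m
                    back[t, s_new] = s_old
        cost = new
    # Trace back the maximum-likelihood symbol sequence.
    s = int(np.argmin(cost))
    out = np.empty(n)
    for t in range(n - 1, -1, -1):
        out[t] = symbols[s]
        s = back[t, s]
    return out

rng = np.random.default_rng(5)
bits = rng.choice([1.0, -1.0], size=200)
h = np.array([1.0, 0.5])                       # assumed channel
r = np.convolve(bits, h)[:200] + 0.3 * rng.normal(size=200)
detected = mlse_viterbi(r, h)
print("symbol errors:", int(np.sum(detected != bits)))
```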
F-8C adaptive flight control extensions. [for maximum likelihood estimation
NASA Technical Reports Server (NTRS)
Stein, G.; Hartmann, G. L.
1977-01-01
An adaptive concept which combines gain-scheduled control laws with explicit maximum likelihood estimation (MLE) identification to provide the scheduling values is described. The MLE algorithm was improved by incorporating attitude data, estimating gust statistics for setting filter gains, and improving parameter tracking during changing flight conditions. A lateral MLE algorithm was designed to improve true air speed and angle of attack estimates during lateral maneuvers. Relationships between the pitch axis sensors inherent in the MLE design were examined and used for sensor failure detection. Design details and simulation performance are presented for each of the three areas investigated.
NASA Technical Reports Server (NTRS)
Battin, R. H.; Croopnick, S. R.; Edwards, J. A.
1977-01-01
The formulation of a recursive maximum likelihood navigation system employing reference position and velocity vectors as state variables is presented. Convenient forms of the required variational equations of motion are developed together with an explicit form of the associated state transition matrix needed to refer measurement data from the measurement time to the epoch time. Computational advantages accrue from this design in that the usual forward extrapolation of the covariance matrix of estimation errors can be avoided without incurring unacceptable system errors. Simulation data for earth orbiting satellites are provided to substantiate this assertion.
A 3D approximate maximum likelihood localization solver
DOE Office of Scientific and Technical Information (OSTI.GOV)
2016-09-23
A robust three-dimensional solver was needed to accurately and efficiently estimate the time sequence of locations of fish tagged with acoustic transmitters and of vocalizing marine mammals, in sufficient detail to assess the function of dam-passage design alternatives and to support Marine Renewable Energy. An approximate maximum likelihood solver was developed using measurements of time difference of arrival from all hydrophones in receiving arrays on which a transmission was detected. Field experiments demonstrated that the developed solver performed significantly better in tracking efficiency and accuracy than other solvers described in the literature.
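A minimal version of such a solver: with independent Gaussian timing errors, maximizing the likelihood is equivalent to least-squares on the time-difference-of-arrival residuals, which a Gauss-Newton solver handles directly. The array geometry, sound speed, and noise level below are illustrative assumptions, not the deployed configuration.

```python
import numpy as np
from scipy.optimize import least_squares

SOUND_SPEED = 1500.0  # m/s in water (assumed)

def tdoa_residuals(p, hydrophones, tdoas):
    """Residuals between measured and predicted time differences of
    arrival, taking hydrophone 0 as the reference."""
    d = np.linalg.norm(hydrophones - p, axis=1)
    predicted = (d[1:] - d[0]) / SOUND_SPEED
    return predicted - tdoas

# Square array of four hydrophones plus one deep reference (x, y, z in m).
hyd = np.array([[0, 0, 0], [100, 0, 0], [0, 100, 0], [100, 100, 0],
                [50, 50, -30]], dtype=float)
source = np.array([60.0, 20.0, -10.0])
true_t = np.linalg.norm(hyd - source, axis=1) / SOUND_SPEED
tdoas = (true_t[1:] - true_t[0]) \
    + 1e-6 * np.random.default_rng(6).normal(size=4)

# Gauss-Newton fit; with i.i.d. Gaussian timing errors this coincides
# with the maximum-likelihood position estimate.
fit = least_squares(tdoa_residuals, x0=np.array([50.0, 50.0, -5.0]),
                    args=(hyd, tdoas))
print(fit.x)  # estimated source position
```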
Eisenhauer, Philipp; Heckman, James J.; Mosso, Stefano
2015-01-01
We compare the performance of maximum likelihood (ML) and simulated method of moments (SMM) estimation for dynamic discrete choice models. We construct and estimate a simplified dynamic structural model of education that captures some basic features of educational choices in the United States in the 1980s and early 1990s. We use estimates from our model to simulate a synthetic dataset and assess the ability of ML and SMM to recover the model parameters on this sample. We investigate the performance of alternative tuning parameters for SMM. PMID:26494926
Spacecraft shielding for a Mars mission
NASA Astrophysics Data System (ADS)
O'Brien, K.
Calculations of the effective radiation dose due to cosmic rays in the interplanetary medium between Earth and Mars show that, as in the atmosphere above the Pfotzer maximum, the dose rate increases with increasing wall thickness. An unshielded space crew member would receive almost 70 rem (0.70 Sv) a year. The effect of a typically proposed composite spacecraft hull of aluminum and polyethylene would be to increase the dose rate by a few percent. However, 100 g/cm² of almost any light material would more than double the cosmic radiation exposure of the crew.
NASA Technical Reports Server (NTRS)
Puget, J. L.; Stecker, F. W.
1974-01-01
Recent data from SAS-2 on the galactic gamma ray line flux as a function of longitude reveal a broad maximum in the gamma ray intensity in the region |l| ≲ 30°. These data imply that the low energy galactic cosmic ray flux varies with galactocentric distance and is about an order of magnitude higher than the local value in a toroidal region between 4 and 5 kpc from the galactic center. This enhancement can be plausibly accounted for by first order Fermi acceleration, compression and trapping of cosmic rays consistent with present ideas of galactic dynamics and galactic structure theory. Calculations indicate that cosmic rays in the 4 to 5 kpc region are trapped and accelerated over a mean time of the order of a few million years or about 2 to 4 times the assumed trapping time in the solar region of the galaxy.
Modulation of galactic and anomalous cosmic rays in the inner heliosphere
NASA Astrophysics Data System (ADS)
Heber, B.
Our knowledge of how galactic and anomalous cosmic rays are modulated in the inner heliosphere has been dramatically enlarged by measurements provided by several missions launched in the past ten years. The current paradigm of singly charged anomalous cosmic rays has been confirmed by recent measurements from the SAMPEX and ACE satellites. Ulysses explored the inner heliosphere at polar regions during the last solar minimum period and is heading again to high heliographic latitudes during the time of the conference in July 2000. The Sun approaches maximum activity while the spacecraft is at high heliographic latitudes, giving us for the first time the possibility to explore the modulation of cosmic rays in the inner three-dimensional heliosphere during such conditions. Ulysses electron measurements, in addition to the 1 AU ICE electron and IMP helium measurements, allow us to investigate charge-sign dependent modulation over a full 22-year solar magnetic cycle. Implications of these observations for our understanding of different modulation processes in the inner three-dimensional heliosphere are presented.
The cosmic radiation in the heliosphere at successive solar minima
NASA Technical Reports Server (NTRS)
Mcdonald, Frank B.; Moraal, Harm; Reinecke, J. P. L.; Lal, Nand; Mcguire, Robert E.
1992-01-01
Cosmic ray observations at 1 AU are compared for the last three solar minimum periods along with the 1977/1978 and 1987 Pioneer 10 and Voyager 1 and 2 data from the outer heliosphere. There is good agreement between the 1965 and 1987 Galactic cosmic ray H and He spectra at 1 AU. Significant and complex differences are found between the 1977/1978 and 1987 measurements of the Galactic and anomalous cosmic ray components at 1 and 15 AU. In the outer heliosphere there are negative latitudinal gradients that reach their maximum magnitude when the inclination of the outer heliosphere current sheet is at a minimum. The radial gradients decrease with heliocentric distance as about 1/r^0.7 and do not differ significantly at the successive solar minima. The measured radial and latitudinal gradients are used to estimate the particle transport parameters in the outer heliosphere. Using the local interstellar He spectrum of Webber et al. (1987), it is estimated that the modulation boundary is of the order of 160 AU.
The Equivalence of Two Methods of Parameter Estimation for the Rasch Model.
ERIC Educational Resources Information Center
Blackwood, Larry G.; Bradley, Edwin L.
1989-01-01
Two methods of estimating parameters in the Rasch model are compared. The equivalence of likelihood estimations from the model of G. J. Mellenbergh and P. Vijn (1981) and from usual unconditional maximum likelihood (UML) estimation is demonstrated. Mellenbergh and Vijn's model is a convenient method of calculating UML estimates. (SLD)
Using the β-binomial distribution to characterize forest health
S.J. Zarnoch; R.L. Anderson; R.M. Sheffield
1995-01-01
The β-binomial distribution is suggested as a model for describing and analyzing the dichotomous data obtained from programs monitoring the health of forests in the United States. Maximum likelihood estimation of the parameters is given as well as asymptotic likelihood ratio tests. The procedure is illustrated with data on dogwood anthracnose infection (caused...
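A generic sketch of the maximum-likelihood step in Python: the beta-binomial log-likelihood is written with log-beta and log-gamma functions and maximized numerically. The simulated plot counts are illustrative, not the forest-health survey data.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import betaln, gammaln

def betabin_neg_log_like(params, k, n):
    """Negative log-likelihood of beta-binomial counts k out of n."""
    a, b = np.exp(params)                 # optimize on the log scale
    ll = (gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)
          + betaln(k + a, n - k + b) - betaln(a, b))
    return -ll.sum()

# Toy data: infected stems out of stems examined on each plot, with
# plot-to-plot variation in the infection probability.
rng = np.random.default_rng(7)
n = rng.integers(20, 40, size=60)
p = rng.beta(2.0, 6.0, size=60)
k = rng.binomial(n, p)

res = minimize(betabin_neg_log_like, x0=np.log([1.0, 1.0]),
               args=(k, n), method="Nelder-Mead")
print(np.exp(res.x))  # ML estimates of (alpha, beta)
```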
Power and Sample Size Calculations for Logistic Regression Tests for Differential Item Functioning
ERIC Educational Resources Information Center
Li, Zhushan
2014-01-01
Logistic regression is a popular method for detecting uniform and nonuniform differential item functioning (DIF) effects. Theoretical formulas for the power and sample size calculations are derived for likelihood ratio tests and Wald tests based on the asymptotic distribution of the maximum likelihood estimators for the logistic regression model.…
A Note on Three Statistical Tests in the Logistic Regression DIF Procedure
ERIC Educational Resources Information Center
Paek, Insu
2012-01-01
Although logistic regression became one of the well-known methods in detecting differential item functioning (DIF), its three statistical tests, the Wald, likelihood ratio (LR), and score tests, which are readily available under the maximum likelihood, do not seem to be consistently distinguished in DIF literature. This paper provides a clarifying…
Contributions to the Underlying Bivariate Normal Method for Factor Analyzing Ordinal Data
ERIC Educational Resources Information Center
Xi, Nuo; Browne, Michael W.
2014-01-01
A promising "underlying bivariate normal" approach was proposed by Jöreskog and Moustaki for use in the factor analysis of ordinal data. This was a limited information approach that involved the maximization of a composite likelihood function. Its advantage over full-information maximum likelihood was that very much less computation was…
Investigating the Impact of Uncertainty about Item Parameters on Ability Estimation
ERIC Educational Resources Information Center
Zhang, Jinming; Xie, Minge; Song, Xiaolan; Lu, Ting
2011-01-01
Asymptotic expansions of the maximum likelihood estimator (MLE) and weighted likelihood estimator (WLE) of an examinee's ability are derived while item parameter estimators are treated as covariates measured with error. The asymptotic formulae present the amount of bias of the ability estimators due to the uncertainty of item parameter estimators.…
Estimation of Complex Generalized Linear Mixed Models for Measurement and Growth
ERIC Educational Resources Information Center
Jeon, Minjeong
2012-01-01
Maximum likelihood (ML) estimation of generalized linear mixed models (GLMMs) is technically challenging because of the intractable likelihoods that involve high dimensional integrations over random effects. The problem is magnified when the random effects have a crossed design and thus the data cannot be reduced to small independent clusters. A…
A time series intervention analysis (TSIA) of dendrochronological data to infer the tree growth-climate-disturbance relations and forest disturbance history is described. Maximum likelihood is used to estimate the parameters of a structural time series model with components for ...
Reconstruction of interaction rate in holographic dark energy
NASA Astrophysics Data System (ADS)
Mukherjee, Ankan
2016-11-01
The present work is based on the holographic dark energy model with the Hubble horizon as the infrared cut-off. The interaction rate between dark energy and dark matter has been reconstructed for three different parameterizations of the deceleration parameter. Observational constraints on the model parameters have been obtained by maximum likelihood analysis using the observational Hubble parameter data (OHD), type Ia supernova data (SNe), baryon acoustic oscillation data (BAO) and the distance prior of the cosmic microwave background (CMB), namely the CMB shift parameter data (CMBShift). The interaction rate obtained in the present work remains always positive and increases with expansion. It is very similar to the result obtained by Sen and Pavon [1], where the interaction rate was reconstructed from a parametrization of the dark energy equation of state. Tighter constraints on the interaction rate are obtained in the present work, as it is based on larger data sets. The nature of the dark energy equation of state parameter has also been studied for the present models. Though the reconstruction is done from different parametrizations, the overall nature of the interaction rate is very similar in all the cases. Different information criteria and the Bayesian evidence, which have been invoked in the context of model selection, show that these models are in close proximity to each other.
Radio Detection of Cosmic Rays-Achievements and Future Potential
NASA Astrophysics Data System (ADS)
Huege, Tim
When modern efforts for radio detection of cosmic rays started about a decade ago, hopes were high but the true potential was unknown. Since then, we have achieved a detailed understanding of the radio emission physics and have consequently succeeded in developing sophisticated detection schemes and analysis approaches. In particular, we have demonstrated that the important air-shower parameters arrival direction, particle energy and depth of shower maximum can be reconstructed reliably from radio measurements, with a precision that is comparable with that of other detection techniques. At the same time, limitations inherent to the radio-emission mechanisms have become apparent. In this article, I shortly review the capabilities of radio detection in the very high-frequency band, and discuss the potential for future application in existing and new experiments for cosmic-ray detection.
Indications of Proton-Dominated Cosmic-Ray Composition above 1.6 EeV
NASA Astrophysics Data System (ADS)
Abbasi, R. U.; Abu-Zayyad, T.; Al-Seady, M.; Allen, M.; Amman, J. F.; Anderson, R. J.; Archbold, G.; Belov, K.; Belz, J. W.; Bergman, D. R.; Blake, S. A.; Brusova, O. A.; Burt, G. W.; Cannon, C.; Cao, Z.; Deng, W.; Fedorova, Y.; Finley, C. B.; Gray, R. C.; Hanlon, W. F.; Hoffman, C. M.; Holzscheiter, M. H.; Hughes, G.; Hüntemeyer, P.; Jones, B. F.; Jui, C. C. H.; Kim, K.; Kirn, M. A.; Loh, E. C.; Liu, J.; Lundquist, J. P.; Maestas, M. M.; Manago, N.; Marek, L. J.; Martens, K.; Matthews, J. A. J.; Matthews, J. N.; Moore, S. A.; O'Neill, A.; Painter, C. A.; Perera, L.; Reil, K.; Riehle, R.; Roberts, M.; Rodriguez, D.; Sasaki, N.; Schnetzer, S. R.; Scott, L. M.; Sinnis, G.; Smith, J. D.; Sokolsky, P.; Song, C.; Springer, R. W.; Stokes, B. T.; Stratton, S.; Thomas, S. B.; Thomas, J. R.; Thomson, G. B.; Tupa, D.; Zech, A.; Zhang, X.
2010-04-01
We report studies of ultrahigh-energy cosmic-ray composition via analysis of the depth of air shower maximum (Xmax), for air shower events collected by the High-Resolution Fly's Eye (HiRes) observatory. The HiRes data are consistent with a constant elongation rate d⟨Xmax⟩/d[log(E)] of 47.9±6.0(stat)±3.2(syst) g/cm²/decade for energies between 1.6 and 63 EeV, and are consistent with a predominantly protonic composition of cosmic rays when interpreted via the QGSJET01 and QGSJET-II high-energy hadronic interaction models. These measurements constrain models in which the galactic-to-extragalactic transition is the cause of the energy spectrum ankle at 4×10¹⁸ eV.
Cosmic ray spectrum and composition from three years of IceTop and IceCube
NASA Astrophysics Data System (ADS)
Rawlins, K.;
2016-05-01
IceTop is the surface component of the IceCube Observatory, composed of frozen water tanks at the top of IceCube’s strings. Data from this detector can be analyzed in different ways with the goal of measuring cosmic ray spectrum and composition. The shower size S125 from IceTop alone can be used as a proxy for primary energy, and unfolded into an all-particle spectrum. In addition, S125 from the surface can be combined with high-energy muon energy loss information from the deep IceCube detector for those air showers which pass through both. Using these coincident events in a complementary analysis, both the spectrum and mass composition of primary cosmic rays can be extracted in parallel using a neural network. Both of these analyses have been performed on three years of IceTop and IceCube data. Both all-particle spectra as well as individual spectra for elemental groups are presented.
Cosmic variance of the galaxy cluster weak lensing signal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gruen, D.; Seitz, S.; Becker, M. R.
2015-04-13
Intrinsic variations of the projected density profiles of clusters of galaxies at fixed mass are a source of uncertainty for cluster weak lensing. We present a semi-analytical model to account for this effect, based on a combination of variations in halo concentration, ellipticity and orientation, and the presence of correlated haloes. We calibrate the parameters of our model at the 10 per cent level to match the empirical cosmic variance of cluster profiles at M_200m ≈ 10¹⁴…10¹⁵ h⁻¹ M_⊙, z = 0.25…0.5 in a cosmological simulation. We show that weak lensing measurements of clusters significantly underestimate mass uncertainties if intrinsic profile variations are ignored, and that our model can be used to provide correct mass likelihoods. Effects on the achievable accuracy of weak lensing cluster mass measurements are particularly strong for the most massive clusters and deep observations (with ≈20 per cent uncertainty from cosmic variance alone at M_200m ≈ 10¹⁵ h⁻¹ M_⊙ and z = 0.25), but significant also under typical ground-based conditions. We show that neglecting intrinsic profile variations leads to biases in the mass-observable relation constrained with weak lensing, both for intrinsic scatter and overall scale (the latter at the 15 per cent level). Furthermore, these biases are in excess of the statistical errors of upcoming surveys and can be avoided if the cosmic variance of cluster profiles is accounted for.
An Adaptive Kalman Filter using a Simple Residual Tuning Method
NASA Technical Reports Server (NTRS)
Harman, Richard R.
1999-01-01
One difficulty in using Kalman filters in real world situations is the selection of the correct process noise, measurement noise, and initial state estimate and covariance. These parameters are commonly referred to as tuning parameters. Multiple methods have been developed to estimate these parameters. Most of those methods such as maximum likelihood, subspace, and observer Kalman Identification require extensive offline processing and are not suitable for real time processing. One technique, which is suitable for real time processing, is the residual tuning method. Any mismodeling of the filter tuning parameters will result in a non-white sequence for the filter measurement residuals. The residual tuning technique uses this information to estimate corrections to those tuning parameters. The actual implementation results in a set of sequential equations that run in parallel with the Kalman filter. Equations for the estimation of the measurement noise have also been developed. These algorithms are used to estimate the process noise and measurement noise for the Wide Field Infrared Explorer star tracker and gyro.
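A minimal scalar sketch of residual (innovation-based) tuning: the filter runs normally while the sample covariance of recent innovations is used to re-estimate the measurement noise R in parallel. This simplified Python illustration assumes a random-walk state model and adapts only R; it is not the flight implementation.

```python
import numpy as np

def adaptive_kalman(z, q=1e-4, r=1.0, window=30):
    """Scalar random-walk Kalman filter with residual tuning of R.

    The innovation variance satisfies E[nu^2] = P_prior + R, so a
    windowed sample covariance of the innovations gives an estimate
    of R once the predicted covariance is subtracted off.
    """
    x, p = 0.0, 1.0
    innovations, estimates = [], []
    for zk in z:
        p = p + q                          # predicted covariance (F = 1)
        nu = zk - x                        # innovation (H = 1)
        innovations.append(nu)
        if len(innovations) >= window:
            c_hat = np.mean(np.square(innovations[-window:]))
            r = max(c_hat - p, 1e-8)       # residual-tuning update of R
        k = p / (p + r)
        x = x + k * nu
        p = (1.0 - k) * p
        estimates.append(x)
    return np.array(estimates), r

rng = np.random.default_rng(8)
truth = np.cumsum(0.01 * rng.normal(size=500))   # slowly drifting state
z = truth + 0.5 * rng.normal(size=500)           # true R = 0.25
est, r_hat = adaptive_kalman(z)
print("estimated R:", r_hat)
```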
On alternative q-Weibull and q-extreme value distributions: Properties and applications
NASA Astrophysics Data System (ADS)
Zhang, Fode; Ng, Hon Keung Tony; Shi, Yimin
2018-01-01
Tsallis statistics and Tsallis distributions have been attracting a significant amount of research work in recent years. Importantly, Tsallis statistics and q-distributions have been applied in different disciplines. Yet a relationship between some existing q-Weibull distributions and q-extreme value distributions, parallel to the well-established relationship between the conventional Weibull and extreme value distributions through a logarithmic transformation, had not been established. In this paper, we propose an alternative q-Weibull distribution that leads to a q-extreme value distribution via the q-logarithm transformation. Some important properties of the proposed q-Weibull and q-extreme value distributions are studied. Maximum likelihood and least squares estimation methods are used to estimate the parameters of the q-Weibull distribution, and their performances are investigated through a Monte Carlo simulation study. The methodologies and the usefulness of the proposed distributions are illustrated by fitting the 2014 traffic fatalities data from The National Highway Traffic Safety Administration.
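For orientation, the sketch below fits the commonly used q-Weibull density f(x) = (2-q)(β/η)(x/η)^(β-1) e_q(-(x/η)^β), with e_q the Tsallis q-exponential, by numerical maximum likelihood. Note the paper proposes an alternative form, so this is a generic illustration under that standard density rather than the authors' distribution, and the data are simulated.

```python
import numpy as np
from scipy.optimize import minimize

def q_exp(u, q):
    """Tsallis q-exponential e_q(u); reduces to exp(u) as q -> 1.
    Expects an array argument."""
    if abs(q - 1.0) < 1e-9:
        return np.exp(u)
    base = 1.0 + (1.0 - q) * u
    out = np.zeros_like(np.asarray(u, dtype=float))
    pos = base > 0
    out[pos] = base[pos] ** (1.0 / (1.0 - q))
    return out

def q_weibull_neg_log_like(params, x):
    q, beta, eta = params
    if not (q < 2.0 and beta > 0.0 and eta > 0.0):
        return np.inf
    z = (x / eta) ** beta
    f = (2.0 - q) * (beta / eta) * (x / eta) ** (beta - 1.0) * q_exp(-z, q)
    if np.any(f <= 0):
        return np.inf
    return -np.log(f).sum()

rng = np.random.default_rng(9)
x = rng.weibull(1.5, size=500) * 2.0   # ordinary Weibull data (q -> 1)
res = minimize(q_weibull_neg_log_like, x0=(1.1, 1.0, 1.0),
               args=(x,), method="Nelder-Mead")
print(res.x)  # (q, beta, eta); q near 1 recovers the classical Weibull
```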
Kumar, Sudhir; Stecher, Glen; Peterson, Daniel; Tamura, Koichiro
2012-10-15
There is a growing need in the research community to apply the molecular evolutionary genetics analysis (MEGA) software tool for batch processing a large number of datasets and to integrate it into analysis workflows. Therefore, we now make available the computing core of the MEGA software as a stand-alone executable (MEGA-CC), along with an analysis prototyper (MEGA-Proto). MEGA-CC provides users with access to all the computational analyses available through MEGA's graphical user interface version. This includes methods for multiple sequence alignment, substitution model selection, evolutionary distance estimation, phylogeny inference, substitution rate and pattern estimation, tests of natural selection and ancestral sequence inference. Additionally, we have upgraded the source code for phylogenetic analysis using the maximum likelihood methods for parallel execution on multiple processors and cores. Here, we describe MEGA-CC and outline the steps for using MEGA-CC in tandem with MEGA-Proto for iterative and automated data analysis. http://www.megasoftware.net/.
Method for positron emission mammography image reconstruction
Smith, Mark Frederick
2004-10-12
An image reconstruction method comprising accepting coincidence data from either a data file or in real time from a pair of detector heads, culling event data that is outside a desired energy range, optionally saving the desired data for each detector position or for each pair of detector pixels on the two detector heads, and then reconstructing the image either by backprojection image reconstruction or by iterative image reconstruction. In the backprojection image reconstruction mode, rays are traced between centers of lines of response (LORs), counts are then either allocated by nearest pixel interpolation or allocated by an overlap method, and then corrected for geometric effects and attenuation, and the data file updated. If the iterative image reconstruction option is selected, one implementation is to compute a grid Siddon ray tracing, and to perform maximum likelihood expectation maximization (MLEM) computed by either: a) tracing parallel rays between subpixels on opposite detector heads; or b) tracing rays between randomized endpoint locations on opposite detector heads.
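The iterative option rests on the standard MLEM multiplicative update x_j ← x_j / Σ_i A_ij · Σ_i A_ij y_i / (Ax)_i. The Python sketch below uses a tiny random system matrix as a stand-in for the ray-traced detector geometry; all sizes and values are illustrative, not taken from the patent.

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Maximum-likelihood expectation maximization for emission tomography.

    A : (n_lor, n_pix) system matrix; A[i, j] is the probability that a
        decay in pixel j is detected along line of response i.
    y : (n_lor,) measured coincidence counts.
    """
    x = np.ones(A.shape[1])               # flat initial image
    sens = A.sum(axis=0)                  # sensitivity image
    for _ in range(n_iter):
        proj = A @ x                      # forward projection
        proj = np.where(proj > 0, proj, 1e-12)
        x *= (A.T @ (y / proj)) / np.where(sens > 0, sens, 1e-12)
    return x

# Tiny toy problem: 4-pixel image, 6 random lines of response.
rng = np.random.default_rng(10)
A = rng.uniform(0.0, 1.0, size=(6, 4))
x_true = np.array([0.0, 4.0, 1.0, 2.0])
y = rng.poisson(A @ x_true)
print(mlem(A, y, n_iter=200))  # approximate emission image
```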
A Maximum-Likelihood Approach to Force-Field Calibration.
Zaborowski, Bartłomiej; Jagieła, Dawid; Czaplewski, Cezary; Hałabis, Anna; Lewandowska, Agnieszka; Żmudzińska, Wioletta; Ołdziej, Stanisław; Karczyńska, Agnieszka; Omieczynski, Christian; Wirecki, Tomasz; Liwo, Adam
2015-09-28
A new approach to the calibration of force fields is proposed, in which the force-field parameters are obtained by maximum-likelihood fitting of the calculated conformational ensembles to the experimental ensembles of the training system(s). The maximum-likelihood function is composed of logarithms of the Boltzmann probabilities of the experimental conformations, calculated with the current energy function. Because the theoretical distribution is given in the form of the simulated conformations only, the contributions from all of the simulated conformations, with Gaussian weights in the distances from a given experimental conformation, are added to give the contribution to the target function from this conformation. In contrast to earlier methods for force-field calibration, the approach does not suffer from the arbitrariness of dividing the decoy set into native-like and non-native structures; however, if such a division is made instead of using Gaussian weights, application of the maximum-likelihood method results in the well-known energy-gap maximization. The computational procedure consists of cycles of decoy generation and maximum-likelihood-function optimization, which are iterated until convergence is reached. The method was tested with Gaussian distributions and then applied to the physics-based coarse-grained UNRES force field for proteins. The NMR structures of the tryptophan cage, a small α-helical protein, determined at three temperatures (T = 280, 305, and 313 K) by Hałabis et al. (J. Phys. Chem. B 2012, 116, 6898-6907), were used. Multiplexed replica-exchange molecular dynamics was used to generate the decoys. The iterative procedure exhibited steady convergence. Three variants of optimization were tried: optimization of the energy-term weights alone and use of the experimental ensemble of the folded protein only at T = 280 K (run 1); optimization of the energy-term weights and use of experimental ensembles at all three temperatures (run 2); and optimization of the energy-term weights and the coefficients of the torsional and multibody energy terms and use of experimental ensembles at all three temperatures (run 3). The force fields were subsequently tested with a set of 14 α-helical and two α + β proteins. Optimization run 1 resulted in better agreement with the experimental ensemble at T = 280 K compared with optimization run 2 and in comparable performance on the test set, but poorer agreement of the calculated folding temperature with the experimental folding temperature. Optimization run 3 resulted in the best fit of the calculated ensembles to the experimental ones for the tryptophan cage but in much poorer performance on the test set, suggesting that use of a small α-helical protein for extensive force-field calibration resulted in overfitting of the data for this protein at the expense of transferability. The optimized force field resulting from run 2 was found to fold 13 of the 14 tested α-helical proteins and one small α + β protein with the correct topologies; the average structures of 10 of them were predicted with accuracies of about 5 Å C(α) root-mean-square deviation or better. Test simulations with an additional set of 12 α-helical proteins demonstrated that this force field performed better on α-helical proteins than the previous parametrizations of UNRES.
The proposed approach is applicable to any problem of maximum-likelihood parameter estimation when the contributions to the maximum-likelihood function cannot be evaluated at the experimental points and the dimension of the configurational space is too high to construct histograms of the experimental distributions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eifler, Tim; Krause, Elisabeth; Dodelson, Scott
2014-05-28
Systematic uncertainties that have been subdominant in past large-scale structure (LSS) surveys are likely to exceed statistical uncertainties of current and future LSS data sets, potentially limiting the extraction of cosmological information. Here we present a general framework (PCA marginalization) to consistently incorporate systematic effects into a likelihood analysis. This technique naturally accounts for degeneracies between nuisance parameters and can substantially reduce the dimension of the parameter space that needs to be sampled. As a practical application, we apply PCA marginalization to account for baryonic physics as an uncertainty in cosmic shear tomography. Specifically, we use CosmoLike to run simulated likelihood analyses on three independent sets of numerical simulations, each covering a wide range of baryonic scenarios differing in cooling, star formation, and feedback mechanisms. We simulate a Stage III (Dark Energy Survey) and Stage IV (Large Synoptic Survey Telescope/Euclid) survey and find a substantial bias in cosmological constraints if baryonic physics is not accounted for. We then show that PCA marginalization (employing at most 3 to 4 nuisance parameters) removes this bias. Our study demonstrates that it is possible to obtain robust, precise constraints on the dark energy equation of state even in the presence of large levels of systematic uncertainty in astrophysical processes. We conclude that the PCA marginalization technique is a powerful, general tool for addressing many of the challenges facing the precision cosmology program.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pražnikar, Jure; University of Primorska; Turk, Dušan, E-mail: dusan.turk@ijs.si
2014-12-01
The maximum-likelihood free-kick target, which calculates model error estimates from the work set and a randomly displaced model, proved superior in the accuracy and consistency of refinement of crystal structures compared with the maximum-likelihood cross-validation target, which calculates error estimates from the test set and the unperturbed model. The refinement of a molecular model is a computational procedure by which the atomic model is fitted to the diffraction data. The commonly used target in the refinement of macromolecular structures is the maximum-likelihood (ML) function, which relies on the assessment of model errors. The current ML functions rely on cross-validation. They utilize phase-error estimates that are calculated from a small fraction of diffraction data, called the test set, that are not used to fit the model. An approach has been developed that uses the work set to calculate the phase-error estimates in the ML refinement from simulating the model errors via the random displacement of atomic coordinates. It is called ML free-kick refinement as it uses the ML formulation of the target function and is based on the idea of freeing the model from the model bias imposed by the chemical energy restraints used in refinement. This approach for the calculation of error estimates is superior to the cross-validation approach: it reduces the phase error and increases the accuracy of molecular models, is more robust, provides clearer maps and may use a smaller portion of data for the test set for the calculation of R_free or may leave it out completely.
Marginal Maximum A Posteriori Item Parameter Estimation for the Generalized Graded Unfolding Model
ERIC Educational Resources Information Center
Roberts, James S.; Thompson, Vanessa M.
2011-01-01
A marginal maximum a posteriori (MMAP) procedure was implemented to estimate item parameters in the generalized graded unfolding model (GGUM). Estimates from the MMAP method were compared with those derived from marginal maximum likelihood (MML) and Markov chain Monte Carlo (MCMC) procedures in a recovery simulation that varied sample size,…
THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures.
Theobald, Douglas L; Wuttke, Deborah S
2006-09-01
THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. ANSI C source code and selected binaries for various computing platforms are available under the GNU open source license from http://monkshood.colorado.edu/theseus/ or http://www.theseus3d.org.
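As a point of comparison with the LS criterion that THESEUS improves upon, here is a weighted least-squares (Kabsch) superposition in Python; down-weighting variable atoms crudely mimics the effect of ML weighting, though the full ML treatment in THESEUS also models correlations among atoms. Function names and data are illustrative assumptions.

```python
import numpy as np

def weighted_superpose(P, Q, w):
    """Weighted least-squares superposition of P onto Q (Kabsch).

    P, Q : (n_atoms, 3) coordinate arrays; w : per-atom weights.
    Returns the rotation R and translation t with Q ~ P @ R.T + t.
    """
    w = w / w.sum()
    pc = (w[:, None] * P).sum(axis=0)          # weighted centroids
    qc = (w[:, None] * Q).sum(axis=0)
    H = (P - pc).T @ (w[:, None] * (Q - qc))   # weighted covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, qc - R @ pc

# Toy check: recover a known rotation and translation.
rng = np.random.default_rng(11)
P = rng.normal(size=(30, 3))
theta = 0.4
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta), np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
Q = P @ Rz.T + np.array([1.0, -2.0, 0.5])
R, t = weighted_superpose(P, Q, w=np.ones(30))
print(np.allclose(P @ R.T + t, Q, atol=1e-8))
```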
Kamneva, Olga K; Rosenberg, Noah A
2017-01-01
Hybridization events generate reticulate species relationships, giving rise to species networks rather than species trees. We report a comparative study of consensus, maximum parsimony, and maximum likelihood methods of species network reconstruction using gene trees simulated assuming a known species history. We evaluate the role of the divergence time between species involved in a hybridization event, the relative contributions of the hybridizing species, and the error in gene tree estimation. When gene tree discordance is mostly due to hybridization and not due to incomplete lineage sorting (ILS), most of the methods can detect even highly skewed hybridization events between highly divergent species. For recent divergences between hybridizing species, when the influence of ILS is sufficiently high, likelihood methods outperform parsimony and consensus methods, which erroneously identify extra hybridizations. The more sophisticated likelihood methods, however, are affected by gene tree errors to a greater extent than are consensus and parsimony. PMID:28469378
Free energy reconstruction from steered dynamics without post-processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Athenes, Manuel, E-mail: Manuel.Athenes@cea.f; Condensed Matter and Materials Division, Physics and Life Sciences Directorate, LLNL, Livermore, CA 94551; Marinica, Mihai-Cosmin
2010-09-20
Various methods achieving importance sampling in ensembles of nonequilibrium trajectories enable one to estimate free energy differences and, by maximum-likelihood post-processing, to reconstruct free energy landscapes. Here, based on Bayes' theorem, we propose a more direct method in which a posterior likelihood function is used both to construct the steered dynamics and to infer the contribution to equilibrium of all the sampled states. The method is implemented with two steering schedules. First, using non-autonomous steering, we calculate the migration barrier of the vacancy in α-Fe. Second, using an autonomous scheduling related to metadynamics and equivalent to temperature-accelerated molecular dynamics, we accurately reconstruct the two-dimensional free energy landscape of the 38-atom Lennard-Jones cluster as a function of an orientational bond-order parameter and energy, down to the solid-solid structural transition temperature of the cluster and without maximum-likelihood post-processing.
Master teachers' responses to twenty literacy and science/mathematics practices in deaf education.
Easterbrooks, Susan R; Stephenson, Brenda; Mertens, Donna
2006-01-01
Under a grant to improve outcomes for students who are deaf or hard of hearing awarded to the Association of College Educators--Deaf/Hard of Hearing, a team identified content that all teachers of students who are deaf and hard of hearing must understand and be able to teach. Also identified were 20 practices associated with content standards (10 each, literacy and science/mathematics). Thirty-seven master teachers identified by grant agents rated the practices on a Likert-type scale indicating the maximum benefit of each practice and maximum likelihood that they would use the practice, yielding a likelihood-impact analysis. The teachers showed strong agreement on the benefits and likelihood of use of the rated practices. Concerns about implementation of many of the practices related to time constraints and mixed-ability classrooms were themes of the reviews. Actions for teacher preparation programs were recommended.
Galactic cosmic ray spectral index: the case of Forbush decreases of March 2012
NASA Astrophysics Data System (ADS)
Livada, M.; Mavromichalaki, H.; Plainaki, C.
2018-01-01
During the burst of solar activity in March 2012, close to the maximum of solar cycle 24, a number of X-class and M-class flares and halo CMEs with velocities up to 2684 km/s were recorded. During a relatively short period (7-21 March 2012) two Forbush decreases were registered in the ground-level neutron monitor data. In this work, after a short description of the solar and geomagnetic background of these Forbush decreases, we deduce the cosmic ray density and anisotropy variations based on the daily cosmic ray data of the neutron monitor network (http://www.nmdb.eu; http://cosray.phys.uoa.gr). Applying two different coupling-function methods to our data, the spectral index of these Forbush decreases was calculated following the technique of Wawrzynczak and Alania (Adv. Space Res. 45:622-631, 2010). We point out that the estimated values of the spectral index γ are similar for both methods and track the evolution of each Forbush decrease. The study and calculation of the cosmic ray spectrum during such events are very important for Space Weather applications.
Cosmic Ray Transport in the Distant Heliosheath
NASA Technical Reports Server (NTRS)
Florinski, V.; Adams, James H.; Washimi, H.
2011-01-01
The character of energetic particle transport in the distant heliosheath, and especially in the vicinity of the heliopause, could be quite distinct from the other regions of the heliosphere. The magnetic field structure is dominated by a tightly wrapped oscillating heliospheric current sheet which is transported to higher latitudes by the nonradial heliosheath flows. Both Voyagers have entered, or are expected to enter, a region dominated by the sectored field formed during the preceding solar maximum. As the plasma flow slows down on approach to the heliopause, the distance between the folds of the current sheet decreases to the point where it becomes comparable to the cyclotron radius of an energetic ion, such as a galactic cosmic ray. Then, a charged particle can effectively drift across a stack of magnetic sectors with a speed comparable to the particle's velocity. Cosmic rays should also be able to efficiently diffuse across the mean magnetic field if the distance between sector boundaries varies. The region of the heliopause could thus be much more permeable to cosmic rays than was previously thought. This newly proposed transport mechanism could explain the very high intensities (approaching the model interstellar values) of galactic cosmic rays measured by Voyager 1 during 2010-2011.
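An order-of-magnitude estimate makes the claim that the fold spacing becomes "comparable to the cyclotron radius" concrete. The sketch below evaluates the relativistic proton gyroradius r_g = pc/(qBc); the ~0.1 nT heliosheath field strength is an assumed illustrative value, not a number taken from the abstract.

```python
import numpy as np

# Relativistic proton gyroradius r_g = p / (qB).
c = 2.998e8            # m/s
q = 1.602e-19          # C
E0 = 938.272e6 * q     # proton rest energy, J

def gyroradius(T_eV, B_T):
    E = T_eV * q + E0                      # total energy, J
    pc = np.sqrt(E**2 - E0**2)             # momentum times c, J
    return pc / (q * B_T * c)              # metres

B = 0.1e-9             # assumed heliosheath field of 0.1 nT (illustrative)
for T in (1e8, 1e9, 1e10):                 # 100 MeV, 1 GeV, 10 GeV
    print(f"T = {T:.0e} eV: r_g = {gyroradius(T, B) / 1.496e11:.2f} au")
```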
Predicting space climate change
NASA Astrophysics Data System (ADS)
Balcerak, Ernie
2011-10-01
Galactic cosmic rays and solar energetic particles can be hazardous to humans in space, damage spacecraft and satellites, pose threats to aircraft electronics, and expose aircrew and passengers to radiation. A new study shows that these threats are likely to increase in coming years as the Sun approaches the end of the period of high solar activity known as “grand solar maximum,” which has persisted through the past several decades. High solar activity can help protect the Earth by repelling incoming galactic cosmic rays. Understanding the past record can help scientists predict future conditions. Barnard et al. analyzed a 9300-year record of galactic cosmic ray and solar activity based on cosmogenic isotopes in ice cores as well as on neutron monitor data. They used this to predict future variations in galactic cosmic ray flux, near-Earth interplanetary magnetic field, sunspot number, and probability of large solar energetic particle events. The researchers found that the risk of space weather radiation events will likely increase noticeably over the next century compared with recent decades and that lower solar activity will lead to increased galactic cosmic ray levels. (Geophysical Research Letters, doi:10.1029/2011GL048489, 2011)
Guindon, Stéphane; Dufayard, Jean-François; Lefort, Vincent; Anisimova, Maria; Hordijk, Wim; Gascuel, Olivier
2010-05-01
PhyML is a phylogeny software based on the maximum-likelihood principle. Early PhyML versions used a fast algorithm performing nearest neighbor interchanges to improve a reasonable starting tree topology. Since the original publication (Guindon S., Gascuel O. 2003. A simple, fast and accurate algorithm to estimate large phylogenies by maximum likelihood. Syst. Biol. 52:696-704), PhyML has been widely used (>2500 citations in ISI Web of Science) because of its simplicity and a fair compromise between accuracy and speed. In the meantime, research around PhyML has continued, and this article describes the new algorithms and methods implemented in the program. First, we introduce a new algorithm to search the tree space with user-defined intensity using subtree pruning and regrafting topological moves. The parsimony criterion is used here to filter out the least promising topology modifications with respect to the likelihood function. The analysis of a large collection of real nucleotide and amino acid data sets of various sizes demonstrates the good performance of this method. Second, we describe a new test to assess the support of the data for internal branches of a phylogeny. This approach extends the recently proposed approximate likelihood-ratio test and relies on a nonparametric, Shimodaira-Hasegawa-like procedure. A detailed analysis of real alignments sheds light on the links between this new approach and the more classical nonparametric bootstrap method. Overall, our tests show that the last version (3.0) of PhyML is fast, accurate, stable, and ready to use. A Web server and binary files are available from http://www.atgc-montpellier.fr/phyml/.
Maximum-likelihood estimation of parameterized wavefronts from multifocal data
Sakamoto, Julia A.; Barrett, Harrison H.
2012-01-01
A method for determining the pupil phase distribution of an optical system is demonstrated. Coefficients in a wavefront expansion were estimated using likelihood methods, where the data consisted of multiple irradiance patterns near focus. Proof-of-principle results were obtained in both simulation and experiment. Large-aberration wavefronts were handled in the numerical study. Experimentally, we discuss the handling of nuisance parameters. Fisher information matrices, Cramér-Rao bounds, and likelihood surfaces are examined. ML estimates were obtained by simulated annealing to deal with numerous local extrema in the likelihood function. Rapid processing techniques were employed to reduce the computational time. PMID:22772282
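The estimation strategy, writing down a likelihood for the observed irradiance patterns and letting simulated annealing cope with local extrema, can be sketched on a drastically simplified forward model. Everything below (the polynomial basis standing in for Zernike modes, the cosine-squared intensity model, the two defocus planes, the noise level) is an illustrative assumption rather than the paper's physical-optics model; note how the second focal plane breaks the sign degeneracy of the single-plane problem.

```python
import numpy as np
from scipy.optimize import dual_annealing

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 200)

def basis(x):
    # Low-order polynomial aberration basis (a stand-in for Zernike modes).
    return np.vstack([x, x**2, x**3])

def irradiance(coeffs, x, defocus):
    # Toy intensity model: squared cosine of the wavefront plus a defocus term.
    phase = coeffs @ basis(x) + defocus * x**2
    return np.cos(phase) ** 2

# Simulate multifocal data: two irradiance patterns at different defocus.
a_true = np.array([1.5, -2.0, 0.7])
data = [irradiance(a_true, x, dz) + 0.02 * rng.standard_normal(x.size)
        for dz in (0.0, 3.0)]

def neg_log_like(coeffs):
    # Gaussian noise => ML reduces to least squares over both focal planes.
    return sum(np.sum((d - irradiance(coeffs, x, dz)) ** 2)
               for d, dz in zip(data, (0.0, 3.0)))

res = dual_annealing(neg_log_like, bounds=[(-4, 4)] * 3, seed=2, maxiter=300)
print("true:", a_true, "ML estimate:", np.round(res.x, 3))
```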
Cosmic-Ray Extremely Distributed Observatory: a global cosmic ray detection framework
NASA Astrophysics Data System (ADS)
Sushchov, O.; Homola, P.; Dhital, N.; Bratek, Ł.; Poznański, P.; Wibig, T.; Zamora-Saa, J.; Almeida Cheminant, K.; Alvarez Castillo, D.; Góra, D.; Jagoda, P.; Jałocha, J.; Jarvis, J. F.; Kasztelan, M.; Kopański, K.; Krupiński, M.; Michałek, M.; Nazari, V.; Smelcerz, K.; Smolek, K.; Stasielak, J.; Sułek, M.
2017-12-01
The main objective of the Cosmic-Ray Extremely Distributed Observatory (CREDO) is the detection and analysis of extended cosmic ray phenomena, so-called super-preshowers (SPS), using existing as well as new infrastructure (cosmic-ray observatories, educational detectors, single detectors etc.). The search for ensembles of cosmic ray events initiated by SPS is as yet untouched ground, in contrast to the current state-of-the-art analysis, which is focused on the detection of single cosmic ray events. Theoretical explanation of SPS could be given either within classical (e.g., photon-photon interaction) or exotic (e.g., Super Heavy Dark Matter decay or annihilation) scenarios, thus detection of SPS would provide a better understanding of particle physics, high energy astrophysics and cosmology. The ensembles of cosmic rays can be classified based on the spatial and temporal extent of the particles constituting the ensemble. Some classes of SPS are predicted to have a huge spatial distribution, a unique signature detectable only with a facility of global size. Since the development and commissioning of a completely new facility with such requirements is economically unwarranted and time-consuming, the global analysis goals are achievable when all types of existing detectors are merged into a worldwide network. The idea to use the instruments in operation is based on a novel trigger algorithm: in parallel to looking for neighbour surface detectors receiving the signal simultaneously, one should also look for spatially isolated stations clustered in a small time window. The CREDO strategy is also aimed at the active engagement of a large number of participants, who will contribute to the project by using common electronic devices (e.g., smartphones) capable of detecting cosmic rays. This will help not only in expanding the geographical spread of CREDO, but also in managing the large manpower necessary for a more efficient crowd-sourced pattern recognition scheme to identify and classify SPS. A worldwide network of cosmic-ray detectors could not only become a unique tool to study fundamental physics, it would also provide a number of other opportunities, including space-weather or geophysics studies. Among the latter one has to list the potential to predict earthquakes by monitoring the rate of low energy cosmic-ray events. The diversity of goals motivates us to advertise this concept across the astroparticle physics community.
CENTAURUS A AS A POINT SOURCE OF ULTRAHIGH ENERGY COSMIC RAYS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Hang Bae, E-mail: hbkim@hanyang.ac.kr
We probe the possibility that Centaurus A (Cen A) is a point source of ultrahigh energy cosmic rays (UHECRs) observed by the Pierre Auger Observatory (PAO), through a statistical analysis of the arrival direction distribution. For this purpose, we set up the Cen A dominance model for the UHECR sources, in which Cen A contributes the fraction f_C of all UHECRs with energy above 5.5×10^19 eV and the isotropic background contributes the remaining 1 - f_C. The effect of the intergalactic magnetic fields on the bending of the trajectories of Cen A originated UHECRs is parameterized by the Gaussian smearing angle θ_s. For the statistical analysis, we adopted the correlational angular distance distribution (CADD) for the reduction of the arrival direction distribution and the Kuiper test to compare the observed and the expected CADDs. We identify the excess of UHECRs in the Cen A direction and fit the CADD of the observed PAO data by varying the two parameters f_C and θ_s of the Cen A dominance model. The best-fit parameter values are f_C ≈ 0.1 (the corresponding Cen A fraction observed at PAO is f_C,PAO ≈ 0.15, that is, about 10 out of 69 UHECRs) and θ_s = 5° with the maximum likelihood L_max = 0.29. This result supports the existence of a point source smeared by the intergalactic magnetic fields in the direction of Cen A. If Cen A is actually the source responsible for the observed excess of UHECRs, the rms deflection angle of the excess UHECRs implies an intergalactic magnetic field of order 10 nG in the vicinity of Cen A.
NASA Astrophysics Data System (ADS)
Clarkson, A.; Hamilton, D. J.; Hoek, M.; Ireland, D. G.; Johnstone, J. R.; Kaiser, R.; Keri, T.; Lumsden, S.; Mahon, D. F.; McKinnon, B.; Murray, M.; Nutbeam-Tuffs, S.; Shearer, C.; Staines, C.; Yang, G.; Zimmerman, C.
2014-05-01
Tomographic imaging techniques using the Coulomb scattering of cosmic-ray muons are increasingly being exploited for the non-destructive assay of shielded containers in a wide range of applications. One such application is the characterisation of legacy nuclear waste materials stored within industrial containers. The design, assembly and performance of a prototype muon tomography system developed for this purpose are detailed in this work. This muon tracker comprises four detection modules, each containing orthogonal layers of Saint-Gobain BCF-10 2 mm-pitch plastic scintillating fibres. Identification of the two struck fibres per module allows the reconstruction of a space point, and subsequently, the incoming and Coulomb-scattered muon trajectories. These allow the container content, with respect to the atomic number Z of the scattering material, to be determined through reconstruction of the scattering location and magnitude. On each detection layer, the light emitted by the fibre is detected by a single Hamamatsu H8500 MAPMT with two fibres coupled to each pixel via dedicated pairing schemes developed to ensure the identification of the struck fibre. The PMT signals are read out to standard charge-to-digital converters and interpreted via custom data acquisition and analysis software. The design and assembly of the detector system are detailed and presented alongside results from performance studies with data collected after construction. These results reveal high stability during extended collection periods with detection efficiencies in the region of 80% per layer. Minor misalignments of millimetre order have been identified and corrected in software. A first image reconstructed from a test configuration of materials has been obtained using software based on the Maximum Likelihood Expectation Maximisation algorithm. The results highlight the high spatial resolution provided by the detector system. Clear discrimination between the low, medium and high-Z materials assayed is also observed.
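The Maximum Likelihood Expectation Maximisation algorithm mentioned for image reconstruction has a compact multiplicative update: each iteration forward-projects the current image, compares it with the measured counts, and back-projects the ratio. A minimal sketch with an assumed random system matrix (standing in for the real muon-scattering geometry) is given below.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy MLEM reconstruction: y ~ Poisson(A @ lam), recover lam. A is an
# assumed random system matrix; the update rule itself is the standard
# Maximum Likelihood Expectation Maximisation iteration.
n_pix, n_meas = 16, 64
A = rng.random((n_meas, n_pix))
lam_true = rng.uniform(0.5, 5.0, n_pix)
y = rng.poisson(A @ lam_true)

lam = np.ones(n_pix)                 # flat initial image
sens = A.sum(axis=0)                 # sensitivity (column sums of A)
for _ in range(200):
    proj = A @ lam                   # forward projection
    lam *= (A.T @ (y / np.maximum(proj, 1e-12))) / sens

print("relative error:", np.linalg.norm(lam - lam_true) / np.linalg.norm(lam_true))
```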
NASA Astrophysics Data System (ADS)
Doran, Rosa
2015-08-01
In 2015 we celebrate the International Year of Light, a great opportunity to promote awareness of the importance of light coming from the Cosmos and the messages it brings to mankind. In parallel, it is a unique moment to draw the attention of stakeholders to the dangers of light pollution and its impact on our lives and our pursuit of knowledge. In this presentation I present one of the cornerstones of IYL2015, a partnership between the Galileo Teacher Training Program, Universe Awareness and Globe at Night: the Cosmic Light EDU kit. The aim of this project is to assemble a core set of tools and resources representing our basic pillars of knowledge about the Universe and simple means to preserve our night sky.
Johnson, Rebecca N; Agapow, Paul-Michael; Crozier, Ross H
2003-11-01
The ant subfamily Formicinae is a large assemblage (2458 species (J. Nat. Hist. 29 (1995) 1037), including species that weave leaf nests together with larval silk and in which the metapleural gland-the ancestrally defining ant character-has been secondarily lost. We used sequences from two mitochondrial genes (cytochrome b and cytochrome oxidase 2) from 18 formicine and 4 outgroup taxa to derive a robust phylogeny, employing a search for tree islands using 10000 randomly constructed trees as starting points and deriving a maximum likelihood consensus tree from the ML tree and those not significantly different from it. Non-parametric bootstrapping showed that the ML consensus tree fit the data significantly better than three scenarios based on morphology, with that of Bolton (Identification Guide to the Ant Genera of the World, Harvard University Press, Cambridge, MA) being the best among these alternative trees. Trait mapping showed that weaving had arisen at least four times and possibly been lost once. A maximum likelihood analysis showed that loss of the metapleural gland is significantly associated with the weaver life-pattern. The graph of the frequencies with which trees were discovered versus their likelihood indicates that trees with high likelihoods have much larger basins of attraction than those with lower likelihoods. While this result indicates that single searches are more likely to find high- than low-likelihood tree islands, it also indicates that searching only for the single best tree may lose important information.
Occupancy Modeling Species-Environment Relationships with Non-ignorable Survey Designs.
Irvine, Kathryn M; Rodhouse, Thomas J; Wright, Wilson J; Olsen, Anthony R
2018-05-26
Statistical models supporting inferences about species occurrence patterns in relation to environmental gradients are fundamental to ecology and conservation biology. A common implicit assumption is that the sampling design is ignorable and does not need to be formally accounted for in analyses. The analyst assumes data are representative of the desired population, and statistical modeling proceeds. However, if datasets from probability and non-probability surveys are combined or unequal selection probabilities are used, the design may be non-ignorable. We outline the use of pseudo-maximum likelihood estimation for site-occupancy models to account for such non-ignorable survey designs. This estimation method accounts for the survey design by properly weighting the pseudo-likelihood equation. In our empirical example, legacy and newer randomly selected locations were surveyed for bats to bridge a historic statewide effort with an ongoing nationwide program. We provide a worked example using bat acoustic detection/non-detection data and show how analysts can diagnose whether their design is ignorable. Using simulations, we assessed whether our approach is viable for modeling datasets composed of sites contributed outside of a probability design. Pseudo-maximum likelihood estimates differed from the usual maximum likelihood occupancy estimates for some bat species. Using simulations, we show the maximum likelihood estimator of species-environment relationships with non-ignorable sampling designs was biased, whereas the pseudo-likelihood estimator was design-unbiased. However, in our simulation study the designs composed of a large proportion of legacy or non-probability sites resulted in estimation issues for standard errors. These issues were likely a result of highly variable weights confounded by small sample sizes (5% or 10% sampling intensity and 4 revisits). Aggregating datasets from multiple sources logically supports larger sample sizes and potentially increases spatial extents for statistical inferences. Our results suggest that ignoring the mechanism by which locations were selected for data collection (e.g., the sampling design) could result in erroneous model-based conclusions. Therefore, in order to ensure robust and defensible recommendations for evidence-based conservation decision-making, the survey design information, in addition to the data themselves, must be available for analysts. Details for constructing the weights used in estimation and code for implementation are provided.
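The core of the pseudo-likelihood idea is simply that each site's log-likelihood contribution is multiplied by its design weight (an inverse selection probability). A minimal sketch for a single-season occupancy model is given below; the simulated design weights and the logistic parameterisation are illustrative assumptions, not the paper's analysis.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(4)

# Single-season occupancy model: psi = occupancy probability, p = per-visit
# detection probability, J visits per site. Binomial coefficients are
# omitted from the likelihood since they do not affect the maximiser.
n_sites, J = 300, 4
psi_true, p_true = 0.6, 0.3
z = rng.random(n_sites) < psi_true                      # latent occupancy
y = np.where(z, rng.binomial(J, p_true, n_sites), 0)    # detections per site

# Assumed design weights (inverse selection probabilities); in a real
# analysis these come from the survey design. Rescaled to sum to n_sites.
w = rng.uniform(0.5, 2.0, n_sites)
w *= n_sites / w.sum()

def neg_weighted_loglik(theta):
    psi, p = expit(theta)                # keep both probabilities in (0, 1)
    site_lik = np.where(y > 0,
                        psi * p**y * (1 - p)**(J - y),
                        psi * (1 - p)**J + (1 - psi))
    return -np.sum(w * np.log(site_lik))

res = minimize(neg_weighted_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
print("weighted pseudo-ML (psi, p):", np.round(expit(res.x), 3))
```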
Black Hole Spin Evolution and Cosmic Censorship
NASA Astrophysics Data System (ADS)
Chen, W.; Cui, W.; Zhang, S. N.
1999-04-01
We show that the accretion process in X-ray binaries is not likely to spin up or spin down the accreting black holes, due to the short lifetime of the system or the lack of sufficient mass supply from the donor star. Therefore, the black hole mass and spin distribution we observe today also reflects that at birth and places interesting constraints on supernova explosion models across the mass spectrum. On the other hand, it has long been a puzzle that accretion from a Keplerian accretion disk with a large enough mass supply might spin up the black hole to extremity, thus violating Penrose's cosmic censorship conjecture and the third law of black hole dynamics. This prompted Thorne to propose an astrophysical solution which caps the maximum attainable black hole spin at a value slightly below unity. We show that the black hole will never reach the extreme Kerr state under any circumstances by accreting Keplerian angular momentum from the last stable orbit, and that cosmic censorship will always be upheld. The maximum black hole spin which can be reached for a fixed, astrophysically meaningful accretion rate is, however, very close to unity; thus the peak spin rate of black holes one can hope to observe in Nature is still 0.998, the Thorne limit.
DSN telemetry system performance using a maximum likelihood convolutional decoder
NASA Technical Reports Server (NTRS)
Benjauthrit, B.; Kemp, R. P.
1977-01-01
Results are described of telemetry system performance testing using DSN equipment and a Maximum Likelihood Convolutional Decoder (MCD) for code rates 1/2 and 1/3, constraint length 7, and special test software. The test results confirm the superiority of the rate 1/3 code over the rate 1/2 code. The overall system performance losses determined at the output of the Symbol Synchronizer Assembly are less than 0.5 dB for both code rates. Comparison of the performance is also made with existing mathematical models. Error statistics of the decoded data are examined. The MCD operational threshold is found to be about 1.96 dB.
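Maximum-likelihood convolutional decoding is conventionally realised with the Viterbi algorithm, which tracks the best-metric path into each encoder state. The sketch below uses a small constraint-length-3, rate-1/2 code (generators 7 and 5 octal) rather than the K=7 codes tested with the DSN MCD, but the decoding principle is the same; with hard decisions on a binary symmetric channel the Hamming metric yields the exact ML codeword.

```python
# Viterbi (maximum-likelihood) decoding of a rate-1/2, K=3 convolutional code.
G = [0b111, 0b101]          # generator polynomials (7, 5 octal)
K = 3                        # constraint length
n_states = 1 << (K - 1)

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << (K - 1)) | state
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1
    return out

def viterbi(received):
    # Hamming-metric Viterbi: exact ML for a binary symmetric channel.
    INF = 10**9
    metric = [0] + [INF] * (n_states - 1)
    paths = [[] for _ in range(n_states)]
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << (K - 1)) | s
                expect = [bin(reg & g).count("1") & 1 for g in G]
                m = metric[s] + sum(x != y for x, y in zip(r, expect))
                ns = reg >> 1
                if m < new_metric[ns]:
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    best = min(range(n_states), key=lambda s: metric[s])
    return paths[best]

msg = [1, 0, 1, 1, 0, 0, 1, 0, 0]      # message, with two flushing zeros
tx = encode(msg)
tx[3] ^= 1                              # inject one channel bit error
print("decoded == sent:", viterbi(tx) == msg)
```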
Pascazio, Vito; Schirinzi, Gilda
2002-01-01
In this paper, a technique that is able to reconstruct highly sloped and discontinuous terrain height profiles, starting from multifrequency wrapped phase acquired by interferometric synthetic aperture radar (SAR) systems, is presented. We propose an innovative unwrapping method, based on a maximum likelihood estimation technique, which uses multifrequency independent phase data, obtained by filtering the interferometric SAR raw data pair through nonoverlapping band-pass filters, and approximating the unknown surface by means of local planes. Since the method does not exploit the phase gradient, it assures the uniqueness of the solution, even in the case of highly sloped or piecewise continuous elevation patterns with strong discontinuities.
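The benefit of multifrequency phases can be seen in a stripped-down scalar version of the problem: each frequency constrains the unknown height only modulo its own ambiguity interval, but jointly the wrapped observations single out one height over a wide range. The sketch below does a grid-search ML estimate under von Mises noise; the phase-to-height constants kappa and the noise level are illustrative assumptions, and the real method additionally models local planes rather than a single scalar height.

```python
import numpy as np

rng = np.random.default_rng(5)

# ML height from wrapped phases at three carrier frequencies. Each frequency
# maps height h to phase kappa_k * h, observed modulo 2*pi. Under von Mises
# (circular) noise the ML estimate maximises sum_k cos(phi_k - kappa_k * h),
# which is single-peaked over a wide height range when the kappa_k are
# incommensurate.
kappa = np.array([0.7, 1.0, 1.37])       # rad per metre, one per frequency
h_true = 37.2                             # metres
phi = np.angle(np.exp(1j * (kappa * h_true + 0.1 * rng.standard_normal(3))))

h_grid = np.linspace(0.0, 100.0, 100001)
score = np.cos(phi[:, None] - np.outer(kappa, h_grid)).sum(axis=0)
h_ml = h_grid[np.argmax(score)]
print(f"true h = {h_true} m, ML estimate = {h_ml:.3f} m")
```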
Soft decoding a self-dual (48, 24; 12) code
NASA Technical Reports Server (NTRS)
Solomon, G.
1993-01-01
A self-dual (48,24;12) code comes from restricting a binary cyclic (63,18;36) code to a 6 x 7 matrix, adding an eighth all-zero column, and then adjoining six dimensions to this extended 6 x 8 matrix. These six dimensions are generated by linear combinations of row permutations of a 6 x 8 matrix of weight 12, whose sums of rows and columns add to one. A soft decoding using these properties and approximating maximum likelihood is presented here. This is preliminary to a possible soft decoding of the box (72,36;15) code that promises a 7.7-dB theoretical coding gain under maximum likelihood.
Effects of time-shifted data on flight determined stability and control derivatives
NASA Technical Reports Server (NTRS)
Steers, S. T.; Iliff, K. W.
1975-01-01
Flight data were shifted in time by various increments to assess the effects of time shifts on estimates of stability and control derivatives produced by a maximum likelihood estimation method. Derivatives could be extracted from flight data with the maximum likelihood estimation method even if there was a considerable time shift in the data. Time shifts degraded the estimates of the derivatives, but the degradation was in a consistent rather than a random pattern. Time shifts in the control variables caused the most degradation, and the lateral-directional rotary derivatives were affected the most by time shifts in any variable.
Minimum distance classification in remote sensing
NASA Technical Reports Server (NTRS)
Wacker, A. G.; Landgrebe, D. A.
1972-01-01
The utilization of minimum distance classification methods in remote sensing problems, such as crop species identification, is considered. Literature concerning both minimum distance classification problems and distance measures is reviewed. Experimental results are presented for several examples. The objective of these examples is to: (a) compare the sample classification accuracy of a minimum distance classifier with the vector classification accuracy of a maximum likelihood classifier, and (b) compare the accuracy of a parametric minimum distance classifier with that of a nonparametric one. Results show the minimum distance classifier performance is 5% to 10% better than that of the maximum likelihood classifier. The nonparametric classifier is only slightly better than the parametric version.
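The two classifiers compared in the study differ only in their decision rule: nearest class mean under a Euclidean metric versus the largest Gaussian log-likelihood with per-class covariances. A self-contained comparison on synthetic two-band "pixels" (illustrative class means and covariances, not the paper's data) is sketched below.

```python
import numpy as np

rng = np.random.default_rng(6)

# Illustrative class means and a shared covariance standing in for crop spectra.
means = [np.array([2.0, 1.0]), np.array([4.0, 3.5]), np.array([1.0, 4.0])]
covs  = [np.array([[0.8, 0.3], [0.3, 0.5]])] * 3

def sample(n):
    X = np.vstack([rng.multivariate_normal(m, c, n) for m, c in zip(means, covs)])
    y = np.repeat(np.arange(3), n)
    return X, y

Xtr, ytr = sample(200)
Xte, yte = sample(200)

mu  = np.stack([Xtr[ytr == k].mean(axis=0) for k in range(3)])
cov = np.stack([np.cov(Xtr[ytr == k].T) for k in range(3)])

# Minimum distance: assign to the nearest class mean (Euclidean).
d2 = ((Xte[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)
pred_md = d2.argmin(axis=1)

# Gaussian ML: maximise log N(x; mu_k, cov_k) over classes.
ll = np.empty_like(d2)
for k in range(3):
    diff = Xte - mu[k]
    inv = np.linalg.inv(cov[k])
    ll[:, k] = -0.5 * np.einsum("ij,jk,ik->i", diff, inv, diff) \
               - 0.5 * np.log(np.linalg.det(cov[k]))
pred_ml = ll.argmax(axis=1)

print("min-distance accuracy:", (pred_md == yte).mean())
print("Gaussian ML accuracy: ", (pred_ml == yte).mean())
```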
Maximum likelihood conjoint measurement of lightness and chroma.
Rogers, Marie; Knoblauch, Kenneth; Franklin, Anna
2016-03-01
Color varies along dimensions of lightness, hue, and chroma. We used maximum likelihood conjoint measurement to investigate how lightness and chroma influence color judgments. Observers judged lightness and chroma of stimuli that varied in both dimensions in a paired-comparison task. We modeled how changes in one dimension influenced judgment of the other. An additive model best fit the data in all conditions except for judgment of red chroma where there was a small but significant interaction. Lightness negatively contributed to perception of chroma for red, blue, and green hues but not for yellow. The method permits quantification of lightness and chroma contributions to color appearance.
Mendoza, Maria C.B.; Burns, Trudy L.; Jones, Michael P.
2009-01-01
Objectives Case-deletion diagnostic methods are tools that allow identification of influential observations that may affect parameter estimates and model fitting conclusions. The goal of this paper was to develop two case-deletion diagnostics, the exact case deletion (ECD) and the empirical influence function (EIF), for detecting outliers that can affect results of sib-pair maximum likelihood quantitative trait locus (QTL) linkage analysis. Methods Subroutines to compute the ECD and EIF were incorporated into the maximum likelihood QTL variance estimation components of the linkage analysis program MAPMAKER/SIBS. Performance of the diagnostics was compared in simulation studies that evaluated the proportion of outliers correctly identified (sensitivity), and the proportion of non-outliers correctly identified (specificity). Results Simulations involving nuclear family data sets with one outlier showed EIF sensitivities approximated ECD sensitivities well for outlier-affected parameters. Sensitivities were high, indicating the outlier was identified a high proportion of the time. Simulations also showed the enormous computational time advantage of the EIF. Diagnostics applied to body mass index in nuclear families detected observations influential on the lod score and model parameter estimates. Conclusions The EIF is a practical diagnostic tool that has the advantages of high sensitivity and quick computation. PMID:19172086
Williams, M S; Ebel, E D; Cao, Y
2013-01-01
The fitting of statistical distributions to microbial sampling data is a common application in quantitative microbiology and risk assessment. An underlying assumption of most fitting techniques is that data are collected with simple random sampling, which is often not the case. This study develops a weighted maximum likelihood estimation framework that is appropriate for microbiological samples that are collected with unequal probabilities of selection. Two examples, based on the collection of food samples during processing, are provided to demonstrate the method and highlight the magnitude of biases in the maximum likelihood estimator when data are inappropriately treated as a simple random sample. Failure to properly weight samples to account for how data are collected can introduce substantial biases into inferences drawn from the data. The proposed methodology will reduce or eliminate an important source of bias in inferences drawn from the analysis of microbial data. This will also make comparisons between studies and the combination of results from different studies more reliable, which is important for risk assessment applications.
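The weighting idea can be demonstrated in a few lines: under size-biased selection, an unweighted fit is pulled toward the over-sampled large values, while weighting each observation by its inverse selection probability recovers the population parameters. The lognormal concentration model and the selection mechanism below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import lognorm

rng = np.random.default_rng(7)

# Simulated population of concentrations; samples with high concentration
# are made more likely to be selected (an assumed selection mechanism).
pop = rng.lognormal(mean=1.0, sigma=0.8, size=20000)
sel_prob = np.clip(pop / pop.max(), 0.02, 1.0)      # size-biased selection
chosen = rng.random(pop.size) < 0.2 * sel_prob
x = pop[chosen]
w = 1.0 / sel_prob[chosen]
w *= x.size / w.sum()            # rescale weights to sum to the sample size

def neg_loglik(theta, weights):
    mu, log_sigma = theta
    return -np.sum(weights * lognorm.logpdf(x, s=np.exp(log_sigma),
                                            scale=np.exp(mu)))

unweighted = minimize(neg_loglik, [0.0, 0.0], args=(np.ones_like(x),)).x
weighted   = minimize(neg_loglik, [0.0, 0.0], args=(w,)).x
print("true (mu, sigma):      (1.0, 0.8)")
print("unweighted ML:", np.round([unweighted[0], np.exp(unweighted[1])], 3))
print("weighted ML:  ", np.round([weighted[0], np.exp(weighted[1])], 3))
```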
Normal Theory Two-Stage ML Estimator When Data Are Missing at the Item Level
Savalei, Victoria; Rhemtulla, Mijke
2017-01-01
In many modeling contexts, the variables in the model are linear composites of the raw items measured for each participant; for instance, regression and path analysis models rely on scale scores, and structural equation models often use parcels as indicators of latent constructs. Currently, no analytic estimation method exists to appropriately handle missing data at the item level. Item-level multiple imputation (MI), however, can handle such missing data straightforwardly. In this article, we develop an analytic approach for dealing with item-level missing data—that is, one that obtains a unique set of parameter estimates directly from the incomplete data set and does not require imputations. The proposed approach is a variant of the two-stage maximum likelihood (TSML) methodology, and it is the analytic equivalent of item-level MI. We compare the new TSML approach to three existing alternatives for handling item-level missing data: scale-level full information maximum likelihood, available-case maximum likelihood, and item-level MI. We find that the TSML approach is the best analytic approach, and its performance is similar to item-level MI. We recommend its implementation in popular software and its further study. PMID:29276371
NASA Technical Reports Server (NTRS)
Zhuang, Xin
1990-01-01
LANDSAT Thematic Mapper (TM) data for March 23, 1987, with accompanying ground truth data for the study area in Miami County, IN, were used to determine crop residue type and class. Principal components and spectral ratioing transformations were applied to the LANDSAT TM data. A geographic information system (GIS) layer of land ownership was added to each original image as an eighth band of data in an attempt to improve classification. Maximum likelihood, minimum distance, and neural network classifiers were used to classify the original, transformed, and GIS-enhanced remotely sensed data. Crop residues could be separated from one another and from bare soil and other biomass. Two types of crop residue and four classes were identified from each LANDSAT TM image. The maximum likelihood classifier performed the best classification for each original image without need of any transformation. The neural network classifier was able to improve the classification by incorporating the GIS layer of land ownership as an eighth band of data. The maximum likelihood classifier was unable to consider this eighth band, and thus its results could not be improved.
Maximum-Entropy Inference with a Programmable Annealer
Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A.
2016-01-01
Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this maximises the likelihood that the solution is correct. The maximum entropy solution on the other hand takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition. PMID:26936311
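The distinction the paper draws between maximum-likelihood and maximum-entropy decoding can be reproduced exactly on a small instance by brute force: ML returns the single ground state of the decoding cost function, while the maximum-entropy decoder thresholds each bit's Boltzmann-weighted marginal. In the sketch below the couplings, field strength and temperature are illustrative; on any one realisation either decoder may win, but averaged over noise the marginal (finite-temperature) decoder's bit-error rate is at least as good at the matched temperature.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(8)

# Decoding a small random-field Ising chain by brute force.
n = 10
s = rng.choice([-1, 1], n)                 # transmitted bits
h = s + 0.9 * rng.standard_normal(n)       # noisy observations (local fields)
J = 0.4                                     # assumed ferromagnetic coupling

states = np.array(list(product([-1, 1], repeat=n)))
energy = -(states * h).sum(axis=1) \
         - J * (states[:, :-1] * states[:, 1:]).sum(axis=1)

# Maximum likelihood: the single minimum-energy configuration.
ml = states[np.argmin(energy)]

# Maximum entropy: Boltzmann-average each bit, then threshold.
beta = 1.0
p = np.exp(-beta * (energy - energy.min()))
p /= p.sum()
marginal = (p[:, None] * states).sum(axis=0)
mpm = np.sign(marginal)

print("ML bit errors: ", int((ml != s).sum()))
print("MPM bit errors:", int((mpm != s).sum()))
```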
NASA Astrophysics Data System (ADS)
de Mendonça, R. R. S.; Braga, C. R.; Echer, E.; Dal Lago, A.; Munakata, K.; Kuwabara, T.; Kozai, M.; Kato, C.; Rockenbach, M.; Schuch, N. J.; Jassar, H. K. Al; Sharma, M. M.; Tokumaru, M.; Duldig, M. L.; Humble, J. E.; Evenson, P.; Sabbah, I.
2016-10-01
The analysis of cosmic ray intensity variation seen by muon detectors at Earth's surface can help us to understand astrophysical, solar, interplanetary and geomagnetic phenomena. However, before comparing cosmic ray intensity variations with extraterrestrial phenomena, it is necessary to take into account atmospheric effects such as the temperature effect. In this work, we analyzed this effect on the Global Muon Detector Network (GMDN), which is composed of four ground-based detectors, two in the northern hemisphere and two in the southern hemisphere. In general, we found a higher temperature influence on detectors located in the northern hemisphere. Besides that, we noticed that the seasonal temperature variation observed at the ground and at the altitude of maximum muon production are in antiphase for all GMDN locations (low-latitude regions). In this way, contrary to what is expected in high-latitude regions, the ground muon intensity decrease occurring during summertime would be related to both parts of the temperature effect (the negative and the positive). We analyzed several methods to describe the temperature effect on cosmic ray intensity. We found that the mass weighted method is the one that best reproduces the seasonal cosmic ray variation observed by the GMDN detectors and allows the highest correlation with long-term variation of the cosmic ray intensity seen by neutron monitors.
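A minimal version of the mass weighted method is easy to state: form an effective atmospheric temperature as a pressure-weighted mean over isobaric levels, then remove a linear temperature effect from the counting rate. The sketch below is illustrative only; the levels, temperature profile and coefficient alpha_T are assumed numbers, not GMDN values.

```python
import numpy as np

# Mass-weighted effective temperature and a first-order removal of the
# linear temperature effect from a muon counting rate.
p_levels = np.array([30., 50., 100., 200., 300., 500., 700., 850., 1000.])  # hPa
dp = np.gradient(p_levels)                 # layer mass is proportional to dp
w = dp / dp.sum()                          # mass weights, summing to 1

T_profile = np.array([225., 220., 218., 222., 230., 252., 265., 275., 283.])  # K
T_eff = np.sum(w * T_profile)

alpha_T = -0.0025      # assumed temperature coefficient, per kelvin
T_ref = 270.0          # reference effective temperature, K
I_obs = 98.7           # observed muon rate, arbitrary units
I_corr = I_obs * (1.0 - alpha_T * (T_eff - T_ref))

print(f"T_eff = {T_eff:.1f} K, corrected rate = {I_corr:.2f}")
```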
EXPOSE-E: an ESA astrobiology mission 1.5 years in space.
Rabbow, Elke; Rettberg, Petra; Barczyk, Simon; Bohmeier, Maria; Parpart, André; Panitz, Corinna; Horneck, Gerda; von Heise-Rotenburg, Ralf; Hoppenbrouwers, Tom; Willnecker, Rainer; Baglioni, Pietro; Demets, René; Dettmann, Jan; Reitz, Guenther
2012-05-01
The multi-user facility EXPOSE-E was designed by the European Space Agency to enable astrobiology research in space (low-Earth orbit). On 7 February 2008, EXPOSE-E was carried to the International Space Station (ISS) on the European Technology Exposure Facility (EuTEF) platform in the cargo bay of Space Shuttle STS-122 Atlantis. The facility was installed at the starboard cone of the Columbus module by extravehicular activity, where it remained in space for 1.5 years. EXPOSE-E was returned to Earth with STS-128 Discovery on 12 September 2009 for subsequent sample analysis. EXPOSE-E provided accommodation in three exposure trays for a variety of astrobiological test samples that were exposed to selected space conditions: either to space vacuum, solar electromagnetic radiation at >110 nm and cosmic radiation (trays 1 and 3) or to simulated martian surface conditions (tray 2). Data on UV radiation, cosmic radiation, and temperature were measured every 10 s and downlinked by telemetry. A parallel mission ground reference (MGR) experiment was performed on ground with a parallel set of hardware and samples under simulated space conditions. EXPOSE-E performed a successful 1.5-year mission in space.
NASA Astrophysics Data System (ADS)
Zhao, L.; Zhang, H.
2014-12-01
Anomalous cosmic rays (ACRs) carry crucial information on the coupling between the solar wind and the interstellar medium, as well as on cosmic ray modulation within the heliosphere. Due to their distinct origins and modulation processes, the spectra and abundances of ACRs are significantly different from those of galactic cosmic rays (GCRs). Since the launch of NASA's ACE spacecraft in 1997, its CRIS and SIS instruments have continuously recorded GCR and ACR intensities of several elemental heavy ions, spanning the whole of cycle 23 and the cycle 24 maximum. Here we present a statistical comparison of ACRs and GCRs observed by the ACE spacecraft and their possible relation to solar activity. While the differential flux of ACRs also exhibits an apparent anti-correlation with solar activity level, the flux in the latest prolonged solar minimum (2009) is approximately 5% lower than in the previous solar minimum (1997), and the minimum of the ACR flux appears in 2004, instead of 2001, the year with the strongest solar activity. The negative indices of the power-law spectra within the energy range from 5 to 30 MeV/nuc also vary with time: the spectra are harder during solar minimum and softer during solar maximum. The approaching solar minimum of cycle 24 is believed to resemble the Dalton or Gleissberg Minimum with extremely low solar activity (Zolotova and Ponyavin, 2014). Therefore, the different characteristics of ACRs between the coming solar minimum and the previous minimum are also of great interest. Finally, we discuss the possible solar-modulation processes responsible for the different modulation of ACRs and GCRs, especially the roles played by diffusion and drifts. This comparative analysis provides valuable insights into the physical modulation process within the heliosphere under opposite solar polarity and variable solar activity levels.
An annealed chaotic maximum neural network for bipartite subgraph problem.
Wang, Jiahai; Tang, Zheng; Wang, Ronglong
2004-04-01
In this paper, based on the maximum neural network, we propose a new parallel algorithm that can help the maximum neural network escape from local minima by including a transient chaotic neurodynamics for the bipartite subgraph problem. The goal of the bipartite subgraph problem, which is an NP-complete problem, is to remove the minimum number of edges in a given graph such that the remaining graph is bipartite. Lee et al. presented a parallel algorithm using the maximum neural model (winner-take-all neuron model) for this NP-complete problem. The maximum neural model always guarantees a valid solution and greatly reduces the search space without a burden on parameter-tuning. However, the model has a tendency to converge to a local minimum easily because it is based on the steepest descent method. By adding a negative self-feedback to the maximum neural network, we propose a new parallel algorithm that introduces richer and more flexible chaotic dynamics and can prevent the network from getting stuck at local minima. After the chaotic dynamics vanishes, the proposed algorithm is then fundamentally governed by the gradient descent dynamics and usually converges to a stable equilibrium point. The proposed algorithm has the advantages of both the maximum neural network and the chaotic neurodynamics. A large number of instances have been simulated to verify the proposed algorithm. The simulation results show that our algorithm finds the optimum or a near-optimum solution for the bipartite subgraph problem, superior to the best existing parallel algorithms.
Phylogenetically marking the limits of the genus Fusarium for post-Article 59 usage
USDA-ARS?s Scientific Manuscript database
Fusarium (Hypocreales, Nectriaceae) is one of the most important and systematically challenging groups of mycotoxigenic, plant pathogenic, and human pathogenic fungi. We conducted maximum likelihood (ML), maximum parsimony (MP) and Bayesian (B) analyses on partial nucleotide sequences of genes encod...
Hühn, M
1995-05-01
Some approaches to molecular marker-assisted linkage detection for a dominant disease-resistance trait based on a segregating F2 population are discussed. Analysis of two-point linkage is carried out by the traditional measure of maximum lod score. It depends on (1) the maximum-likelihood estimate of the recombination fraction between the marker and the disease-resistance gene locus, (2) the observed absolute frequencies, and (3) the unknown number of tested individuals. If one replaces the absolute frequencies by expressions depending on the unknown sample size and the maximum-likelihood estimate of recombination value, the conventional rule for significant linkage (maximum lod score exceeds a given linkage threshold) can be resolved for the sample size. For each sub-population used for linkage analysis [susceptible (= recessive) individuals, resistant (= dominant) individuals, complete F2] this approach gives a lower bound for the necessary number of individuals required for the detection of significant two-point linkage by the lod-score method.
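The resulting sample-size bound can be computed directly. Replacing observed counts by their expected proportions, each susceptible (homozygous recessive) F2 individual contributes an expected lod of sum_i p_i log10(p_i/q_i), where p = ((1-r)^2, 2r(1-r), r^2) are the marker-class probabilities under recombination fraction r and q = (1/4, 1/2, 1/4) under free recombination; the threshold lod >= 3 then gives N >= 3 / E[lod]. The sketch below assumes a codominant marker in coupling phase and illustrative r values.

```python
import numpy as np

# Expected lod per susceptible F2 individual, and the implied lower bound
# on sample size for declaring significant linkage at lod >= 3.
q = np.array([0.25, 0.5, 0.25])     # marker-class probabilities at r = 1/2

for r in (0.05, 0.10, 0.20, 0.30):
    p = np.array([(1 - r) ** 2, 2 * r * (1 - r), r ** 2])
    e_lod = np.sum(p * np.log10(p / q))   # expected lod per individual
    n_min = np.ceil(3.0 / e_lod)
    print(f"r = {r:.2f}: E[lod]/individual = {e_lod:.4f}, N >= {n_min:.0f}")
```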
Search For Cosmic-Ray-Induced Gamma-Ray Emission In Galaxy Clusters
Ackermann, M.
2014-04-30
Current theories predict relativistic hadronic particle populations in clusters of galaxies in addition to the already observed relativistic leptons. In these scenarios hadronic interactions give rise to neutral pions which decay into γ rays that are potentially observable with the Large Area Telescope (LAT) on board the Fermi space telescope. We present a joint likelihood analysis searching for spatially extended γ-ray emission at the locations of 50 galaxy clusters in 4 years of Fermi-LAT data under the assumption of the universal cosmic-ray model proposed by Pinzke & Pfrommer (2010). We find an excess at a significance of 2.7σ which upon closer inspection is however correlated to individual excess emission towards three galaxy clusters: Abell 400, Abell 1367 and Abell 3112. We discuss these cases in detail and conservatively attribute the emission to unmodeled background (for example, radio galaxies within the clusters). Through the combined analysis of 50 clusters we exclude hadronic injection efficiencies in simple hadronic models above 21% and establish limits on the cosmic-ray to thermal pressure ratio within the virial radius, R200, to be below 1.2-1.4% depending on the morphological classification. In addition we derive new limits on the γ-ray flux from individual clusters in our sample.
NASA Astrophysics Data System (ADS)
Drake, A. B.; Garel, T.; Wisotzki, L.; Leclercq, F.; Hashimoto, T.; Richard, J.; Bacon, R.; Blaizot, J.; Caruana, J.; Conseil, S.; Contini, T.; Guiderdoni, B.; Herenz, E. C.; Inami, H.; Lewis, J.; Mahler, G.; Marino, R. A.; Pello, R.; Schaye, J.; Verhamme, A.; Ventou, E.; Weilbacher, P. M.
2017-11-01
We present the deepest study to date of the Lyα luminosity function in a blank field using blind integral field spectroscopy from MUSE. We constructed a sample of 604 Lyα emitters (LAEs) across the redshift range 2.91 < z < 6.64 using automatic detection software in the Hubble Ultra Deep Field. The deep data cubes allowed us to calculate accurate total Lyα fluxes capturing low surface-brightness extended Lyα emission now known to be a generic property of high-redshift star-forming galaxies. We simulated realistic extended LAEs to fully characterise the selection function of our samples, and performed flux-recovery experiments to test and correct for bias in our determination of total Lyα fluxes. We find that an accurate completeness correction accounting for extended emission reveals a very steep faint-end slope of the luminosity function, α, down to luminosities of log10(L [erg s-1]) < 41.5, applying both the 1/Vmax and maximum likelihood estimators. Splitting the sample into three broad redshift bins, we see the faint-end slope steepening from α = -2.03 (+1.42, -0.07) at z ≈ 3.44 to α = -2.86 (+0.76, -∞) at z ≈ 5.48; however, no strong evolution is seen between the 68% confidence regions in L*-α parameter space. Using the Lyα line flux as a proxy for star formation activity, and integrating the observed luminosity functions, we find that the LAEs' contribution to the cosmic star formation rate density rises with redshift until it is comparable to that from continuum-selected samples by z ≈ 6. This implies that LAEs may contribute more to the star-formation activity of the early Universe than previously thought, as any additional intergalactic medium (IGM) correction would act to further boost the Lyα luminosities. Finally, assuming fiducial values for the escape of Lyα and LyC radiation, and the clumpiness of the IGM, we integrated the maximum likelihood luminosity function at 5.00
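The 1/Vmax estimator used alongside maximum likelihood is worth a small illustration: each detected source adds 1/Vmax to its luminosity bin, where Vmax is the volume within which the source would still pass the flux limit. The sketch below uses a Euclidean toy volume and an assumed flux limit in place of a real cosmology and the MUSE selection function; in a real survey Vmax is additionally capped at the surveyed volume.

```python
import numpy as np

rng = np.random.default_rng(9)

# Toy 1/Vmax luminosity-function estimate with Euclidean volumes.
f_lim = 1e-14                          # assumed flux limit (erg s^-1 cm^-2)
L = 10 ** rng.uniform(41.0, 43.0, 500) # toy luminosities (erg s^-1)
d = 10 ** rng.uniform(26.5, 27.5, 500) # toy distances (cm)
flux = L / (4 * np.pi * d**2)
det = flux > f_lim                      # apply the survey selection

d_max = np.sqrt(L[det] / (4 * np.pi * f_lim))   # max detectable distance
v_max = (4 / 3) * np.pi * d_max**3              # Euclidean Vmax

bins = np.linspace(41.0, 43.0, 9)
logL = np.log10(L[det])
phi = np.array([np.sum(1.0 / v_max[(logL >= lo) & (logL < hi)])
                for lo, hi in zip(bins[:-1], bins[1:])])
phi /= np.diff(bins)                    # number density per dex
for lo, p in zip(bins[:-1], phi):
    print(f"log L = {lo:.2f}: phi = {p:.3e} per cm^3 per dex")
```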
NASA Technical Reports Server (NTRS)
Elkin, D.; Abeyagunawardene, S.; Defazio, R.
1988-01-01
The ejection of appendages with uncertain drag characteristics presents a concern for eventual recontact. Recontact shortly after release can be prevented by avoiding ejection in a plane perpendicular to the velocity. For ejection tangential to the orbit, the likelihood of recontact within a year is high in the absence of drag and oblateness. The optimum direction of ejection of the thermal shield cable and an overestimate of the recontact probability are determined for the Cosmic Background Explorer (COBE) mission when drag, oblateness, and solar/lunar perturbations are present. The probability is small but possibly significant.
Program for Weibull Analysis of Fatigue Data
NASA Technical Reports Server (NTRS)
Krantz, Timothy L.
2005-01-01
A Fortran computer program has been written for performing statistical analyses of fatigue-test data that are assumed to be adequately represented by a two-parameter Weibull distribution. This program calculates the following: (1) Maximum-likelihood estimates of the Weibull-distribution parameters; (2) Data for contour plots of relative likelihood for the two parameters; (3) Data for contour plots of joint confidence regions; (4) Data for the profile likelihood of the Weibull-distribution parameters; (5) Data for the profile likelihood of any percentile of the distribution; and (6) Likelihood-based confidence intervals for parameters and/or percentiles of the distribution. The program can account for tests that are suspended without failure (the statistical term for such suspension of tests is "censoring"). The analytical approach followed in this program is valid for type-I censoring, which is the removal of unfailed units at pre-specified times. Confidence regions and intervals are calculated by use of the likelihood-ratio method.
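The likelihood-based treatment of suspended tests is the essential ingredient: a failure at time t contributes the Weibull density f(t), while a test suspended (type-I censored) at the cutoff contributes the survival probability S(t). A minimal Python analogue of the Fortran program's ML step is sketched below with illustrative shape, scale and censoring values.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(10)

# Simulated fatigue lives with type-I censoring at t_censor.
shape_true, scale_true, t_censor = 2.0, 1000.0, 1200.0
t = scale_true * rng.weibull(shape_true, 60)
failed = t < t_censor
t = np.minimum(t, t_censor)

def neg_loglik(theta):
    k, lam = np.exp(theta)             # positivity via log-parameterisation
    z = (t / lam) ** k
    log_f = np.log(k / lam) + (k - 1) * np.log(t / lam) - z   # failures
    log_S = -z                                                 # suspensions
    return -(log_f[failed].sum() + log_S[~failed].sum())

res = minimize(neg_loglik, x0=np.log([1.0, np.median(t)]), method="Nelder-Mead")
k_hat, lam_hat = np.exp(res.x)
print(f"true (k, lambda) = (2.0, 1000); ML = ({k_hat:.2f}, {lam_hat:.1f})")
```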
PyEvolve: a toolkit for statistical modelling of molecular evolution.
Butterfield, Andrew; Vedagiri, Vivek; Lang, Edward; Lawrence, Cath; Wakefield, Matthew J; Isaev, Alexander; Huttley, Gavin A
2004-01-05
Examining the distribution of variation has proven an extremely profitable technique in the effort to identify sequences of biological significance. Most approaches in the field, however, evaluate only the conserved portions of sequences, ignoring the biological significance of sequence differences. A suite of sophisticated likelihood-based statistical models from the field of molecular evolution provides the basis for extracting the information from the full distribution of sequence variation. The number of different problems to which phylogeny-based maximum likelihood calculations can be applied is extensive. Available software packages that can perform likelihood calculations suffer from a lack of flexibility and scalability, or employ error-prone approaches to model parameterisation. Here we describe the implementation of PyEvolve, a toolkit for the application of existing, and development of new, statistical methods for molecular evolution. We present the object architecture and design schema of PyEvolve, which includes an adaptable multi-level parallelisation schema. The approach for defining new methods is illustrated by implementing a novel dinucleotide model of substitution that includes a parameter for mutation of methylated CpGs, which required 8 lines of standard Python code to define. Benchmarking was performed using either a dinucleotide or codon substitution model applied to an alignment of BRCA1 sequences from 20 mammals, or a 10-species subset. Up to five-fold parallel performance gains over serial were recorded. Compared to leading alternative software, PyEvolve exhibited significantly better real-world performance for parameter-rich models with a large data set, reducing the time required for optimisation from approximately 10 days to approximately 6 hours. PyEvolve provides flexible functionality that can be used either for statistical modelling of molecular evolution, or the development of new methods in the field. The toolkit can be used interactively or by writing and executing scripts. The toolkit uses efficient processes for specifying the parameterisation of statistical models, and implements numerous optimisations that make highly parameter-rich likelihood functions solvable within hours on multi-CPU hardware. PyEvolve can be readily adapted in response to changing computational demands and hardware configurations to maximise performance. PyEvolve is released under the GPL and can be downloaded from http://cbis.anu.edu.au/software.
Poisson point process modeling for polyphonic music transcription.
Peeling, Paul; Li, Chung-fai; Godsill, Simon
2007-04-01
Peaks detected in the frequency domain spectrum of a musical chord are modeled as realizations of a nonhomogeneous Poisson point process. When several notes are superimposed to make a chord, the processes for individual notes combine to give another Poisson process, whose likelihood is easily computable. This avoids a data association step linking individual harmonics explicitly with detected peaks in the spectrum. The likelihood function is ideal for Bayesian inference about the unknown note frequencies in a chord. Here, maximum likelihood estimation of fundamental frequencies shows very promising performance on real polyphonic piano music recordings.
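The likelihood described here has the standard nonhomogeneous Poisson process form: for detected peak locations f_i, log L = sum_i log lambda(f_i) - integral lambda(f) df, with the intensity lambda built from bumps at the harmonics of the candidate fundamental plus a flat clutter background. The sketch below grid-searches the ML fundamental; the bump width, rates and simulated peak list are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(11)

# Intensity: Gaussian bumps at harmonics of f0 plus a flat background.
def intensity(f, f0, n_harm=8, width=3.0, amp=5.0, bg=0.002):
    lam = np.full_like(f, bg, dtype=float)
    for k in range(1, n_harm + 1):
        lam += amp * np.exp(-0.5 * ((f - k * f0) / width) ** 2) \
               / (width * np.sqrt(2 * np.pi))
    return lam

# Simulated detected peaks: six harmonics of 220 Hz with jitter, plus clutter.
peaks = np.concatenate([220.0 * np.arange(1, 7) + rng.normal(0, 2.0, 6),
                        rng.uniform(50.0, 2000.0, 3)])

f_axis = np.linspace(0.0, 2500.0, 5001)
df = f_axis[1] - f_axis[0]

def loglik(f0):
    # Poisson point-process log-likelihood: sum of log-intensities at the
    # observed peaks minus the integrated intensity (Riemann sum).
    return np.log(intensity(peaks, f0)).sum() - intensity(f_axis, f0).sum() * df

grid = np.linspace(80.0, 500.0, 841)
f0_ml = grid[np.argmax([loglik(f0) for f0 in grid])]
print(f"ML fundamental: {f0_ml:.1f} Hz (true 220 Hz)")
```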
Topological interactions in spacetimes with thick line defects
NASA Astrophysics Data System (ADS)
Moraes, Fernando; Carvalho, A. M.; Costa, Ismael V.; Oliveira, F. A.; Furtado, Claudio
2003-08-01
In this work we study the topologically induced electric self-energy and self-force on a long, straight, wire in two distinct, but similar, spacetimes: (i) the Gott-Hiscock thick cosmic string spacetime, and (ii) the spacetime of a continuous distribution of infinitely thin cosmic strings over a disk of finite radius. In each case we obtain the electric self-energy and self-force both in the internal and external regions of the defect distribution. The self-force is always repulsive, independently of the sign of the charge, and is maximum on the string’s surface, in both cases.
Gamma-ray bursts from superconducting cosmic strings at large redshifts
NASA Technical Reports Server (NTRS)
Babul, Arif; Paczynski, Bohdan; Spergel, David
1987-01-01
The relation between cusp events and gamma-ray bursts is investigated. The optical depth of the universe to X-rays and gamma-rays of various energies is calculated and discussed. The cosmological evolution of cosmic strings is examined, and the energetics and time-scales related to the cusp phenomena are estimated. It is noted that it is possible to have energy bursts with a duration of a few seconds or less from cusps at z = 1000; the maximum amount of energy associated with such an event is limited to 10^7 ergs/sq cm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freon, A.
1962-01-01
It is well known that the 27-day recurrent variation of cosmic radiation presents long periods of stability in correlation with long-lived high-activity regions of the Sun. These variations were previously studied during the last solar activity maximum (Oct. 1956 to Dec. 1958) using data from two neutron monitors located at Pic-du-Midi, France, and Port-aux-Francais, Kerguelen Island. Results are presented from a study of these recurrent variations for the Jan. 1955 to Jan. 1961 period.
Solar cosmic ray hazard to interplanetary and earth-orbital space travel
NASA Technical Reports Server (NTRS)
Yucker, W. R.
1972-01-01
A statistical treatment of the radiation hazard to astronauts from solar cosmic ray protons is reported, with the aim of determining shielding requirements for solar proton events. More recent data are incorporated into the present analysis in order to improve the accuracy of the predicted mission fluence and dose. The effects of the finite data sample are discussed. Mission fluence and dose versus shield thickness are presented for mission lengths of up to 3 years during periods of maximum and minimum solar activity; these correspond to various levels of confidence that the predicted hazard will not be exceeded.
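The kind of confidence-level calculation described can be sketched with a toy Monte Carlo: the number of solar proton events in a mission is drawn from a Poisson distribution, per-event fluences from a heavy-tailed lognormal, and the mission fluence not exceeded at a given confidence is read off as a quantile. The event rate and distribution parameters below are invented for illustration and are not taken from the report.

    # Toy Monte Carlo for a confidence-level mission fluence; parameters invented.
    import numpy as np

    rng = np.random.default_rng(0)

    def mission_fluence(years, confidence=0.95, n_trials=100_000,
                        events_per_year=6.0, ln_mu=6.0, ln_sigma=1.5):
        """Proton fluence (arbitrary units) not exceeded with the given
        confidence; events_per_year would be set higher for solar-maximum
        missions than for solar-minimum ones."""
        totals = np.empty(n_trials)
        for i in range(n_trials):
            n_events = rng.poisson(events_per_year * years)  # number of SPEs
            # per-event fluences; the lognormal tail mimics rare giant events
            totals[i] = rng.lognormal(ln_mu, ln_sigma, size=n_events).sum()
        return np.quantile(totals, confidence)

    for years in (1, 2, 3):
        print(years, mission_fluence(years))

Converting such a fluence into dose behind a given shield thickness requires folding in a proton transport calculation, which is what the fluence- and dose-versus-thickness curves summarise.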
Radio detection of cosmic-ray air showers and high-energy neutrinos
NASA Astrophysics Data System (ADS)
Schröder, Frank G.
2017-03-01
In the last fifteen years, radio detection has made it back to the list of promising techniques for extensive air showers, firstly due to the installation and successful operation of digital radio experiments and, secondly, due to the quantitative understanding of the radio emission from atmospheric particle cascades. The radio technique has an energy threshold of about 100 PeV, which coincides with the energy at which a transition from the highest-energy galactic sources to the even more energetic extragalactic cosmic rays is assumed. Thus, radio detectors are particularly useful for studying the highest-energy galactic particles and ultra-high-energy extragalactic particles of all types. Recent measurements by various antenna arrays such as LOPES, CODALEMA, AERA, LOFAR, Tunka-Rex, and others have shown that radio measurements can compete in precision with other established techniques, in particular for the arrival direction, the energy, and the position of the shower maximum, which is one of the best estimators for the composition of the primary cosmic rays. The scientific potential of the radio technique appears greatest in combination with particle detectors, because this combination of complementary detectors can significantly increase the total accuracy of air-shower measurements. This increase in accuracy is crucial for a better separation of different primary particles, such as gamma-ray photons, neutrinos, or different types of nuclei, because showers initiated by these particles differ in the average depth of the shower maximum and in the ratio between the amplitude of the radio signal and the number of muons. In addition to air-shower measurements, the radio technique can be used to measure particle cascades in dense media, which is a promising approach for the detection of ultra-high-energy neutrinos. Several pioneering experiments such as ARA, ARIANNA, and ANITA are currently searching for the radio emission of neutrino-induced particle cascades in ice. In the next years, these two sub-fields, radio detection of cascades in air and in dense media, will likely merge, because several future projects aim at the simultaneous detection of both high-energy cosmic rays and neutrinos. SKA will search for neutrino- and cosmic-ray-initiated cascades in the lunar regolith and simultaneously provide unprecedented detail for air-shower measurements. Moreover, detectors with huge exposure such as GRAND, SWORD, or EVA are being considered to study the highest-energy cosmic rays and neutrinos. This review provides an introduction to the physics of radio emission by particle cascades, an overview of the various experiments and their instrumental properties, and a summary of methods for reconstructing the most important air-shower properties from radio measurements. Finally, potential applications of the radio technique in high-energy astroparticle physics are discussed.
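The composition-separation argument, combining the radio-reconstructed depth of shower maximum with the muon number from particle detectors, can be made concrete with a toy likelihood-ratio classifier. The per-primary distributions below (means, widths, and the absence of correlation) are invented for illustration and are not taken from the review.

    # Toy proton/iron separation from (Xmax, log10 muon number); values invented.
    from scipy.stats import multivariate_normal

    # invented per-primary distributions of (Xmax [g/cm^2], log10 N_mu)
    proton = multivariate_normal(mean=[780.0, 7.00],
                                 cov=[[60.0**2, 0.0], [0.0, 0.08**2]])
    iron = multivariate_normal(mean=[680.0, 7.12],
                               cov=[[25.0**2, 0.0], [0.0, 0.05**2]])

    def is_proton_like(xmax, log_nmu):
        """Classify a shower by the sign of the log-likelihood ratio."""
        return proton.logpdf([xmax, log_nmu]) > iron.logpdf([xmax, log_nmu])

    print(is_proton_like(800.0, 6.95))  # deep shower, few muons: proton-like
    print(is_proton_like(670.0, 7.15))  # shallow shower, many muons: iron-like

The value of the combination is visible even in this toy: the muon observable adds discriminating power exactly where the broad proton and narrow iron Xmax distributions overlap.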